Testing

Testing is a vital part of the software development lifecycle, ranging from small-scale unit tests for individual methods, through system-wide end-to-end tests, to bigger-picture acceptance tests of requirements. Unlike most verification techniques, testing is applied directly to the actual software system. It is the most frequently applied verification technique in industry, but it is also costly and often consumes a large part of a project's budget. To reduce this cost, testing can be automated to varying degrees.

In FMT, we are interested in this automation. This starts with easing the creation and maintenance of test cases that can be executed automatically on the system. Model-based testing is a more advanced technique in which test cases are generated by an algorithm from a model, e.g. a finite state machine or labelled transition system.
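
As a minimal, self-contained illustration (the model, its states, and its labels are made up for this example), the sketch below represents a small Mealy-machine model as a Python dictionary and derives test cases from it by a random walk; real model-based testing tools use richer models and more sophisticated selection strategies.

    import random

    # transitions[state][input] = (expected output, next state)
    transitions = {
        "idle": {"coin":   ("ok",     "paid"),
                 "button": ("error",  "idle")},
        "paid": {"button": ("coffee", "idle"),
                 "coin":   ("refund", "paid")},
    }

    def generate_test(length, state="idle"):
        """Random walk over the model; returns a list of (input, expected output) pairs."""
        test = []
        for _ in range(length):
            inp = random.choice(list(transitions[state]))
            out, state = transitions[state][inp]
            test.append((inp, out))
        return test

    # Executing the generated test amounts to sending each input to the system
    # under test and checking that the observed output equals the expected one.
    print(generate_test(4))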

Possible research directions are as follows. You could directly improve test generation algorithms for one of the modelling formalisms, or compare existing algorithms. You could also investigate how to improve testing as part of the software development process, e.g. by linking requirements with (failed) test cases via Behaviour-Driven Development. Furthermore, you could develop or improve methods that measure and express how much of a system has been tested by a given set of executed tests.

In research on testing, theory and practice are relatively close, so that both developing new theory and doing case studies are possible.

Prerequisites

Related Modules

Available Project Proposals

If you are interested in the general topic of Testing, or if you have your own project idea related to the topic, please contact us directly. Alternatively, you can also work on one of the following concrete project proposals:

  • Model-based testing an 'unfinished' application - Petra van den Bos

    By testing an application early during development, bugs can be discovered and fixed sooner. This makes the development process more efficient and prevents the major costs of bugs that are only found once the application is already in use. Model-based testing is a powerful technique for automatic test case generation.

    The aim of this project is to find out how to use model-based testing early in development, when some components of an application have not yet been implemented.

    You can choose one of the following research questions:

    • Modelling: how to model an 'unfinished' application? In particular, how to define where the model ends, i.e. at the components that have not been implemented yet? Or alternatively: how to model the temporary placeholders used for testing unimplemented components, i.e. mocks and stubs (a small stub sketch follows this list)?
    • Software development process: how to integrate model-based testing in the development process, such that the processes of modelling and development strengthen each other? You may use Behaviour-Driven Development or Model-Driven Engineering as inspiration.
    • Test case generation: how to generate test cases that avoid testing the unimplemented components?
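
    As a minimal sketch of the placeholder idea (using Python's standard unittest.mock; all component names are hypothetical), a stub can stand in for an unimplemented payment component so that the already implemented parts of the application can still be tested:

      from unittest.mock import Mock

      payment_service = Mock()                            # placeholder for the missing component
      payment_service.charge.return_value = "accepted"    # canned response (stub behaviour)

      def checkout(order_total, payments):
          """Implemented application code that depends on the unimplemented component."""
          return "confirmed" if payments.charge(order_total) == "accepted" else "rejected"

      assert checkout(10, payment_service) == "confirmed"
      payment_service.charge.assert_called_once_with(10)  # verify the interaction (mock behaviour)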

    The research will consist of the following steps:

    1. Doing a literature study to find out about existing approaches
    2. Developing your own ideas
    3. Formulating an answer to the research question based on the previous two steps
  • Test case generation via game theory - Petra van den Bos

    A recent paper establishes a connection between model-based testing and 2-player concurrent games: a translation between game strategies and test cases is given. In this research project, you will find out which game strategy synthesis techniques are suitable for generating test cases (a small sketch of the strategy view of a test case is given after the steps below).

    Steps in this research could be as follows:

    • Study the paper.
    • Perform a literature search to find game strategy synthesis techniques.
    • Select a subset of promising game strategy synthesis techniques.
    • Compare the selected game strategy synthesis techniques. This comparison could be done:
      • formally, e.g. by showing which coverage the test cases generated from game strategies achieve, or
      • experimentally, e.g. by implementing a prototype test generation tool for several of the techniques, and applying it to a case study.
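
    As a minimal, informal sketch of this correspondence (not the paper's formal definitions; the model and labels are made up), a test case can be viewed as a tester strategy: a function from the trace observed so far to the tester's next move, i.e. supply an input, observe the outputs allowed by the model, or stop with a verdict:

      def strategy(trace):
          """Tester strategy for a toy coffee-machine model (hypothetical example)."""
          if trace == ():
              return ("input", "coin")       # tester move: supply an input
          if trace == (("input", "coin"),):
              return ("observe", {"ok"})     # system move: the outputs allowed by the model
          return ("stop", "pass")            # verdict once the interaction is over

      # Unrolling the strategy against every allowed system response yields the usual
      # tree-shaped test case; a synthesis algorithm would compute such strategies
      # automatically from the game derived from the model.
      print(strategy(()), strategy((("input", "coin"),)))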
  • Extracting a testing model from existing test cases - Petra van den Bos

    Software developers are often more comfortable with writing a set of test cases than with writing a (formal, structured) model that can be used for model-based testing. However, model-based testing brings many advantages: among others, a large number and a wide variety of test cases can be generated automatically. The goal of this project is to make it easier for developers to use model-based testing. Since tests are usually already included in existing software projects, you will investigate in this research project how a model can be extracted from existing test cases.

    It may be challenging to develop a (generic) translation from the test cases of an arbitrary software project to a model. To make this easier, you may choose a software project from a particular application area, e.g. websites, that is developed using a certain framework. As a second step, after the model extraction, you could identify gaps in the extracted model, e.g. parts of the software for which there are no tests yet, and fix them by writing extra tests. Finally, after obtaining a model that is sufficiently complete, you apply model-based testing algorithms to the extracted model and compare the results with those of the existing tests.
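
    As a minimal, self-contained sketch of a first extraction step (the test traces are made up for this example), merging the action sequences of existing test cases into a prefix tree already yields a conservative model; a real approach would add state merging, as in passive automata learning, to generalise it:

      def build_prefix_tree(traces):
          """traces: action sequences observed while running the existing tests."""
          root = {}
          for trace in traces:
              node = root
              for action in trace:
                  node = node.setdefault(action, {})  # one branch per observed action
          return root

      tests = [("login", "add_item", "checkout"),
               ("login", "logout")]
      print(build_prefix_tree(tests))
      # {'login': {'add_item': {'checkout': {}}, 'logout': {}}}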

    Hence, the steps in this research project could be as follows:

    • Select a software project with existing test cases, from some application area, developed in some framework
    • Search for existing approaches on model extraction from test cases in literature
    • Develop an extraction algorithm, and ways to fix gaps
    • Compare model-based testing of the extracted model with the results of the existing test cases
  • Case Studies in Probabilistic Testing - Marcus Gerhold

    Probability plays a role in many different systems: cloud computing, robot navigation algorithms, unreliable communication channels, randomized algorithms, communication protocols, etc. When such systems are tested, one checks not only whether the functional behaviour is correct, but also whether the probabilities match the specified behaviour. To do so, one collects test statistics (traces) and uses standard statistical procedures (such as hypothesis testing) to check whether the behaviour is as desired.
    The goal of this project is to carry out a case study in probabilistic testing, and to see how well it performs in practice.
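
    As a minimal sketch of such a statistical check (assuming SciPy is available; the specified probabilities and observed counts are made up), a chi-square test can decide whether the frequencies observed in the collected traces are consistent with the specification:

      from scipy.stats import chisquare

      # Hypothetical specification: the system answers "ack" with probability 0.9
      # and "nack" with probability 0.1; 1000 traces were collected during testing.
      observed = [875, 125]
      expected = [0.9 * 1000, 0.1 * 1000]

      result = chisquare(f_obs=observed, f_exp=expected)
      # Verdict "fail" if the observations are too unlikely under the specification.
      print("fail" if result.pvalue < 0.05 else "pass", result.pvalue)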


Contact