Testing

Testing is a vital part of the software development lifecycle, ranging from small-scale unit tests for individual methods, to system-wide end-to-end tests, and up to bigger-picture acceptance tests of requirements. Unlike most verification techniques, testing is applied directly to the actual software system, and it is the most frequently applied verification technique in industry. At the same time, testing is costly and often consumes a large part of a project's budget. To reduce this cost, testing can be automated to varying degrees.

In FMT, we are interested in this automation. This starts with easing the creation and maintenance of test cases that can be executed automatically on the system. Model-based testing is a more advanced technique in which test cases are generated by an algorithm from a model, e.g. a finite state machine or labelled transition system.
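As a toy illustration of the idea (the vending-machine model and the transition-coverage goal below are made up; real model-based testing tools use much richer algorithms), the following Python sketch derives test sequences from a small finite state machine such that every transition is covered:

    # Toy model: (state, input) -> (next state, expected output)
    from collections import deque

    FSM = {
        ("idle", "coin"):   ("paid", "ok"),
        ("paid", "button"): ("idle", "coffee"),
        ("paid", "coin"):   ("paid", "refund"),
    }

    def generate_tests(fsm, initial="idle", max_depth=4):
        """Breadth-first search for input sequences that jointly cover every transition."""
        uncovered = set(fsm)
        tests = []
        queue = deque([(initial, [], [])])  # (state, inputs so far, expected outputs so far)
        while queue and uncovered:
            state, inputs, outputs = queue.popleft()
            if len(inputs) >= max_depth:
                continue
            for (src, inp), (dst, out) in fsm.items():
                if src != state:
                    continue
                new_in, new_out = inputs + [inp], outputs + [out]
                if (src, inp) in uncovered:          # this path covers a new transition
                    uncovered.discard((src, inp))
                    tests.append((new_in, new_out))  # a test case: stimuli plus expected outputs
                queue.append((dst, new_in, new_out))
        return tests

    for stimuli, expected in generate_tests(FSM):
        print("apply", stimuli, "and expect", expected)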

Possible research directions are as follows. You could directly improve test generation algorithms for one of the different modelling formalisms, or compare existing algorithms. Additionally, you could investigate how to improve testing as part of the software development process, e.g. by linking requirements with (failed) test cases via Behaviour-Driven Development. Furthermore, you could develop or improve methods that measure and express how much of a system has been covered by a given set of executed tests.
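For instance, linking requirements with test cases can be as simple as tagging each test with the requirement it covers, so that a failing test immediately points to a violated requirement. The Python sketch below illustrates this with made-up requirement IDs and a made-up stand-in system (real Behaviour-Driven Development frameworks express the same link through scenario descriptions):

    # Toy requirement traceability: requirement IDs, texts and the stand-in
    # system below are illustrative only.
    REQUIREMENTS = {
        "REQ-1": "Login succeeds with the correct password.",
        "REQ-2": "The account is locked after three wrong attempts.",
    }

    def login(password_attempts, correct="secret"):
        """Trivial stand-in for the system under test: returns the final outcome."""
        wrong = 0
        for attempt in password_attempts:
            if attempt == correct:
                return "granted"
            wrong += 1
            if wrong >= 3:
                return "locked"
        return "denied"

    TESTS = [  # (requirement covered, stimuli, expected outcome)
        ("REQ-1", ["secret"], "granted"),
        ("REQ-2", ["a", "b", "c"], "locked"),
    ]

    for req, stimuli, expected in TESTS:
        verdict = "pass" if login(stimuli) == expected else "FAIL"
        print(f"{req} ({REQUIREMENTS[req]}): {verdict}")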

In research on testing, theory and practice are relatively close, so both developing new theory and carrying out case studies are possible.

Prerequisites

Related Modules

Available Project Proposals

If you are interested in the general topic of Testing, or if you have your own project idea related to the topic, please contact us directly. Alternatively, you can also work on one of the following concrete project proposals:

  • Case Studies in Probabilistic Testing

Supervisor: Marcus Gerhold

    Probability plays a role in many different systems: cloud computing, robot navigation algorithms, unreliable communication channels, randomized algorithms, communication protocols, etc. When such systems are tested, one not only checks whether the functional behaviour is correct, but also whether the observed probabilities match the specified behaviour. To do so, one collects test statistics (traces) and uses standard statistical procedures (such as hypothesis testing) to see whether the behaviour is as desired.
    The goal of this project is to carry out a case study in probabilistic testing and see how well this approach performs in practice.
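    As a minimal illustration of the kind of statistical check involved (the stand-in system, its specified probability of 0.5 and the significance level below are made-up assumptions), observed output frequencies can be compared against the specification with a simple hypothesis test:

        import math
        import random

        SPEC_P = 0.5   # specified probability of observing "heads"
        ALPHA = 0.01   # significance level of the hypothesis test

        def system_under_test():
            """Stand-in for one execution of a probabilistic SUT (slightly off-spec)."""
            return "heads" if random.random() < 0.48 else "tails"

        def two_sided_z_test(successes, n, p):
            """p-value for H0: the SUT produces the output with probability p."""
            z = (successes / n - p) / math.sqrt(p * (1 - p) / n)
            return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

        n = 10_000
        heads = sum(system_under_test() == "heads" for _ in range(n))
        p_value = two_sided_z_test(heads, n, SPEC_P)
        print("observed frequency:", heads / n, "p-value:", p_value)
        print("verdict:", "fail" if p_value < ALPHA else "pass")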

  • Quiescence in Timed Automata

    Supervisors: Djurre van der Wal, Mariëlle Stoelinga

    Quiescence, or the absence of outputs, is an important notion in model-based testing. A system under test (SUT) must fail a test case if an output is required but the SUT does not provide one. Therefore, the absence of outputs is often labelled with a special quiescence label, delta.
    In practice, it is difficult (in general even impossible) to determine whether a required output will never be given. Therefore, quiescence is implemented by a timeout: if no output occurs within M seconds, it is assumed that no output will occur anymore.
    This makes it natural to investigate the notion of quiescence by means of timed automata: suppose that we model everything (i.e. the SUT and the requirements) as timed automata, do we then get the same results as with the delta-labelling approach? (A small sketch of timeout-based quiescence observation is given after the task list below.)
    Tasks:
    Tasks:

    • study the concept of quiescence
    • study timed automata, and the tool Uppaal (see www.uppaal.org)
    • transform delta-labelled automata into timed automata
    • compare the results 
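    As a small illustration of the timeout-based view of quiescence referred to above (the stand-in SUT, the timeout value M and the label name are made up for illustration), an observer could be sketched in Python as follows:

        import queue
        import threading
        import time

        M = 2.0          # timeout in seconds after which we conclude quiescence
        DELTA = "delta"  # the special quiescence label

        def run_sut(outputs):
            """Stand-in SUT: produces one output and then stays silent."""
            time.sleep(0.5)
            outputs.put("ack")

        def observe(outputs):
            """Return the next output, or the quiescence label after M seconds of silence."""
            try:
                return outputs.get(timeout=M)
            except queue.Empty:
                return DELTA

        outputs = queue.Queue()
        threading.Thread(target=run_sut, args=(outputs,), daemon=True).start()
        print(observe(outputs))  # "ack"
        print(observe(outputs))  # "delta": no output within M seconds, assume quiescence
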
  • Incremental Testing of VerCors

    Supervisor: Marieke Huisman

    The VerCors toolset can be used to verify memory safety and functional correctness of concurrent programs written in e.g. Java, OpenCL, OpenMP, and C. Notably, VerCors focuses on verifying different concurrency models (e.g. heterogeneous and homogeneous threading) of programs written in different languages (e.g. GPU kernels, fork/join parallelism, and deterministic parallelism). The VerCors toolset is developed at the University of Twente and is actively maintained. However, changes to VerCors that fix bugs or implement new features may easily break other existing features. The VerCors project would therefore benefit from a framework for incremental testing: do changes made to the project break other parts of the project?
    In this project you will develop an approach to systematically test program verification tools. In particular, you will think about: which features to test, how to test them, and how to integrate the testing framework into project development. Ideally, this would result in a test suite of simple example programs (some of which should be verifiable, some of which should NOT be verifiable) that would be usable for VerCors, but also for other program verification toolsets.
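    As a rough illustration of what such a harness could look like (the command name, the use of the exit code as a verdict, and the example file names below are assumptions for illustration, not the actual VerCors interface), consider:

        import subprocess
        from pathlib import Path

        # Each example program is annotated with its expected verdict.
        EXPECTED = {
            "examples/ok/parallel_sum.java":    True,   # should verify
            "examples/bad/race_condition.java": False,  # should NOT verify
        }

        def verifies(path):
            """Assumption: exit code 0 means the program verified successfully."""
            result = subprocess.run(["vercors", path], capture_output=True, text=True)
            return result.returncode == 0

        regressions = [path for path, expected in EXPECTED.items()
                       if Path(path).exists() and verifies(path) != expected]
        print("regressions:", regressions or "none")

    Run on every commit, e.g. as part of continuous integration, such a suite would flag changes that break previously working features.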
    References:

  • Test coverage for GPU kernels

    Supervisors: Ben van Werkhoven, Marieke Huisman

    GPUs natively support massively parallel computing. This is a very powerful mechanism that can achieve large speedups for many (scientific) applications. However, the parallel nature of the programs also introduces the risk of bugs.

    At the eScience Center, Kernel Tuner has been developed as a tool that supports many advanced use cases and optimization strategies to speed up the auto-tuning of GPU kernels, including automated execution of tests. At the moment, however, test cases are mainly created manually. In this project, we would like to investigate how existing testing theory can be used and adapted to the specific use case of GPU kernels. In particular, we are interested in investigating which coverage criteria would be useful to estimate the quality of test suites. It is also interesting to consider the generation of test cases based on either code inspection or user-defined properties.
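    As one made-up illustration of a possible coverage idea (the block size, the stand-in "kernel" and its deliberately planted bug below are assumptions; in practice the real GPU kernel would be executed, e.g. via Kernel Tuner's test support), problem sizes around thread-block boundaries could be exercised and checked against a reference implementation:

        import numpy as np

        BLOCK_SIZE = 256

        def reference_sum(x):
            return float(np.sum(x))

        def kernel_under_test(x):
            """Stand-in for a GPU reduction kernel with a planted bug at non-multiple sizes."""
            n = (len(x) // BLOCK_SIZE) * BLOCK_SIZE  # bug: tail elements are dropped
            return float(np.sum(x[:n]))

        # Coverage criterion: problem sizes on, just below and just above block boundaries.
        sizes = [s for k in (1, 2)
                 for s in (k * BLOCK_SIZE - 1, k * BLOCK_SIZE, k * BLOCK_SIZE + 1)]

        for n in sizes:
            data = np.random.rand(n).astype(np.float32)
            ok = np.isclose(kernel_under_test(data), reference_sum(data), rtol=1e-5)
            print(f"n={n}: {'pass' if ok else 'FAIL'}")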

Contact