Testing is a vital part of the software development lifecycle, ranging from small-scale unit tests of individual modules to bigger-picture acceptance tests of requirements. Unlike most verification techniques, testing is often applied directly to the actual software system. At the same time, testing is costly and often consumes a large share of a project's budget. To counteract this discrepancy, testing can be automated to varying degrees.

At FMT we are interested in this automation. Specifically, our research is aimed at testing based on formal models, leading to so-called model-based testing (MBT). In particular, we are interested in performance evaluations of existing MBT techniques. This research is highlighted by the relatively new field of testing probabilistic systems—systems that inherently require randomized methods to perform their tasks—and we are eager to see recent theory brought to life by means of case studies.


Available Project Proposals

If you are interested in the general topic of testing, or if you have your own project idea related to the topic, please contact us directly. Alternatively, you can work on one of the following concrete project proposals:

  • Perfectly Balanced, as All Tests Should Be — Gerhold, Zaytsev

    Supervisors: Marcus Gerhold, Vadim Zaytsev

    Test cases can be generated automatically. Fundamentally, these tests apply some inputs to the system and observe what happens next.

    Each test input is chosen with equal likelihood, which is how many commercial testing tools work in practice. However, this need not be the case! The probabilities can be tweaked: one input might occur more frequently than others, or certain behaviour might need to be tested more thoroughly than the rest. Such probabilities can be learned from past user experience or log files.

    With Weighted Attribute Grammars you can tweak the probabilities in a way that approximates user behaviour.

    See the full project description. Get in touch with your potential future supervisors to discuss details!

  • Testing of Probabilistic Robotics — Gerhold, Stoelinga

    Supervisors: Marcus Gerhold, Mariëlle Stoelinga

    Since the theory of (model-based) probabilistic testing is relatively new, we would like to see how well it works in practice. The goal of this project is therefore to carry out a case study in probabilistic testing. The subject could be a small application, such as a robot navigating probabilistically through a maze, or a randomized encryption protocol. You can also bring your own ideas to the table.

  • Weighed and Found Wanting (to Generate Tests) — Gerhold, Zaytsev

    Supervisors: Marcus Gerhold, Vadim Zaytsev

    Given a grammar of a language and a substantial codebase of programs in that language, the task is to annotate the grammar with usage statistics of all language constructs, in order to help guide the test generation process.

    See the full project description. Get in touch with your potential future supervisors to discuss details!
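To make the idea of tweaked input probabilities concrete, here is a minimal sketch of the difference between uniform and weighted test-input selection. The input names and weights are purely illustrative, not taken from any actual tool or case study:

```python
import random

# Hypothetical input alphabet of a system under test; in practice these
# would be the stimuli of the model (button presses, messages, ...).
inputs = ["login", "browse", "search", "checkout"]

# Uniform selection: every input is equally likely — the default in many tools.
uniform_choice = random.choice(inputs)

# Weighted selection: weights approximating observed user behaviour,
# e.g. derived from log files. These numbers are made up for illustration.
weights = [10, 50, 30, 10]
weighted_choice = random.choices(inputs, weights=weights, k=1)[0]

def generate_test(length, inputs, weights):
    """Generate one test case as a sequence of weighted random inputs."""
    return random.choices(inputs, weights=weights, k=length)

test_case = generate_test(5, inputs, weights)
```

Under these weights, a generated test case exercises "browse" far more often than "checkout", approximating how real users drive the system.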
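The grammar-annotation idea can likewise be sketched in a few lines: count how often each language construct occurs in a codebase, and turn the counts into relative frequencies that can bias a test-sentence generator. The "programs" and construct names below are a toy stand-in for real parsed code:

```python
from collections import Counter

# Toy "codebase": construct occurrences per program. In the actual project
# these would come from parsing real programs against the language's grammar.
programs = [
    ["if", "assign", "assign", "while", "assign"],
    ["assign", "if", "if", "assign"],
]

# Count how often each construct appears across the whole codebase.
counts = Counter(c for program in programs for c in program)
total = sum(counts.values())

# Annotate each construct with its relative frequency; these weights can
# then steer generated tests towards the constructs real programs use most.
construct_weights = {construct: n / total for construct, n in counts.items()}
```

Attaching such weights to the corresponding grammar productions yields a generator that produces test programs resembling the codebase's construct mix, rather than uniformly random sentences.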