Software Testing

For background information on the topic as a whole, scroll to the end of this page.

Available Project Proposals

If you are interested in the general topic of Testing, or if you have your own project idea related to the topic, please contact us directly. Alternatively, you can work on one of the following concrete project proposals:

  • Case Studies in Probabilistic Testing (Marcus Gerhold)

    Supervisor: Marcus Gerhold

    Probability plays a role in many different systems: cloud computing, robot navigation algorithms, unreliable communication channels, randomized algorithms, communication protocols, etc. When such systems are tested, one checks not only whether the functional behaviour is correct, but also whether the probabilities match the specified behaviour. To do so, one collects test statistics (traces) and uses standard statistical procedures (such as hypothesis testing) to see whether the behaviour is as desired.
    The goal of this project is to carry out a case study in probabilistic testing and see how well it performs in practice.
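
    To give a flavour of the statistical step, the following sketch (in Python, using scipy) checks observed outcome frequencies against a specified distribution with a chi-squared goodness-of-fit test. The channel, its outcomes, and all probabilities are invented for illustration; a real case study would collect traces from an actual system.

      import random
      from scipy.stats import chisquare

      # Hypothetical specification: a channel delivers a message with
      # probability 0.9 and drops it with probability 0.1.
      spec = {"delivered": 0.9, "dropped": 0.1}

      def system_under_test():
          # Stand-in for the real system; it deviates slightly from the spec.
          return "delivered" if random.random() < 0.87 else "dropped"

      # Collect test statistics (a summary of observed traces).
      n = 10_000
      counts = {outcome: 0 for outcome in spec}
      for _ in range(n):
          counts[system_under_test()] += 1

      observed = [counts[outcome] for outcome in spec]
      expected = [p * n for p in spec.values()]
      stat, p_value = chisquare(observed, expected)

      # Reject the hypothesis "the probabilities match the spec" at level 0.05.
      verdict = "pass" if p_value >= 0.05 else "fail"
      print(f"chi2={stat:.2f}, p={p_value:.4f} -> {verdict}")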

  • Dialogue Under Test: Testing Conversational AI (Marcus Gerhold)

    Conversational AI systems, such as chatbots and virtual assistants, are becoming increasingly common across various applications, from customer support and education to healthcare. Ensuring their reliability is essential, as failures can lead to user frustration or misinformation.

    Model-Based Testing (MBT) is a well-established technique that offers a structured approach to testing (black-box) software systems by using formal models to define expected behaviours and generate test cases.

    The aim of this project is to explore how MBT can be applied to conversational AI to ensure consistency and accuracy in interactions, focusing on the unique challenges posed by natural language processing.

    Some example research questions are the following; a toy sketch illustrating the first two appears after the steps below.

    • Dialogue modelling: how can we create formal models that capture the flow of a conversation, handling context changes, user intent, and response generation? In particular, how can the model represent the complex dialogue structures found in real-world conversational agents?

    • Test case generation: what strategies can be used to generate meaningful test cases that simulate realistic conversations? How can we ensure that generated test cases cover key dialogue scenarios, including edge cases where the agent may fail to respond appropriately? (Natural language generation)

    • Response evaluation: how can the behaviour of a conversational agent be validated against the formal model? In particular, what metrics or techniques can be used to assess whether the agent's responses align with the expected outcomes defined in the model? (Natural language processing)

    The research will consist of the following steps:

    1. Conducting a literature review to explore existing model-based (testing) techniques that can be applied to conversational AI.
    2. Developing your own approach or methods based on the chosen research question.
    3. Formulating an answer to the research question by applying the developed methods to a conversational AI system.
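
    As a toy illustration of the dialogue modelling and test case generation questions, here is a minimal sketch in Python that models a dialogue as a finite state machine over user intents and enumerates intent sequences as abstract test cases. All states, intents, and expected responses are hypothetical; a real approach would map these to natural-language utterances and checks.

      # Transitions: (state, user_intent) -> (next_state, expected_response)
      DIALOGUE_MODEL = {
          ("start", "greet"): ("menu", "greeting"),
          ("menu", "ask_hours"): ("menu", "opening_hours"),
          ("menu", "goodbye"): ("end", "farewell"),
      }

      def generate_test_cases(model, start="start", end="end"):
          """Enumerate intent sequences from start to end, without repeating a transition."""
          def walk(state, path):
              if state == end:
                  yield path
                  return
              for (src, intent), (nxt, response) in model.items():
                  if src == state and (intent, response) not in path:
                      yield from walk(nxt, path + [(intent, response)])
          yield from walk(start, [])

      # Each test case is a sequence of (intent, expected_response) pairs.
      for case in generate_test_cases(DIALOGUE_MODEL):
          print(" -> ".join(f"{intent}/{response}" for intent, response in case))
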
  • Model-based testing of an 'unfinished' application (Petra van den Bos)

    By testing an application early during development, bugs can be discovered and fixed sooner. This makes the development process more efficient and avoids the major costs of fixing bugs in an application that is already in production. Model-based testing is a powerful technique for automatic test case generation.

    The aim of this project is to find out how to use model-based testing early in development, when some components of an application have not yet been implemented.

    You can choose one of the following research questions:

    • Modelling: how to model an 'unfinished' application? In particular, how to define where the model ends, i.e. at components that are not yet implemented? Or, alternatively: how to model mocks and stubs, the temporary placeholders used for testing unimplemented components? (A sketch of this follows the steps below.)
    • Software development process: how to integrate model-based testing into the development process, such that the processes of modelling and development strengthen each other? You may use Behaviour-Driven Development or Model-Driven Engineering as inspiration.
    • Test case generation: how to generate test cases that avoid testing the unimplemented components?

    The research will consist of the following steps:

    1. Doing a literature study to find out about existing approaches.
    2. Developing your own ideas.
    3. Formulating an answer to the research question based on the previous two steps.
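
    To illustrate the mocks-and-stubs question, the following Python sketch uses unittest.mock to stand in for a component that has not been implemented yet, so that tests of the surrounding code can already run. The order/payment scenario and all names are hypothetical.

      from unittest.mock import Mock

      class OrderService:
          """Code under test; it delegates to a not-yet-implemented component."""
          def __init__(self, payment_backend):
              self.payment_backend = payment_backend

          def place_order(self, amount):
              result = self.payment_backend.charge(amount)
              return "confirmed" if result == "ok" else "rejected"

      # Stub the unimplemented payment backend with the behaviour the model
      # assumes at the boundary, i.e. where the model "ends".
      payment_stub = Mock()
      payment_stub.charge.return_value = "ok"

      service = OrderService(payment_stub)
      assert service.place_order(42) == "confirmed"
      # The stub also records how the tested code used the missing component.
      payment_stub.charge.assert_called_once_with(42)
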
  • Embedding model-based testing in a software engineering process (Petra van den Bos)

    Model-based testing is a powerful test automation tool. However, it has not been picked up much by software engineers. There are at least two reasons:

    1. Modelling is a different skill from programming or writing unit tests. People may experience a learning curve when they create a model for model-based testing for the first time.
    2. Model-based testing is a technique from academia, i.e. it did not arise directly from software engineering practice. It is therefore not immediately clear how to integrate model-based testing into a software engineering process, e.g. an agile one.

    In a bachelor project you will investigate initial steps that could be taken to improve on one of these points. For example, you could take one of the following directions:

    1. You would investigate existing specification languages, design languages, and Domain-Specific Languages, and compare several of them to identify what would make modelling easier.
    2. You would work out a software engineering process with model-based testing as an integrated component, identify one or more stages where model-based testing would be especially useful, and find out which software engineering process is most suitable for such integration.

Contact

Background

Testing is a vital part of the software development lifecycle, ranging from small-scale unit tests for individual methods, to system-wide end-to-end tests, up to bigger-picture acceptance tests of requirements. Unlike most verification techniques, testing is often applied directly to the actual software system, and it is the most frequently applied verification technique in industry. At the same time, testing is costly and often consumes a large chunk of a project's budget. To reduce this cost, testing can be automated to varying degrees.

In FMT, we are interested in this automation. It starts with easing the creation and maintenance of test cases that can be executed automatically on the system. Model-based testing is a more advanced technique, in which test cases are generated by an algorithm from a model, e.g. a finite state machine or a labelled transition system.
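
A minimal sketch of the idea, in Python: a Mealy-style finite state machine serves as the model, test cases are random walks over it, and the implementation's outputs are checked against the model's expected outputs. The coffee-machine model and all names are invented for illustration.

    import random

    # Model: (state, input) -> (next_state, expected_output)
    MODEL = {
        ("idle", "coin"): ("paid", "accepted"),
        ("idle", "button"): ("idle", "nothing"),
        ("paid", "button"): ("idle", "coffee"),
    }

    def generate_test(model, length, start="idle"):
        """Random walk over the model, collecting (input, expected_output) pairs."""
        state, steps = start, []
        for _ in range(length):
            enabled = [(i, nxt, out) for (s, i), (nxt, out) in model.items() if s == state]
            inp, state, out = random.choice(enabled)
            steps.append((inp, out))
        return steps

    def run_test(step_function, test):
        """Execute the generated inputs and compare outputs with the model."""
        for inp, expected in test:
            actual = step_function(inp)
            if actual != expected:
                return f"fail: input {inp!r} gave {actual!r}, expected {expected!r}"
        return "pass"

    class CoffeeMachine:
        """A conforming implementation, standing in for the system under test."""
        def __init__(self):
            self.state = "idle"
        def step(self, inp):
            self.state, out = MODEL[(self.state, inp)]
            return out

    print(run_test(CoffeeMachine().step, generate_test(MODEL, length=10)))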

Possible research directions are as follows. You could directly improve test generation algorithms for one of the modelling formalisms, or compare existing algorithms. You could also investigate how to improve testing as part of the software development process, e.g. by linking requirements with (failed) test cases via Behaviour-Driven Development. Furthermore, you could develop or improve methods that measure and express how much of a system has been tested by a given set of executed tests.

In research on testing, theory and practice are relatively close, so both developing new theory and carrying out case studies are possible.

Prerequisites

Related Modules