Objectives

The EVI project will tackle fundamental research questions concerning black-box testing of reactive systems. Our goal is to remove four obstacles that have thus far prevented widespread industrial use of the black-box checking approach of Peled, Vardi & Yannakakis. In this approach, an active machine learning algorithm performs tests to construct a state-transition model of a black-box system, and a model checking algorithm subsequently verifies whether this model satisfies the given requirements. The project will address the following questions:

  1. How to incorporate risks of potential errors into the requirements? An answer will allow us to find (critical) bugs faster. We will formalise risks as annotations of scenario-based requirements in Behaviour-Driven Development.
  2. How to quantify the evidence for the correctness of a learned model, based on the primary evidence obtained through testing? An answer will not only help engineers to decide when to stop testing, but it will also help our algorithms to prioritise tests that effectively decrease the uncertainty about the correctness of (parts of) the hypothesis model.
  3. Can we extend existing model learning algorithms to a setting with nondeterminism and data parameters, and lift the evidence measures to these richer settings? An answer is crucial for dealing with many industrial systems.
  4. Can we develop evidence-driven test generation algorithms that use the evidence obtained from previous tests and risk-annotated requirements? An answer is crucial for scaling the approach to industrial use cases.

We will investigate these questions, construct first prototypes, and evaluate their effectiveness on industrial-sized benchmarks. Together, these innovations will lead to a groundbreaking evidence-driven black-box checking approach for fully automated software testing that allows software engineers to find more bugs faster.
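
To make the classical black-box checking loop described at the start of this section concrete, the sketch below shows its overall structure in Python. The system under test, the safety requirement ("no error state is ever reached"), the simplified L*-style learner, and the random conformance test are all illustrative stand-ins chosen for brevity, not artefacts of the EVI project; a realistic implementation would use an off-the-shelf learning library and model checker.

```python
import random
from collections import deque

ALPHABET = ('a', 'b')

# Black-box system under test (SUT): only membership queries are allowed.
# Internally it is a small DFA with an absorbing error state 2, but the
# checking loop below never looks inside it.
_TRANS = {(0, 'a'): 1, (0, 'b'): 0,
          (1, 'a'): 2, (1, 'b'): 0,
          (2, 'a'): 2, (2, 'b'): 2}

def sut_fails(word):
    """Run an input word on the SUT and report whether it hits the error state."""
    state = 0
    for sym in word:
        state = _TRANS[(state, sym)]
    return state == 2

class Learner:
    """Simplified L*-style learner (counterexample suffixes become columns)."""
    def __init__(self, query):
        self.query = query
        self.prefixes = ['']   # access strings: one per hypothesis state
        self.suffixes = ['']   # distinguishing suffixes (table columns)

    def row(self, prefix):
        return tuple(self.query(prefix + s) for s in self.suffixes)

    def close(self):
        # Add access strings until every one-letter extension has a known row.
        while True:
            rows = {self.row(p) for p in self.prefixes}
            new = next((p + a for p in self.prefixes for a in ALPHABET
                        if self.row(p + a) not in rows), None)
            if new is None:
                return
            self.prefixes.append(new)

    def hypothesis(self):
        # Hypothesis DFA: states are the distinct rows of the access strings.
        self.close()
        init = self.row('')
        delta = {(self.row(p), a): self.row(p + a)
                 for p in self.prefixes for a in ALPHABET}
        accepting = {self.row(p) for p in self.prefixes if self.query(p)}
        return init, delta, accepting

    def refine(self, counterexample):
        # Add every suffix of the counterexample as a new distinguishing column.
        for i in range(len(counterexample)):
            if counterexample[i:] not in self.suffixes:
                self.suffixes.append(counterexample[i:])

def find_violation(init, delta, accepting):
    # 'Model checker' for the requirement "no error is ever reached":
    # breadth-first search for a word that drives the hypothesis to an error.
    queue, seen = deque([(init, '')]), {init}
    while queue:
        state, word = queue.popleft()
        if state in accepting:
            return word
        for a in ALPHABET:
            if delta[(state, a)] not in seen:
                seen.add(delta[(state, a)])
                queue.append((delta[(state, a)], word + a))
    return None

def hyp_fails(word, init, delta, accepting):
    state = init
    for sym in word:
        state = delta[(state, sym)]
    return state in accepting

# The black-box checking loop: learn, model check, validate, refine.
learner = Learner(sut_fails)
for _ in range(20):                              # refinement budget
    init, delta, acc = learner.hypothesis()
    witness = find_violation(init, delta, acc)
    if witness is not None:
        if sut_fails(witness):                   # confirm on the real system
            print(f"requirement violated by input {witness!r}")
            break
        learner.refine(witness)                  # spurious: refine the model
        continue
    # No violation in the model: conformance-test the hypothesis against the SUT.
    tests = (''.join(random.choices(ALPHABET, k=5)) for _ in range(200))
    mismatch = next((w for w in tests
                     if sut_fails(w) != hyp_fails(w, init, delta, acc)), None)
    if mismatch is None:
        print("no violation found; hypothesis agrees with the SUT on all tests")
        break
    learner.refine(mismatch)
```

In this sketch, the random conformance test is the weakest link: questions 2 and 4 above target precisely this step, by quantifying the evidence gathered from previous tests and steering test generation towards risk-annotated requirements.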