Algorithms and Datastructures for Efficient Analysis of Systems

For background information on the topic as a whole, scroll to the end of this page.

AVAILABLE PROJECT PROPOSALS

If you are interested in the general topic, or if you have your own project idea related to it, please contact us directly. Open-ended ideas for high-performance algorithms and data structures are also welcome. Our focus is on efficient implementations and empirical evaluation of algorithms.

Alternatively, you can also work on one of the following concrete project proposals:

  • Delaying strategy iteration algorithms (Tom van Dijk)

    Supervisor: Tom van Dijk

    We want a better understanding of what makes parity games hard to solve. We do this by finding parity games that delay a parity game solving algorithm as long as possible. To construct such games, we need general principles on game graphs.

    This topic is specifically about strategy iteration algorithms. These algorithms follow a general principle: fix a strategy for one player, then let the other player find the best counterplay according to some evaluation heuristic. Then fix an improved strategy for the first player and repeat the process. To delay strategy iteration, a lot of earlier work by Oliver Friedmann and others made use of certain structures in the parity game. Nowadays we have more insights into parity games, such as more creative use of attractors (which can circumvent the features of Friedmann's countergames) and an understanding of so-called distractions and tangles. The goal of this study is to apply these new ideas and to find general principles for creating hard-to-solve parity games for modern strategy iteration algorithms.
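
    The fix-strategy/best-response loop at the heart of these algorithms can be sketched in a few lines. The sketch below illustrates the principle on a much simpler max/min game over an acyclic graph with payoffs at the leaves, not on a full parity game; the toy game and all names in it are made up for illustration.

```python
# Generic strategy iteration loop, illustrated on a toy max/min game
# played on an acyclic graph: internal nodes belong to MAX or MIN,
# leaves carry numeric payoffs.  MAX fixes a strategy, MIN plays a
# best response, then MAX switches to locally better edges; repeat
# until no switch improves the valuation.

def best_response_values(succ, owner, payoff, strategy):
    """Node values when MAX follows `strategy` and MIN minimises."""
    memo = {}

    def val(v):
        if v in payoff:                      # leaf: fixed payoff
            return payoff[v]
        if v not in memo:
            if owner[v] == "MAX":            # MAX is bound to its strategy
                memo[v] = val(strategy[v])
            else:                            # MIN picks the worst successor
                memo[v] = min(val(w) for w in succ[v])
        return memo[v]

    for v in succ:
        val(v)
    memo.update(payoff)
    return memo

def strategy_iteration(succ, owner, payoff):
    # Start from an arbitrary strategy for MAX.
    strategy = {v: succ[v][0] for v in succ if owner[v] == "MAX"}
    while True:
        vals = best_response_values(succ, owner, payoff, strategy)
        improved = False
        for v in strategy:                   # switch to improving edges
            best = max(succ[v], key=lambda w: vals[w])
            if vals[best] > vals[strategy[v]]:
                strategy[v] = best
                improved = True
        if not improved:
            return strategy, vals

# Tiny example game: "a" belongs to MAX, "b" and "c" to MIN.
succ = {"a": ["b", "c"], "b": ["x", "y"], "c": ["y", "z"]}
owner = {"a": "MAX", "b": "MIN", "c": "MIN"}
payoff = {"x": 1, "y": 3, "z": 2}
strategy, vals = strategy_iteration(succ, owner, payoff)
```

    On this toy game, the loop switches MAX's choice at "a" from "b" to "c" and then stabilises. Real strategy iteration algorithms for parity games use far more involved valuations, but follow the same improve-until-stable loop; delaying them means forcing as many improvement rounds as possible.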

  • Does your model make sense? (Ömer Sayilir, Mariëlle Stoelinga)

    Supervisor: Mariëlle Stoelinga, Ömer Sayilir

    Contact: Mariëlle Stoelinga

    Model checking is only as good as the models you make: if your models do not reflect reality, then the results of the analysis (e.g. via model checking) are not useful. Model validation is therefore an important task. It is also time consuming, and, since similar properties are verified many times, there is room for automation here.
    The purpose of this assignment is to make model validation easier by checking a number of properties fully automatically: are all locations reachable? Is the model input-enabled? Can time always progress, even in a worst-case environment?
    These questions can be checked automatically, but their formulation in terms of model queries (posed e.g. in temporal logic) depends on the model. Thus, this project aims at formulating a list of validation properties and implementing these in a model checker, e.g. Uppaal.
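
    As a sketch of what such automatic validation checks could look like, the fragment below checks two of the properties above (reachability of all locations, and input-enabledness) on an abstract transition-system model. The model representation is invented for illustration and is not the Uppaal input format.

```python
from collections import deque

# Abstract model: a set of locations, an initial location, and labelled
# edges given as a dict (location, action) -> location.

def reachable(init, edges):
    """All locations reachable from `init` via the labelled edges."""
    seen, todo = {init}, deque([init])
    while todo:
        loc = todo.popleft()
        for (src, _act), dst in edges.items():
            if src == loc and dst not in seen:
                seen.add(dst)
                todo.append(dst)
    return seen

def unreachable_locations(locations, init, edges):
    return set(locations) - reachable(init, edges)

def is_input_enabled(locations, inputs, edges):
    # Input-enabled: every location accepts every input action.
    return all((loc, act) in edges for loc in locations for act in inputs)
```

    In Uppaal, the same questions would be phrased as temporal-logic queries over the concrete model; the point of the project is to generate such queries automatically from a fixed list of validation properties.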

  • Improving Fault Tree analysis via Binary Decision Diagrams (Milan Lopuhaä-Zwakenberg, Tom van Dijk)

    Supervisors: Tom van Dijk, Milan Lopuhaä-Zwakenberg

    Contact: Milan Lopuhaä-Zwakenberg

    Fault trees are a common formalism to model how failures occur in system components and how they eventually lead to overall system failure. Analysing fault trees reveals how likely system failures are and whether the reliability of components needs to be improved.

    As fault trees can contain thousands of elements, their analysis must be efficient in both time and memory consumption. One common approach is to use binary decision diagrams (BDDs) as a data structure. A BDD is a graph that encodes a Boolean function in a compact representation. Translating a fault tree into a BDD yields an efficient representation that makes it possible to analyse large fault trees. However, the memory consumption of the BDD depends on factors such as the variable ordering and the order in which smaller BDDs are composed during the translation.
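
    A minimal sketch of this idea, assuming nothing about the Sylvan or Storm APIs: the fragment below builds a reduced BDD by Shannon expansion with hash-consing, and shows that the variable ordering alone changes the node count for the same function.

```python
# Reduced BDD via Shannon expansion with hash-consing.  The Boolean
# function is given as a Python predicate over a variable assignment;
# this is a didactic sketch, unrelated to the Sylvan or Storm APIs.

class BDD:
    TRUE, FALSE = 1, 0

    def __init__(self):
        self.nodes = {}  # (var, low, high) -> node id

    def mk(self, var, low, high):
        if low == high:               # redundant test: skip the node
            return low
        key = (var, low, high)
        if key not in self.nodes:     # hash-consing shares equal subgraphs
            self.nodes[key] = len(self.nodes) + 2
        return self.nodes[key]

    def build(self, f, order, env=None):
        env = env or {}
        if not order:                 # all variables decided: evaluate f
            return self.TRUE if f(env) else self.FALSE
        var, rest = order[0], order[1:]
        low = self.build(f, rest, {**env, var: False})
        high = self.build(f, rest, {**env, var: True})
        return self.mk(var, low, high)

def f(env):  # (x1 and y1) or (x2 and y2)
    return (env["x1"] and env["y1"]) or (env["x2"] and env["y2"])

good = BDD()
good.build(f, ["x1", "y1", "x2", "y2"])   # interleaved ordering
bad = BDD()
bad.build(f, ["x1", "x2", "y1", "y2"])    # separated ordering
```

    For functions of this shape, the interleaved ordering yields a BDD linear in the number of variable pairs, while the separated ordering is exponential; this is the classic reason why good ordering heuristics matter.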

    The goal of this research project is to improve the existing fault tree analysis. The fault tree analysis we look at is implemented in the Storm model checker and uses the Sylvan library to create and handle BDDs. Building upon Sylvan and Storm, we aim to extend and optimize the existing implementation of fault tree analysis via BDDs.

    Tasks

    Possible research tasks are:

    • Find heuristics for good variable orderings which yield small BDDs.
    • Investigate whether different orders for the composition of sub-BDDs affect the peak memory requirement.
    • Investigate whether multiple cores can be further exploited during the analysis.
    • Integrate modular approaches which only require parts of the BDD to be stored in memory at a time.
    • Bring in your own ideas.

    Requirements

    • Knowledge about fault trees and binary decision diagrams is beneficial but not required.
    • Experience with C/C++ is strongly recommended as the implementation uses these languages.
  • Safety aspects of decentralised traffic management systems (Moritz Hahn)

    Supervisor: Moritz Hahn

    Traffic management refers to the combination of measures that serve to preserve traffic capacity and improve the security, safety and reliability of the overall road transport system. These measures make use of intelligent transport systems (ITS), services and projects in day-to-day operations that impact road network performance. As an example, consider a part of a motorway on which cars are supposed to slow down during times of high traffic density or accidents [1, Chapter 7.3].

    A system implementing this requirement needs sensors to count the cars passing by and determine their velocity. It also needs a number of electronic speed limit signs, also known as matrix signs and more generally denoted as actuators. Matrix signs are placed roughly 500 meters apart from each other. Each of them shows the same speed limit as, or a lower one than, the previous sign, such that cars slow down in a controlled manner. The speed limits are calculated from the data obtained from the sensors, so as to adapt to the current traffic situation.
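
    A toy version of such a calculation, with made-up density thresholds and step size rather than any real traffic management rules:

```python
# Illustrative controller for a chain of matrix signs: derive a target
# limit from the measured traffic density at each sign, then smooth the
# limits so that no sign shows much more than its downstream neighbour.
# The thresholds and the 20 km/h step are invented for illustration.

def target_limit(vehicles_per_km):
    if vehicles_per_km > 50:
        return 50
    if vehicles_per_km > 30:
        return 70
    return 100

def stepped_limits(densities, max_step=20):
    # densities: measured density near each sign, upstream to downstream.
    limits = [target_limit(d) for d in densities]
    # Walk upstream from the end: each sign may show at most `max_step`
    # km/h more than the next sign downstream.
    for i in range(len(limits) - 2, -1, -1):
        limits[i] = min(limits[i], limits[i + 1] + max_step)
    return limits
```

    For example, stepped_limits([10, 10, 60]) yields [90, 70, 50]: the downstream congestion forces 50 km/h there, and the upstream signs step down towards it in 20 km/h increments so that drivers never face a large sudden drop.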

    In the Netherlands traffic management is performed in a centralised manner, i.e. from one of the five traffic management centres in the country. Thus, in the above scenario, there would be a central instance of the system, evaluating the input from all the sensors and controlling all the speed limit displays.

    Migrating towards a more decentralised setup has a number of advantages:

    • Costs. The central system is no longer a single point of failure but becomes at most a fallback option (if needed at all), so the requirements on it, and therefore the costs, decrease.
    • Feasibility. As increasing road utilisation demands increasingly complex traffic management, a single instance that centrally makes all decisions may become infeasible.
    • Robustness. The system will be less vulnerable to communication interruptions between the central system and decentralised components, and to unavailability of the central system itself, as the decentralised sensors and actuators act as individual, independent agents.

    In the above scenario, the individual units of speed signs and traffic sensors would each have their own controller. They would also have means of interacting with their immediate neighbours (both upstream/downstream and on adjacent lanes). Ideally, the interplay between these units and the cars driving on the road will then lead to traffic management that is at least as safe and efficient as the centralised one currently in use.

    The purpose of this project is to model a scenario such as the one above in a formal modelling language, such as Promela [2] or SMV [3]. The level of detail of the model is to be decided while performing the project. We assume that it will be necessary to model, e.g., the behaviour of each traffic control unit, physical properties of the cars in an abstract manner, assumptions about the behaviour of drivers, the number of cars entering the scenario, etc. The formal model obtained this way shall then be used to show, via model checking, that the scenario is safe, or which constraints have to be imposed to make it safe.

    References:

    1. John Fitzgerald, Peter Gorm Larsen, Paul Mukherjee, Nico Plat, Marcel Verhoef: Validated Designs for Object-oriented Systems. Springer London. ISBN 978-1-85233-881-7
    2. Gerard J. Holzmann: The SPIN Model Checker - primer and reference manual. Addison-Wesley 2004, ISBN 978-0-321-22862-8, pp. I-XII, 1-596
    3. Alessandro Cimatti, Edmund M. Clarke, Enrico Giunchiglia, Fausto Giunchiglia, Marco Pistore, Marco Roveri, Roberto Sebastiani, Armando Tacchella: NuSMV 2: An OpenSource Tool for Symbolic Model Checking. CAV 2002: 359-364
  • Symbolic LTL synthesis (Tom van Dijk)

    Supervisor: Tom van Dijk

    In recent work, one of our MSc students (Remco Abraham) realized a first prototype of an almost fully symbolic pipeline from LTL specification to AIGER controller for LTL synthesis. However, there is plenty of room for improvement, as indicated by the amount of suggested future work. The goal of the project is to look at this future work and consider the next steps.

  • Symbolic parity game algorithms (Tom van Dijk)

    Supervisor: Tom van Dijk

    We currently have a few fast parity game algorithms using BDDs. However, these algorithms are quite vulnerable to particular parity games. The goal of the project is to study different symbolic versions of parity game algorithms, such as tangle learning, tangle attractors, fixpoints with justification, and possibly quasi-polynomial algorithms.
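
    A common building block of several of these algorithms is the attractor computation. A minimal explicit-state sketch is given below; a symbolic version would represent the vertex sets as BDDs instead of Python sets. The example game is invented for illustration.

```python
# Explicit-state attractor computation.  attractor(..., player, target)
# is the set of vertices from which `player` can force every play into
# `target`: the player's own vertices need one edge into the set, while
# an opponent vertex is only absorbed when all its edges lead into it.

def attractor(vertices, owner, succ, player, target):
    attr = set(target)
    pred = {v: set() for v in vertices}
    for v in vertices:
        for w in succ[v]:
            pred[w].add(v)
    todo = list(target)
    while todo:
        w = todo.pop()
        for v in pred[w]:
            if v in attr:
                continue
            if owner[v] == player or all(u in attr for u in succ[v]):
                attr.add(v)
                todo.append(v)
    return attr

# Example game: vertex -> owning player and successor lists.
owner = {1: 0, 2: 1, 3: 0, 4: 1}
succ = {1: [2], 2: [3, 1], 3: [3], 4: [3]}
```

    In this example, player 0 attracts vertex 4 to {3} (player 1 has no other move there) but not vertex 2 (player 1 can escape to 1), while player 1 attracts the whole game to {3}.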

  • Which car should you buy? Analysis of product variations with SAT solving (at BetterBe.com) (Mariëlle Stoelinga, Tom van Dijk)

    Supervisor: Mariëlle Stoelinga, Tom van Dijk

    Contact: Mariëlle Stoelinga

    BetterBe is a company, based in Enschede, that provides web services to automotive lease companies, in particular to calculate the price of a particular lease car.
    Calculating this price is more complex than one may think, because of the large number of intricate dependencies between the various options. For instance: if you opt for a sports package, you cannot choose certain radios (reason: the leather steering wheel coat does not have the right holes for the radio buttons). There are hundreds of such dependency rules. Therefore, figuring out whether a certain set of options is feasible, and determining its price, is non-trivial.
    SAT solving seems a natural technology to tackle this challenge: SAT solvers determine whether or not a logical formula is satisfiable. Advanced tool support, such as MathSAT and Z3, is available. Existing SAT solvers, however, are good at solving large queries. What is needed here is to solve a large number of small queries.
    Thus, the goal of this project is to (1) model the price calculations as a SAT solving problem, and (2) see if SAT solving techniques can be tailored to the specific case of running many small queries.
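
    A sketch of step (1), with invented options and dependency rules, and brute-force enumeration standing in for a real SAT solver:

```python
from itertools import product

# Feasibility of an option set phrased as propositional satisfiability.
# The options and dependency rules below are invented for illustration;
# brute-force enumeration stands in for a real SAT solver.

OPTIONS = ["sports_package", "radio_basic", "radio_premium", "leather_wheel"]

def rules_ok(a):
    return (
        (not a["sports_package"] or not a["radio_premium"])   # sports excludes premium radio
        and (not a["sports_package"] or a["leather_wheel"])   # sports needs the leather wheel
        and not (a["radio_basic"] and a["radio_premium"])     # at most one radio
    )

def feasible(chosen):
    """Is there a rule-consistent configuration containing all `chosen` options?"""
    free = [o for o in OPTIONS if o not in chosen]
    for values in product([False, True], repeat=len(free)):
        assignment = dict(zip(free, values))
        assignment.update({o: True for o in chosen})
        if rules_ok(assignment):
            return True
    return False
```

    A SAT encoding would pass the same rules as clauses, plus the chosen options as unit clauses, to a solver; the challenge in this project is that many such small queries must be answered very quickly.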

BACKGROUND

One branch of theoretical computer science is the analysis of algorithms, protocols, hardware circuits and software. We want to analyse a system, such as a piece of software or a hardware circuit, for desirable properties, or to automatically generate ("synthesize") systems with desirable properties. Examples of such properties are functional correctness, the absence of forbidden states or deadlocks, and reactivity properties, such as a system being responsive or always able to restart.

To advance this goal, we study algorithms that analyse systems. For example, many properties of reactive systems are specified in the language LTL, and LTL model checking and synthesis are ultimately translated (via algorithms) into a parity game. We then solve the parity game, and the outcome answers questions such as:

  1. Does a system fulfill the specification? If not, what is a counterexample?
  2. Can a system, with an additional synthesized controller, fulfill the specification?
  3. How many possible (reachable) system states are there?

Concrete questions that can be solved with these methods are:

  1. Does a coffee machine always respond correctly when you ask for coffee?
  2. Do traffic lights ever enter a forbidden state, causing a collision?
  3. If a car stops in front of a traffic light, will the light turn green eventually?

The core challenge in this area is called state space explosion: in order to analyse a system, we usually need to consider exponentially or even doubly exponentially many different situations. This motivates a lot of research into new algorithms that reduce the number of different situations, often by inventing tricks to analyse similar states simultaneously, or by limiting a system to only "relevant" paths and abstracting away from paths that do not seem relevant.
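
One such trick can be sketched abstractly: symbolic reachability analysis computes the successors of whole sets of states at once, instead of exploring states one by one. The sketch below uses plain Python sets for illustration; in symbolic model checking the sets would be represented as BDDs, and the toy system is invented.

```python
# Set-based ("symbolic") reachability: each iteration computes the
# successors of a whole set of states at once, instead of exploring
# one state at a time.  Plain Python sets stand in for the BDDs that
# a symbolic model checker would use to represent the state sets.

def reachable_states(initial, post):
    """post(states) must return the set of all one-step successors."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        frontier = post(frontier) - reached   # keep only new states
        reached |= frontier
    return reached

# Toy system: a 3-bit counter that wraps around.
def count_post(states):
    return {(s + 1) % 8 for s in states}
```

With a BDD representation, each `post` step can cover astronomically many states at once, which is what makes symbolic methods effective against state space explosion.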

In the Formal Methods and Tools group at the UT, we mostly work with binary decision diagrams and with parity games. However, we are also interested in studying abstraction techniques such as IC3, guided refinement, partial order reduction, and so forth. Another goal is to ensure high performance of algorithms through smart engineering and coding. There are competitions between scientists, and it would be nice to win them by using smart, fast algorithms!

Prerequisites

Related Modules in the Bachelor TCS