Algorithms and Data Structures for Efficient Analysis of Systems

For background information on the topic as a whole, scroll to the end of this page.

Available Project Proposals

If you are interested in the general topic of Systems Analysis, or if you have your own project idea related to the topic, please contact us directly. Alternatively, you can also work on one of the following concrete project proposals:

  • Applying dynamic variable ordering to fault tree analysis (Tom van Dijk, Milan Lopuhaä-Zwakenberg, Mariëlle Stoelinga)

    Supervisors: Tom van Dijk, Milan Lopuhaä-Zwakenberg, Mariëlle Stoelinga

    One approach to fault tree analysis uses binary decision diagrams (BDDs) to represent the fault trees. Recently we implemented dynamic variable ordering in the Sylvan BDD library, which is highly parallel (it uses multiple processor cores). We want to see how well this performs on BDDs obtained from fault tree analysis.

  • High-performance parallel load balancing (Tom van Dijk)

    We are interested in executing parallel workloads with many small tasks; this is called fine-grained parallelism. The task-based approach is also known as fork-join parallelism: each task spawns a number of other tasks and waits for their results in order to continue. Our work-stealing load balancer Lace implements this model in C.

    Lace has very good performance in C. Notably, the "fibonacci" benchmark, where each task just sums the results of two subtasks, is very sensitive to overhead, for example from memory accesses related to load balancing. Lace has almost no such overhead, especially compared to many other approaches. (A minimal fork-join sketch in the style of Lace is given after this list of proposals.)

    Potential research tasks:

    • We have a version of Lace ported to Rust. We are now interested in further testing the performance using new benchmarks, and potentially refining the implementation in Rust.
    • Port Lace to C++. Modern C++ has a number of interesting language features. Perhaps it is possible to integrate the high-performance queue data structure of Lace into a C++ framework?
    • Adapt the TP-PARSEC task parallelism benchmarks to work with Lace (in C), so that we can a) compare Lace systematically to the other approaches supported by TP-PARSEC, and b) compare different versions of Lace against each other, to see how proposed changes affect realistic workloads.
  • Flexible strategies for state space exploration (Arend Rensink)

    Supervisor: Arend Rensink

    Given a transition-based model for the behaviour of a system, one can explore it by depth-first or breadth-first search, but these are by no means the only possibilities. In fact, there are many options and parameters, such as heuristics, stopping criteria and bounds.

    The problem addressed by this project is to develop a taxonomy of exploration strategies and a framework in which they can be specified and executed in a flexible way. If successful, this can be integrated into the GROOVE tool for graph transformation. (A minimal sketch of such a parameterised exploration loop is given after this list of proposals.)

  • Incremental parity game solving (Tom van Dijk)

    Supervisor: Tom van Dijk

    We can solve full parity games using Oink (https://github.com/trolando/oink), but typically it takes far more time to generate the full game than to solve it. We want to see if we can solve parity games partially, during generation, ideally reusing intermediate outcomes in later attempts. This study is about figuring out how to do this with a variety of algorithms and determining which approach is most useful in practice.

    In practice, this probably means integrating Strix with Oink somehow. I have contact information for a Strix developer.

  • Small parity game strategies (Tom van Dijk)

    When solving parity games for reactive synthesis, we are interested in winning strategies that are small. Different algorithms sometimes give different results, which suggests that it may be possible to design an algorithm that is more likely to yield a small winning strategy. That is what this topic is about. We have a few open ideas: a) only consider vertices of a certain priority; b) when finding a winning region or a tangle, try to recursively find a smaller winning region or tangle; c) study a specific parity game for which we know there are different possible strategies, in order to learn more about why some algorithms result in inefficient strategies.

  • Symbolic parity game algorithms (Tom van Dijk)

    Supervisor: Tom van Dijk

    We can solve parity games using algorithms based on BDDs; in earlier years we already did this with fixpoint algorithms. However, these algorithms perform poorly on particular parity games. The goal of the project is to study different symbolic versions of parity game algorithms that are more robust, such as tangle learning, tangle attractors, fixpoints with justification, and possibly quasi-polynomial algorithms. (A minimal explicit-state attractor computation, a basic building block of several of these algorithms, is sketched after this list of proposals.)

    It is relatively straightforward to start with justification fixpoints and see how far we can go. We have "fixpoints with freezing", and I think the next targets would be "fixpoints with justification" and "priority promotion".
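
For the load balancing proposal above, the following is a minimal fork-join sketch in the style of Lace, using the "fibonacci" benchmark: each task just sums the results of two subtasks, so any overhead in the load balancer dominates the running time. It assumes a recent Lace version in which lace_start, RUN and lace_stop are the entry points; macro and function names may differ between Lace versions, so treat this as an illustration rather than a definitive example.

    #include <stdio.h>
    #include <lace.h>

    /* A Lace task with one argument. Each call spawns one subtask (which idle
       workers may steal), computes the other subtask itself, and then syncs. */
    TASK_1(long, fib, long, n)
    {
        if (n < 2) return n;
        SPAWN(fib, n - 1);           /* make fib(n-1) available for stealing   */
        long b = CALL(fib, n - 2);   /* compute fib(n-2) in the current worker */
        long a = SYNC(fib);          /* wait for fib(n-1), or run it locally   */
        return a + b;
    }

    int main(void)
    {
        lace_start(0, 0);                  /* 0 workers = auto-detect, default queue size */
        long result = RUN(fib, 40);        /* run the root task                           */
        printf("fib(40) = %ld\n", result);
        lace_stop();
        return 0;
    }

The high-performance task queue mentioned in the C++ sub-project above is exactly what makes SPAWN and SYNC cheap enough for such tiny tasks.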
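
For the state space exploration proposal above, the following is a minimal sketch of an exploration loop in which the strategy is a parameter. Here the strategy is reduced to a single choice, taking states from the front or the back of the frontier (breadth-first versus depth-first), plus a bound on the number of visited states; the toy transition system and all names are made up for illustration. A real framework would generate successors from a model and would also parameterise heuristics and stopping criteria.

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_STATES 64

    /* Toy transition system: every state has two successors. */
    static int successors(int s, int *out)
    {
        out[0] = (2 * s + 1) % MAX_STATES;
        out[1] = (3 * s + 2) % MAX_STATES;
        return 2;
    }

    /* The "strategy": where to take the next state from the frontier. */
    typedef enum { TAKE_FIRST /* breadth-first */, TAKE_LAST /* depth-first */ } strategy_t;

    static void explore(int initial, strategy_t strategy, int bound)
    {
        int frontier[MAX_STATES];
        bool seen[MAX_STATES] = { false };
        int head = 0, tail = 0;               /* frontier occupies frontier[head..tail) */
        frontier[tail++] = initial;
        seen[initial] = true;

        for (int visited = 0; head < tail && visited < bound; visited++) {
            int s = (strategy == TAKE_FIRST) ? frontier[head++] : frontier[--tail];
            printf("visit %d\n", s);

            int succ[2];
            int n = successors(s, succ);
            for (int i = 0; i < n; i++) {
                if (!seen[succ[i]]) {         /* each state enters the frontier once */
                    seen[succ[i]] = true;
                    frontier[tail++] = succ[i];
                }
            }
        }
    }

    int main(void)
    {
        explore(0, TAKE_FIRST, 10);   /* bounded breadth-first exploration */
        explore(0, TAKE_LAST, 10);    /* bounded depth-first exploration   */
        return 0;
    }

Heuristic search fits the same loop by replacing the frontier array with a priority queue.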
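
For the parity game proposals above, an attractor computation is a basic building block of Zielonka-style and tangle-based algorithms: it collects all vertices from which a player can force the play into a given target set. Below is a minimal explicit-state sketch on a made-up toy game graph; the symbolic algorithms of the last proposal perform the same fixpoint computation on sets represented as BDDs.

    #include <stdio.h>
    #include <stdbool.h>

    #define V 6

    /* Toy game graph: owner[v] is the player (0 or 1) who moves in v,
       succ[v] lists the successors of v, terminated by -1. */
    static const int owner[V]   = { 0, 1, 0, 1, 0, 1 };
    static const int succ[V][3] = {
        { 1, 2, -1 }, { 3, -1, -1 }, { 3, 4, -1 },
        { 5, -1, -1 }, { 5, -1, -1 }, { 0, -1, -1 }
    };

    /* Extend the set attr[] to the attractor for `player`: repeatedly add every
       vertex from which `player` can force the play into the current set. */
    static void attractor(int player, bool attr[V])
    {
        bool changed = true;
        while (changed) {
            changed = false;
            for (int v = 0; v < V; v++) {
                if (attr[v]) continue;
                bool some = false, all = true;
                for (int i = 0; i < 3 && succ[v][i] != -1; i++) {
                    if (attr[succ[v][i]]) some = true; else all = false;
                }
                /* own vertex: one move into the set suffices;
                   opponent vertex: all moves must lead into the set */
                if ((owner[v] == player && some) || (owner[v] != player && all)) {
                    attr[v] = true;
                    changed = true;
                }
            }
        }
    }

    int main(void)
    {
        bool attr[V] = { false };
        attr[5] = true;                         /* target set {5} */
        attractor(0, attr);
        printf("attractor of {5} for player 0:");
        for (int v = 0; v < V; v++) if (attr[v]) printf(" %d", v);
        printf("\n");
        return 0;
    }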

Contact

Background

One branch of theoretical computer science is the analysis of algorithms, protocols, hardware circuits and software. We want to analyse a system such as a piece of software or a hardware circuit for desirable properties, or to automatically generate systems with desirable properties. Examples of properties are functional correctness, the absence of forbidden states or deadlocks, or reactivity properties, such as the system being responsive.

Many of these questions boil down to running algorithms on graphs. For example, to check whether some system fulfills a reactivity property specified in LTL, we construct an automaton (essentially the product of the system with an automaton for the negated property) such that every accepting run of this automaton proves that the system is not correct. That is, if the language of the automaton is empty, then the system is correct.
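
For safety properties this emptiness check is plain graph reachability: the system violates the property exactly when an accepting (error) state of the product automaton is reachable; for full LTL one instead searches for a reachable accepting cycle, for example with nested depth-first search. The following minimal sketch shows the reachability variant on a made-up explicit product graph; in a real model checker the graph would be generated on the fly from the model.

    #include <stdio.h>
    #include <stdbool.h>

    #define N 5

    /* Made-up product automaton: states 0..N-1, at most two outgoing edges per
       state (-1 = no edge), and a marking of accepting (error) states. */
    static const int  edges[N][2]  = { { 1, 2 }, { 2, -1 }, { 3, -1 }, { 4, -1 }, { -1, -1 } };
    static const bool accepting[N] = { false, false, false, false, true };

    /* Depth-first search for a reachable accepting state. If one is found, the
       language is non-empty and the path to it is a counterexample. */
    static bool reach_accepting(int s, bool *visited)
    {
        if (accepting[s]) return true;
        visited[s] = true;
        for (int i = 0; i < 2; i++) {
            int t = edges[s][i];
            if (t >= 0 && !visited[t] && reach_accepting(t, visited)) return true;
        }
        return false;
    }

    int main(void)
    {
        bool visited[N] = { false };
        if (reach_accepting(0, visited)) printf("property violated: counterexample exists\n");
        else                             printf("language empty: property holds\n");
        return 0;
    }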

Of course, this is only the beginning, because if this was all there is to it, then there would not be much research to do. The core challenge in this area is called state space explosion: in order to analyse a system, we usually need to consider exponentially or even doubly exponentially many different situations. This motivates a lot of research into new algorithms that reduce the number of situations to consider, often by inventing tricks to analyse similar states simultaneously, or by limiting the analysis to "relevant" runs and abstracting away from runs that do not seem relevant. We are constantly trying to find new tricks to push the limits of what we can compute efficiently.

In the Formal Methods and Tools group at the UT, we work with many techniques, including binary decision diagrams and parity games. However, we are also interested in studying abstraction techniques such as IC3, guided refinement, partial order reduction, and so forth. Another goal is to ensure that our algorithms perform well in practice, through smart engineering and coding. There are competitions between scientists, and it would be nice if we could win them by using smart algorithms that are fast!

Other topics

Open ideas for high-performance algorithms and data structures are also welcome. We have a focus on efficient implementations and empirical evaluation of algorithms.

Prerequisites

Related Modules in the Bachelor TCS