Research

Formal Methods and Tools - contributing to the design of reliable software- and data-intensive control systems at the UT.

Formal methods are techniques for the mathematically sound construction and analysis of software- and data-intensive control systems. The central goal of the FMT group is to increase the reliability of the software and systems that we count on, as individuals and as society. We primarily target complex concurrent ICT systems, embedded in a technological context or in a distributed environment.

We address several important scientific challenges. The first challenge is how to provide guarantees about a wide and diverse range of system properties, such as safety, functional correctness, performance and energy usage, and the risks and costs associated with the system. A second challenge is to justify how these guarantees can be provided. We study a broad suite of technologies to guarantee or demonstrate reliability, performance, energy consumption, and related properties, ranging from theorem proving, model checking, and quantitative evaluation to systematic testing, prediction, synthesis, and optimization. The third challenge is to handle the ever-increasing complexity of software systems. To this end we develop new theory on concurrency, automata, graphs, logic, languages, and algorithms to scale automated analysis and synthesis methods. Moreover, we investigate techniques in empirical software engineering that can address this challenge.

We distinguish several focus areas:

  1. Quantitative modelling and analysis for cyber-physical and socio-technical systems. Many key system requirements - such as availability, energy consumption, risk, and cost - are quantitative in nature. FMT works on analyzing these quantitative properties, addressing both their theoretical foundations and practical application, in particular via stochastic and timed model checking techniques. In industrial contexts, we translate domain-specific models into rigorous mathematical representations, supporting their quantitative analysis. To manage complexity, we develop a large variety of techniques - such as modularity, abstraction, and rewriting - that enhance scalability and flexibility. We also actively contribute to the foundations of these quantitative methods, developing new algorithms and techniques that improve their precision, scalability, and expressiveness. In addition, we ensure the correctness of these algorithms through theorem proving; advance synthesis methods for deriving solutions that balance and trade off multiple quantitative objectives; and incorporate advanced AI-based techniques, including model learning and safe reinforcement learning. Together, these efforts support decision-making in various domains, including energy optimization, trade-offs between safety and security risks, and predictive maintenance.
  2. Design, testing and verification of (concurrent) software.
    Software is what makes ICT systems work. The goal of FMT is to develop techniques that help to provide guarantees about the correctness of software with respect to its specification. We use different techniques for providing these guarantees, such as systematic testing, interactive theorem proving, and deductive verification based on program logics. We look at the problem from different angles: in some cases we start with a high-level algorithm or model and refine it into an efficient and correct implementation; in other cases we take an existing implementation and provide guarantees about it, either by adding function-level specifications that describe the intended behaviour or by comparing its behaviour to an abstract model. We are particularly interested in the behaviour of concurrent and distributed software, where manual analysis quickly becomes infeasible. An important aspect of our work is to make the analysis process as automated as possible, and to ensure that it integrates seamlessly into the development cycle. Together, these activities contribute to the certification of safety-critical applications, for instance in embedded software, and to the increased security of critical infrastructure and the Internet of Things.
  3. High-performance methods, algorithms, and data structures for model checking and model transformation. The performance of tools is crucial for handling large and complex models. Modern hardware, such as the now-ubiquitous multi-core and distributed systems, offers great opportunities for performance gains, but exploiting it increasingly depends on parallelism; hence we study how to design algorithms and data structures that scale effectively in such environments. Our work includes lock-free and wait-free techniques, efficient implementations of binary decision diagrams, scalable load-balancing queues, and other performance-critical components. These advances support the practical use of verification and analysis tools in demanding application domains.
  4. Empirical study and improvement of software development practice. We aim to understand how developers engineer software in practical scenarios: the tools, methods, and techniques they use, how they use them, and the problems they face. We conduct empirical studies examining artefacts produced in the development of open-source as well as industrial software systems, and the behaviour of the systems themselves. The understanding we gain from these empirical studies fuels the creation of new solutions that aim to improve both the process and the products of software engineering. These solutions tackle problems related both to the development of software systems, e.g., making software easier to write, understand, and evolve, and to the runtime behaviour of these systems, such as understanding and improving their execution time and energy efficiency.
  5. New technologies to improve programming education. Though the core of good education is teacher-student and student-student interaction, this can be supported by tools that help the teacher with many of their tasks, ranging from forming teams under well-defined constraints to automating the more routine parts of assessment. However, it is important that such innovations are actually based on evidence that they improve educational practice. We therefore engage in theoretical research as well as empirical studies, in collaboration with the social science department, to establish a firm foundation for technology-based educational innovations.
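The stochastic model checking mentioned under focus area 1 often reduces to computing reachability probabilities in a Markov model. A minimal sketch of this, on a hypothetical four-state discrete-time Markov chain (the chain, state names, and function are illustrative only; real model checkers use symbolic or sparse representations and more refined algorithms):

```python
def reachability(transitions, goal, iterations=1000, tol=1e-10):
    """Probability of eventually reaching a state in `goal` in a DTMC,
    computed by iterating p(s) = sum over successors t of P(s, t) * p(t)."""
    prob = {s: (1.0 if s in goal else 0.0) for s in transitions}
    for _ in range(iterations):
        new = {}
        for s, succ in transitions.items():
            # Goal states reach the goal with probability 1; other states
            # average their successors' values, weighted by transition prob.
            new[s] = 1.0 if s in goal else sum(p * prob[t] for t, p in succ.items())
        if max(abs(new[s] - prob[s]) for s in prob) < tol:
            prob = new
            break
        prob = new
    return prob

# Toy chain: s0 moves on with probability 0.9 or fails; s1 succeeds or retries.
chain = {
    "s0":   {"s1": 0.9, "fail": 0.1},
    "s1":   {"goal": 0.5, "s0": 0.5},
    "goal": {"goal": 1.0},
    "fail": {"fail": 1.0},
}
probs = reachability(chain, {"goal"})
```

Value iteration of this kind underlies many quantitative analyses, e.g., availability or risk queries phrased as "with what probability is a good (or bad) state eventually reached".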
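The function-level specifications of focus area 2 can be illustrated in design-by-contract style. The decorator below is a hypothetical sketch, not any particular verification tool: it merely checks a pre- and postcondition at run time, whereas deductive verifiers prove such contracts statically for all inputs.

```python
def contract(pre, post):
    """Hypothetical decorator: check `pre` on the arguments and `post`
    on (result, arguments) at run time. A deductive verifier would
    discharge these conditions once and for all, without executing."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert post(result, *args), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return wrap

# Specification: the input list must be sorted; the result is either -1
# or a valid index at which the target value occurs.
@contract(
    pre=lambda xs, t: all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)),
    post=lambda r, xs, t: r == -1 or (0 <= r < len(xs) and xs[r] == t),
)
def binary_search(xs, t):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == t:
            return mid
        if xs[mid] < t:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Note what the postcondition does not say: it permits -1 even when the target is present, so a stronger contract would also be provable here. Choosing how strong a specification to write is itself part of the verification effort.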
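For focus area 3, a binary decision diagram can be sketched in a few lines. This toy (all names are illustrative) shows the two ingredients that performance work targets: a hash-consing "unique table" so equal subgraphs are shared, and a memoised `apply` operation; multi-core implementations make exactly these tables lock-free and concurrent.

```python
TRUE, FALSE = True, False   # terminal nodes; non-terminals are (var, lo, hi)
_unique = {}                # hash-consing table: one node per (var, lo, hi)

def node(var_name, lo, hi):
    """Create a node, applying the BDD reduction rule for redundant tests."""
    if lo is hi:                       # both branches identical: skip the test
        return lo
    key = (var_name, id(lo), id(hi))   # ids are stable: nodes live in _unique
    return _unique.setdefault(key, (var_name, lo, hi))

def var(v):
    """The BDD of the single variable v."""
    return node(v, FALSE, TRUE)

def apply_op(op, a, b, memo=None):
    """Combine two BDDs with a boolean operator, memoised on node identity."""
    memo = {} if memo is None else memo
    if isinstance(a, bool) and isinstance(b, bool):
        return op(a, b)
    key = (id(a), id(b))
    if key in memo:
        return memo[key]
    # Split on the smallest variable occurring at either root.
    va = a[0] if not isinstance(a, bool) else None
    vb = b[0] if not isinstance(b, bool) else None
    v = min(w for w in (va, vb) if w is not None)
    a_lo, a_hi = (a[1], a[2]) if va == v else (a, a)
    b_lo, b_hi = (b[1], b[2]) if vb == v else (b, b)
    res = node(v, apply_op(op, a_lo, b_lo, memo), apply_op(op, a_hi, b_hi, memo))
    memo[key] = res
    return res

def evaluate(bdd, env):
    """Follow the decision diagram under a variable assignment."""
    while not isinstance(bdd, bool):
        v, lo, hi = bdd
        bdd = hi if env[v] else lo
    return bdd

# Usage: build x AND y symbolically, then evaluate it.
x, y = var("x"), var("y")
conj = apply_op(lambda p, q: p and q, x, y)
```

Because of hash consing, building the same function twice yields the identical node, so equivalence checking is a pointer comparison; that property is what makes BDDs effective for symbolic model checking.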

A common approach in software technology, connecting many of our research projects, is the use of domain-specific models, model transformation, and abstractions. Our approach links models for various system aspects at several abstraction levels. Another unifying aspect is the software tools we construct to benchmark our algorithms in competitions and challenges, and to validate new methods on realistic case studies.

Our results are disseminated both via publications in journals and peer-reviewed conferences, and via open-source tools & artefacts.