Software Reliability

Software is everywhere! Our society therefore depends heavily on the correctness and reliability of this software: we need techniques to show that software always behaves as we expect, e.g. that it does not crash, that it computes the expected results, that it is free of bugs, and that it does not violate the security or privacy of its users.

To guarantee the correctness and reliability of software, we develop, within the FMT group, various verification techniques to reason about it.

First of all, we develop the VerCors program verifier, a tool set to verify concurrent and parallel programs: the user specifies the desired behaviour in terms of pre- and postconditions (and possibly auxiliary annotations), and the tool then tries to automatically check whether the program respects the specification. The VerCors verifier can reason about (concurrent and parallel) programs written in Java, (a subset of) C, and OpenCL.
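For readers unfamiliar with this specification style, here is a minimal, hypothetical example (the method and its contract are ours, not taken from the VerCors distribution) of JML-style pre- and postconditions on a Java method:

```java
// Hypothetical example of a JML-style contract, as used by deductive
// verifiers such as VerCors. The annotations live in comments, so the
// file remains an ordinary compilable Java program.
public class Abs {
    //@ requires x != Integer.MIN_VALUE;        // precondition
    //@ ensures \result >= 0;                   // postconditions
    //@ ensures \result == x || \result == -x;
    public static int abs(int x) {
        return x < 0 ? -x : x;
    }
}
```

The precondition excludes Integer.MIN_VALUE, whose negation overflows; without it, the postcondition `\result >= 0` would not hold.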

Second, we also develop the Refinement Framework within the Isabelle interactive theorem prover. Interactive theorem provers allow users to develop proofs (e.g. that a program is correct) that are checked by the computer to rule out errors. The Isabelle Refinement Framework allows one to first verify an abstract version of a program, and then refine it, in several correctness-preserving steps, to a concrete, efficiently executable program. The Isabelle Refinement Framework has been used for various (sequential) algorithms, which were eventually refined into Standard ML or LLVM programs.

We offer student research projects related to program verification. These projects can be on different topics:

Prerequisites

Related Courses

Available Project Proposals

If you are interested in the general topic of Software Reliability, or if you have your own project idea related to the topic, please contact us directly. Alternatively, you can also work on one of the following concrete project proposals:

  • Prove your (functional) Algorithm in Isabelle/HOL


    Supervisor: Peter Lammich
    Interactive Theorem Provers (ITPs) provide a way to develop computer checked (mathematical) proofs. In particular, they can be used to prove programs correct. In this project, you will learn to use an ITP, and apply it to prove correct a functional program or data structure of your choice (agreed on with the supervisor).

  • Verify an Imperative Algorithm in Isabelle-LLVM


    Supervisor: Peter Lammich
    Isabelle-LLVM is a verification framework for LLVM IR, based on separation logic and the interactive theorem prover Isabelle/HOL. In this project, you will verify an imperative (sequential) algorithm or data structure using Isabelle-LLVM. Prerequisite: Interactive Theorem Proving course.

  • (Combinator) Parsing in Isabelle/HOL

    Implement parser (and pretty-printing) combinators in Isabelle, make them usable, and prove interesting lemmas (e.g. that pretty-printed text is parsed back correctly).

    Prerequisite: Interactive Theorem Proving course. 

    Supervisors: Peter Lammich, Bram Kohlen
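    To give an impression of the combinator style and of the round-trip property one would prove as a lemma, here is a sketch in Java (our own illustration; the project itself would be carried out in Isabelle):

```java
import java.util.Optional;

// Minimal parser-combinator sketch in Java (illustration only; the
// project targets Isabelle). It shows the round-trip property one
// would prove as a lemma: parsing print(n) yields n again.
public class Combinators {
    // A parser consumes a prefix of the input and returns (value, rest),
    // or empty if it does not match.
    record Result<A>(A value, String rest) {}
    interface Parser<A> { Optional<Result<A>> parse(String in); }

    // Basic combinator: accept a single decimal digit.
    static Parser<Character> digit() {
        return in -> (!in.isEmpty() && Character.isDigit(in.charAt(0)))
            ? Optional.of(new Result<>(in.charAt(0), in.substring(1)))
            : Optional.empty();
    }

    // Repetition: one or more digits, folded into a non-negative integer.
    static Parser<Integer> number() {
        return in -> {
            String rest = in;
            Integer value = null;
            Optional<Result<Character>> r;
            while ((r = digit().parse(rest)).isPresent()) {
                value = (value == null ? 0 : 10 * value) + (r.get().value() - '0');
                rest = r.get().rest();
            }
            return value == null ? Optional.empty()
                                 : Optional.of(new Result<>(value, rest));
        };
    }

    // Pretty-printer; round-trip lemma: number().parse(print(n)) == (n, "").
    static String print(int n) { return Integer.toString(n); }
}
```

    In Isabelle, `number` and `print` would be defined as functions and the round-trip statement proved once and for all, rather than tested on examples.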

  • Case study: round-robin throttled blocking queue


    Supervisors: Marieke Huisman, Stefan Blom, Bob Rubbens
    BetterBe uses a throttled blocking queue to spread out work over time. This prevents overloading the external servers. Initial versions of this throttled blocking queue have shown problematic behaviour, which caused unfairness and starvation of processes. This has since been fixed. However, without formal analysis, we cannot be sure that the problematic behaviour is absent.
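    BetterBe's implementation is not public, so as a rough, hypothetical sketch of the general idea (class name and throttling policy are our own): a throttled blocking queue can combine a bounded blocking queue with a minimum interval between dequeues:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of a throttled blocking queue: a bounded queue
// whose take() enforces a minimum interval between dequeues, spreading
// work out over time. BetterBe's real implementation will differ.
public class ThrottledQueue<T> {
    private final BlockingQueue<T> queue;
    private final long minIntervalMillis;
    private long lastTake = 0;

    public ThrottledQueue(int capacity, long minIntervalMillis) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.minIntervalMillis = minIntervalMillis;
    }

    public void put(T item) throws InterruptedException {
        queue.put(item); // blocks when the queue is full
    }

    // Blocks until an item is available AND the throttle interval has passed.
    public synchronized T take() throws InterruptedException {
        long wait = lastTake + minIntervalMillis - System.currentTimeMillis();
        if (wait > 0) Thread.sleep(wait);
        T item = queue.take();
        lastTake = System.currentTimeMillis();
        return item;
    }
}
```

    Even this small sketch hints at the project's fairness questions: consumers blocked on the monitor are not served in any guaranteed order, and a slow consumer can delay all others.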
    VerCors is a deductive verifier developed at the FMT group. It can verify memory safety and functional correctness of Java, C, and GPGPU programs written in OpenCL. VerCors allows the user to specify pre- and post-conditions in the style of JML, extended with separation logic. This allows the user to annotate the code with read/write access patterns to data. VerCors can analyze these annotations and determine whether the code complies.
    For this assignment, the student will formally verify the correctness and fairness of the throttled blocking queue implementation with VerCors. To accomplish this, several subtasks must be completed:

    • Understand the throttled blocking queue implementation
    • Understand the bug and how it was fixed
    • Verify memory safety and functional correctness of the implementation with VerCors
    • Detect the bug with VerCors
    • Define in what sense the throttled blocking queue is fair
    • Formally verify fairness of the implementation using VerCors, VerCors models, and/or mCRL2.

      These subtasks are hard problems, and might require abstraction of the source code, or possibly modification or extension of the VerCors verifier. The throttled blocking queue is implemented in Java, and both versions of the source (before and after the problematic behaviour was fixed) are available for inspection. The project will partially be supervised by BetterBe staff.

      Supporting material
    • BetterBe homepage
    • VerCors homepage
    • VerCors GitHub repository
  • Formal Verification of Efficient Data Structures for Web Services to Automotive Lease Companies


    Supervisors: Stefan Blom, Marieke Huisman, Lukas Armborst
    BetterBe is a company, based in Enschede, that provides web services to automotive lease companies, in particular to calculate the price of a particular lease car.
    Calculating this price is more complex than one might think, because of the large number of intricate dependencies between the various options. For instance: if you opt for a sports package, you cannot choose certain radios (reason: the leather steering wheel cover does not have the right holes for the radio buttons). There are hundreds of such dependency rules. Therefore, figuring out whether a certain set of options is feasible, and determining its price, is non-trivial.
    Two aspects of the price calculation are very important: speed and correctness.
    Speed of this price calculation is of utmost importance: every click on a car leasing website triggers a lease calculation, which must be fast for users to have a pleasant experience. But even more important than speed is correctness. Customers may not lease cars when the calculated prices are too high, and leasing companies may make huge losses when the calculated prices are too low.
    To achieve speed of calculation, BetterBe builds its software using a number of efficient basic data structures, of which a red-black tree and a multi-hash-table are the most important. Correctness of the implementation of these basic data structures is thus very important, not only to obtain correct price calculations, but also to obtain them as fast as possible: performance may seriously suffer when red-black trees lose their balance.
    Formal verification using static analysis techniques is seen as a possible solution that may give BetterBe a rigorous proof of correctness, or a clear indication of errors.
    In practice, this means (1) formulating invariants with which the data structures have to comply, and (2) showing that the data structures do not invalidate these invariants. In an earlier MSc project and a paper, a standard red-black tree data structure was proven (functionally) correct, as a first step in this project. The current project is about taking the next step: adapting the standard data structure towards the specific data structure used at BetterBe.
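    As a small illustration of step (1) (the node layout and code are our own, not BetterBe's), the two classic red-black tree invariants (no red node has a red child, and every root-to-leaf path contains the same number of black nodes) can be phrased as an executable check:

```java
// Hypothetical sketch: an executable check of the two classic
// red-black tree invariants. A deductive verifier such as VerCors
// would instead state these as logical invariants and prove that
// insertion and deletion preserve them.
public class RedBlackCheck {
    static final boolean RED = true, BLACK = false;

    static class Node {
        int key; boolean color; Node left, right;
        Node(int key, boolean color, Node left, Node right) {
            this.key = key; this.color = color;
            this.left = left; this.right = right;
        }
    }

    // Returns the black-height of the subtree, or -1 if an invariant fails.
    static int check(Node n) {
        if (n == null) return 1;               // null leaves count as black
        if (n.color == RED) {                  // invariant 1: no red-red edge
            if ((n.left != null && n.left.color == RED) ||
                (n.right != null && n.right.color == RED)) return -1;
        }
        int lh = check(n.left), rh = check(n.right);
        if (lh == -1 || rh == -1 || lh != rh)  // invariant 2: equal black-heights
            return -1;
        return lh + (n.color == BLACK ? 1 : 0);
    }

    static boolean validRedBlackTree(Node root) {
        return root != null && root.color == BLACK && check(root) != -1;
    }
}
```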

  • Verification of (parallel) model checking algorithms


    Supervisor: Marieke Huisman
    Within the FMT group, we are developing the VerCors tool set for the verification of concurrent and distributed software. To demonstrate the usability of our verification technique, we have recently used it to verify the correctness of a parallel nested depth first search algorithm, which can be used for reachability analysis in model checking.
    In this project, you are asked to investigate how to further improve the use of the VerCors tool set for the verification of other (parallel) graph algorithms. Possible points for improvement are:

    • developing a library of suitable, reusable graph properties,
    • automatically generating graph-related annotations for the algorithm,
    • identifying the best way to represent graphs,
    • refining the verification of a high-level algorithm into a verified, efficient implementation.

      During this project you will take the following steps:
    • understand program verification using VerCors
    • identify a suitable (parallel) model checking algorithm
    • formalise the desired correctness properties
    • prove correctness of the high-level algorithm w.r.t. the desired correctness properties using the VerCors tool set
    • investigate how to turn the verification into reusable results 
  • Automatic annotation generation for barriers on GPUs: From GPUVerify to VerCors


    Supervisors: Marieke Huisman, Ömer Şakar
    With the increasing capabilities of GPUs, many sequential programs are ported to GPUs to gain performance, but verifying the correctness of GPU programs is still challenging. The tool VerCors is developed in our group to verify concurrent programs. It accepts various input languages (Java, OpenMP, OpenCL, PVL), along with annotations, and checks properties like data race freedom and functional correctness of the program. In the case of GPU programs, these annotations take the form of pre- and postconditions on kernels and barriers. Barriers are mechanisms in GPU programs to synchronize the execution of threads in one block.
    There is another tool, GPUVerify, developed at Imperial College London, that verifies only data race freedom of GPU programs. The advantage of this tool, compared to VerCors, is that it can generate annotations automatically. Since it generates annotations for barriers, we would like to investigate how we can use it to generate annotations for barriers in VerCors.
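    GPU barriers have no direct Java counterpart, but as a rough CPU-side analogy (this example is ours, not from VerCors or GPUVerify), a CyclicBarrier plays a similar role for a fixed block of threads: no thread proceeds past the barrier until all have reached it, so writes before the barrier are visible to reads after it:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Rough CPU-side analogue of a GPU barrier: every thread in the
// "block" writes its own slot, waits at the barrier, and only then
// reads a slot written by another thread.
public class BarrierDemo {
    public static int[] doubleAll(int[] input) throws Exception {
        int n = input.length;
        int[] shared = new int[n];
        int[] output = new int[n];
        CyclicBarrier barrier = new CyclicBarrier(n);
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int tid = i;
            threads[i] = new Thread(() -> {
                shared[tid] = 2 * input[tid];   // phase 1: write own slot
                try { barrier.await(); }         // all writes done before...
                catch (InterruptedException | BrokenBarrierException e) {
                    throw new RuntimeException(e);
                }
                int next = (tid + 1) % n;        // phase 2: read a neighbour's slot
                output[tid] = shared[next];
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return output;
    }
}
```

    A barrier specification in the VerCors style would state which permissions each thread gives up before the barrier and which it acquires afterwards; generating such annotations automatically is exactly what this project investigates.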
    Tasks:

    • Understand how VerCors and GPUVerify work.
    • Understand how annotations are generated for barriers in GPUVerify.
    • Discover how VerCors can benefit from step 2 to generate annotations for barriers.
    • Implement step 3 in the VerCors toolset. 
  • Empirical Study on Design by Contract Teaching


    Supervisor: Marieke Huisman
    We propose to carry out an empirical study at Dutch schools to determine the effectiveness and limitations of our approach to teaching Design by Contract and Software Verification to students in the last years of secondary school.
    We have a description of our approach and a preliminary plan, supported by an extension to the Snap! language [1,2,3] that can be found at [4]. The approach should be reviewed and the teaching plan elaborated. Also, the experiment has to be set up, and guidelines for analysing its results have to be established. We need to search for suitable institutions at which to carry out the study. Finally, we need to carry out the study, finding a way to overcome any limitations of the pandemic situation that remain by the time the research is realised.
    References:
    [1] https://snap.berkeley.edu/
    [2] https://snap.berkeley.edu/snap/snap.html
    [3] https://bjc.edc.org/
    [4] https://git.snt.utwente.nl/montire/verifiedsnap 

  • Verification of concurrent tree bitmaps


    Supervisors: Marieke Huisman, Lukas Armborst
    This project is in cooperation with the company NLnet Labs. At NLnet Labs we build software tools to analyse the state of the Internet routing infrastructure in near real-time. We know the position and destination of airplanes in the sky at any time, but which routes our Internet traffic takes is a very challenging problem that we want to tackle. To this end we are researching and building an engine that at its core has a data-structure that will be able to store and retrieve this data (centred around IPv4/v6 prefixes). The most important properties of the desired solution are that the data structure is time and space efficient, which traditionally is always a trade-off between the two.
    Currently we have implemented this data-structure as a treebitmap, you can read more about the considerations that led to this choice here. An explanatory implementation of this data-structure, as well as some implementations using other underlying data-structures, can be found here.
    We are now considering a production implementation that can be used in multi-threaded environments, with multiple writers and readers using a single data-structure concurrently. We are comparing two main directions to approach this problem:

    • A single writer on a single thread with a queuing mechanism and single-threaded readers (one thread per reader) concurrently accessing the data-structure. In this case there's only a need for concurrent read access. This would be the baseline implementation to compare other implementations to, in order to gauge the costs versus the benefits of other solutions in terms of performance and safety guarantees.
    • Multiple Writers and multiple readers (MPMC). This will need both concurrent write and read access.
    • For concurrent access we will compare both lock-based and lock-free data-structures, although we are biased towards lock-free ones (using atomics), since these are not prone to deadlocks and should perform better under contention than lock-based data-structures. Ideally speaking, we would like to implement a wait-free data-structure, but that might prove to be over-engineering.
      Note that we may also end up with a mixture of both single-writer and multiple-writer data-structures, due to the specific properties of BGP data streams. BGP data streams have episodes of bursts of concurrent writes (the convergence period on startup or after `route-refresh` commands) and episodes of low-intensity writes from many sources and medium-intensity reads from many sources. The result of mixing both solution directions may very well mean that we will have to come up with a tree-merging strategy.
      Another issue that will need to be considered, is memory reclamation. We don't think the removal of IP prefixes (and thus removal of nodes in our tree) will play a big role and we will be able to get away with stop-the-world tree pruning at strategic moments in time (comparable to the way Postgresql does this). If it turns out this would not be a good approach after all, we need to consider strategies that handle the ABA problem, like epoch-based memory reclamation.
      Project: Verification could hopefully provide a comparison between the baseline implementation and the lock-free MPMC one, and give us guarantees that our implementation is indeed deadlock-free, and maybe even starvation-free, as well as guarantees about avoiding the ABA problem.
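    The ABA problem mentioned above can be sketched in a managed language (the example is our own): pairing each reference with a version stamp makes a compare-and-swap fail when a value was swapped out and swapped back in the meantime:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Sketch of ABA avoidance with a version-stamped reference: a
// compare-and-set succeeds only if both the reference AND the stamp
// (version counter) are unchanged, so an A -> B -> A swap is detected.
public class AbaDemo {
    public static boolean demo() {
        String a = "A", b = "B";
        AtomicStampedReference<String> ref = new AtomicStampedReference<>(a, 0);

        int[] stampHolder = new int[1];
        String seen = ref.get(stampHolder);   // reader observes (A, stamp 0)
        int seenStamp = stampHolder[0];

        // Another thread swaps A -> B -> A, bumping the stamp each time.
        ref.compareAndSet(a, b, 0, 1);
        ref.compareAndSet(b, a, 1, 2);

        // A plain CAS on the value alone would now wrongly succeed; the
        // stamped CAS fails because the stamp moved from 0 to 2.
        return ref.compareAndSet(seen, "C", seenStamp, seenStamp + 1);
    }
}
```

    Epoch-based memory reclamation, mentioned above, addresses the same hazard at the level of memory reuse rather than individual references.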
  • Privacy-preserving machine learning: proven algorithms

    Supervisors: Peter Lammich, Milan Lopuhaä-Zwakenberg

    Differential Privacy (DP) is a metric to measure the privacy leakage of an algorithm handling sensitive data. Because of its strong privacy guarantees it is widely used in both industry and academia. An important application is privacy-preserving classifier training, where a data classifier must be created based on training data, without leaking too much information about the training data itself. Multiple privacy-preserving classifiers have been proposed; however, because differential privacy is hard to verify experimentally, many proposed classifiers have more privacy leakage than is desirable.

    The aim of this project is to create algorithms whose privacy-preserving properties have been confirmed by the interactive theorem prover Isabelle. The first step is to formalize DP in Isabelle; then, the student can verify whether existing algorithms indeed satisfy DP, and study how they must be adapted otherwise.
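    For intuition (this sketch is our own illustration, not part of the project), the textbook Laplace mechanism achieves ε-DP for a numeric query by adding noise whose scale is the query's sensitivity divided by ε:

```java
import java.util.Random;

// Sketch of the Laplace mechanism: to release f(D) with epsilon-DP,
// add Laplace noise with scale sensitivity/epsilon, where the
// sensitivity bounds how much f can change between neighbouring
// datasets (datasets differing in one individual).
public class LaplaceMechanism {
    private final Random rng;

    public LaplaceMechanism(long seed) { this.rng = new Random(seed); }

    // Sample Laplace(0, scale) by inverse transform sampling.
    double laplace(double scale) {
        double u = rng.nextDouble() - 0.5;    // uniform on [-0.5, 0.5)
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // Release a noisy version of trueValue under epsilon-DP.
    public double release(double trueValue, double sensitivity, double epsilon) {
        return trueValue + laplace(sensitivity / epsilon);
    }
}
```

    Verifying in Isabelle that such a mechanism really satisfies the DP inequality, including the probabilistic reasoning, is precisely the kind of statement this project would formalize.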

  • Privacy-preserving machine learning: verified algorithms

    Supervisors: Petra van den Bos, Milan Lopuhaä-Zwakenberg

    Differential Privacy (DP) is a metric to measure the privacy leakage of an algorithm handling sensitive data. Because of its strong privacy guarantees it is widely used in both industry and academia. An important application is private classifier training, where a data classifier must be created based on training data, without leaking too much information about the training data itself. In this project, the student investigates the formal verification of DP algorithms. More concretely:

    • Implement DP classifiers in formal verification tools to verify their privacy;
    • Study the difference, in implementation and verification guarantees, of static vs dynamic verification;
    • See whether these tools have the functionality to verify a range of DP classifiers.
  • Verifying correctness of VerCors rewrite rules

    Supervisors: Marieke Huisman, Lukas Armborst, Bob Rubbens
    The verification tool VerCors, developed here in the FMT group, transforms the annotated input program into something that the backend tools can check. As part of this transformation, some expressions are rewritten into equivalent ones. However, this equivalence is simply assumed, and not formally proven. This project aims to prove that equivalence, thus closing this gap in the formal proof chain.
    We propose to use the Coq interactive theorem prover for this, which has support for separation logic via the Iris plugin. So a significant part of this project is to determine how VerCors' rewrite rules should be expressed in Iris. The result should be a reusable procedure that can also be applied to new rewrite rules in the future, to facilitate a (semi-)automatic proof for them.
    The following steps would be necessary:

    • understand the VerCors rewrite rules
    • get familiar with Coq and Iris
    • translate the expressions from the VerCors rewrite rules to Iris expressions
    • prove those Iris expressions
    • develop an automatic procedure to translate, and ideally also prove, new rewrite rules

    Familiarity with interactive theorem proving is highly beneficial.

  • Scalable Verification of Embedded Systems

    Supervisors: Marieke Huisman, Philip Tasche
    Embedded systems are used in a wide variety of appliances. This includes highly safety-critical systems, such as the control software of cars or planes. Since a fault in such a system can have serious consequences, it is vital to rigorously ensure their correct behaviour. However, establishing correctness of embedded systems is hard. Formal verification is a powerful tool to establish safe behaviour for all possible inputs. However, the most common approaches for this are based on model checking, which limits their scalability to industrial designs.
    In the SAVES project, we are investigating how to use deductive verification techniques to alleviate the scalability problem. To achieve this, we use the VerCors deductive verifier, which is being developed at the University of Twente, to verify SystemC designs. Challenges we have to deal with include the fact that deductive verification works only locally. To reason about global properties, we define global invariants that relate the global state to the local state, but there are also other options to try. A paper describing the details of our approach is available upon request.
    In this Master project, you are asked to find ways to improve this approach. You will become familiar with the VerCors verifier as well as the SystemC modelling language. Possible tasks for the project include:

    • investigating different options to extend VerCors' local reasoning to global properties,
    • implementing a tool to automatically generate a global invariant from source code,
    • modelling embedded systems for integration into the VerCors input language,
    • experimentally evaluating the scalability of different approaches on case studies.

    Resources:

  • Software failure rates

    Supervisors: Matthias Volk, Mariëlle Stoelinga

    Fault trees are an important formalism to assess the reliability of systems. They have been applied very successfully to complex hardware systems, like railroads or chemical plants. However, software is becoming more and more important. Hence, software fault trees have been developed to account for software failures as well. One important research question is how to estimate the failure rate of software components.

    Research directions:

    The key idea of this project is to find good estimations for failure rates of software. Possible directions are:

    • Develop a Bayesian network capturing the dependencies between different influences on software reliability (programming language used, experience of the team, test coverage, size of the project, etc.)
    • Integrate existing tools (e.g., SonarQube) for better estimation of software failure rates.
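    As a toy sketch of the first direction (the network structure and all probabilities are invented for illustration), a single failure node with two boolean parents can be encoded as a conditional probability table and marginalized:

```java
// Toy sketch of the Bayesian-network direction: a single "failure"
// node with two boolean parents (high test coverage, experienced team).
// All probabilities below are invented for illustration only.
public class FailureNet {
    // Priors: P(highCoverage), P(experiencedTeam).
    static final double P_COV = 0.7, P_EXP = 0.6;

    // CPT: P(failure | coverage, experience), indexed [coverage][experience].
    static final double[][] P_FAIL = {
        {0.40, 0.25},   // low coverage:  inexperienced / experienced
        {0.15, 0.05},   // high coverage: inexperienced / experienced
    };

    // Marginal P(failure), summing out both parents.
    public static double pFailure() {
        double p = 0;
        for (int c = 0; c <= 1; c++)
            for (int e = 0; e <= 1; e++)
                p += (c == 1 ? P_COV : 1 - P_COV)
                   * (e == 1 ? P_EXP : 1 - P_EXP)
                   * P_FAIL[c][e];
        return p;
    }
}
```

    A realistic network would have many more influence nodes and CPT entries estimated from project data, e.g. from tools like SonarQube.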
  • Deductive verification of the JavaBIP engine

    Supervisors: Marieke Huisman and Simon Bliudze (INRIA)

    The main goal of this project is to prove the correctness of one or several encoders composing the JavaBIP coordination engine, using the VerCors deductive verifier.

    For more details, see the full project description below.


  • Deductive Verification of Concurrent Embedded Systems with VerCors and Frama-C

    Supervisors: Marieke Huisman, Philip Tasche
    Deductive verifiers - tools that can prove a program's correctness by means of logical deduction - differ in capabilities and limitations based on their underlying logics. What one verifier can prove without trouble, another might only be able to prove with a long verification time, by using a manual workaround, or not at all. The choice of verification tool is, therefore, an important question for any formal program verification approach.

    In the SAVES project, we are working on deductively verifying embedded systems written in the hardware/software co-design language SystemC. To this end, we have developed VeSUV, a verification tool based on the VerCors concurrency verifier, as well as general techniques for SystemC verification. At the same time, we have also developed, but not fully implemented, a verification tool based on the sequential C verifier Frama-C. Now, we are interested in seeing what the advantages and disadvantages of each approach are and which is best suited for our purposes.

    Your goal in this project is to finish the implementation of the Frama-C-based verification, evaluate the difficulty of verifying properties with both tools, and conclude which is better suited for our purposes. You will become familiar with SystemC, Frama-C and VerCors.

    Prior experience with deductive verification, for example the System Validation or Program Verification courses, is not required, but would be very helpful.

    The following steps are necessary for this project:

    • understand deductive verification techniques for embedded systems
    • familiarize yourself with SystemC, Frama-C and VerCors based on the provided materials
    • familiarize yourself with the current state of our SystemC-to-C transformation for verification with Frama-C
    • finish the implementation of the SystemC-to-C transformation
    • evaluate strengths and weaknesses of both approaches on case studies

    Some supporting resources:

    • VerCors documentation
    • Frama-C documentation
    • A paper on VeSUV is available upon request
    • A Bachelor thesis on SystemC verification with Frama-C is available upon request
  • VerifyThis! memcached Verification Case Study

    Supervisors: Lukas Armborst, Marieke Huisman
    At the VerifyThis Long Term Challenge, different researchers in the field of software verification all work on verifying the same project, or parts of it. In 2023, the second challenge was issued: verifying memcached, a key-value store that is accessible via a network protocol. The original C project is open-source and widely used. However, it is rather complex (e.g. it includes its own thread library), so the organisers of the challenge also provided some simplified reference implementations in Python and Java. It would be interesting to verify (parts of) this project.

    At the FMT group, we develop the deductive verifier VerCors. It can check programs in several languages, such as Java, against user-provided specifications, such as method pre- and postconditions. VerCors focuses on concurrent and distributed software, for example checking for race conditions.

    Your task is to verify (parts of) memcached with VerCors, i.e. provide the necessary specifications. There are many different aspects one can investigate, for instance the concurrent access of worker threads on the underlying memory and cache. In particular, your tasks are:

    • familiarise yourself with (the usage of) VerCors
    • familiarise yourself with the memcached project/challenge
    • decide on a suitable subproblem to verify, ideally related to concurrency
    • write the necessary specifications
    • help to improve VerCors and memcached, if needed.

    The collaborative nature of the Long Term Challenge will give you the opportunity to exchange your findings with other verification researchers.

    Resources
    https://verifythis.github.io/03memcached/
    https://github.com/memcached/memcached
    https://github.com/wadoon/bloatcache

  • Translate Specifications for Deductive Verifiers

    Supervisors: Lukas Armborst, Marieke Huisman
    Deductive verifiers can be used to prove correctness of a program. To determine whether a program is correct, the user needs to give the program and its specification to the verifier. These specifications are typically written in a specification language, often inspired by predicate logic. However, while very similar, the specification languages of different verifiers often have slightly different semantics, which makes it difficult to reuse specifications across verifiers.

    We have developed the Specification Translator, a tool to translate specifications between several deductive verifiers for Java programs. It currently supports Krakatoa, OpenJML and VerCors specifications. As a result, we are now able to reuse specifications between these three verifiers.

    In this project, we ask you to add support for another deductive verifier to the Specification Translator. There are various interesting candidates for that. You will have to study the differences in semantics of the specification languages, and see how support for the new specification language can be added to the Specification Translator.

    You will be expected to take the following steps:

    • Investigate candidates for extending the Specification Translator and decide on one
    • Understand the specification language of the new tool
    • Identify differences between the semantics of the new specification language and the languages currently supported
    • Implement support for the new specification language in the Specification Translator
  • Extending SYCL Support in VerCors

    Supervisors: Marieke Huisman, Ömer Şakar

    SYCL [1] is a high-level programming language specification that enables the use of different heterogeneous devices in a single application. It is built on C++ and targets different devices such as CPUs, GPUs, and FPGAs.

    In a previous Master's project [2], Ellen Wittingen worked on the deductive verification of SYCL programs. She built prototype verification support for a representative subset of SYCL in VerCors, focussing in particular on its basic/nd-range kernels and buffers/accessors.

    However, there is still a list of possible future work to explore. This includes adding support for SYCL's group barriers and memory fences (with different memory orders), and support for USM (Unified Shared Memory), which abstracts away different kinds/layers of host/device memory.

    This project then consists roughly of the following steps:

    • Familiarise yourself with SYCL & VerCors
    • Investigate the current state of SYCL support in VerCors
    • Identify which SYCL constructs are interesting to support. 
    • Build support for those constructs/concepts in VerCors.
    • Evaluate your work with SYCL examples.

    [1] https://www.khronos.org/sycl/
    [2] https://purl.utwente.nl/essays/97976

  • CUDA/OpenCL support for Alpinist

    Supervisors: Marieke Huisman, Ömer Şakar

    A part of the general GPU program development cycle is to start with a (semi-)naive program and incrementally optimize it. However, this process is manual and error-prone. To make it less error-prone, we can annotate the programs with pre-/postcondition-style contracts and verify them with a deductive verifier such as VerCors [1]. This implies, however, that at each optimization step the annotations might have to change as well, according to the applied optimization.

    This is where Alpinist steps in. Alpinist is an annotation-aware GPU program optimizer [2]. Alpinist applies a user-specified optimization on an (annotated) verified GPU program. It produces an optimized program where the annotations have changed according to the applied optimization, thereby preserving the provability of the program.

    Alpinist currently only works on PVL, the Prototypal Verification Language of VerCors. However, GPU programs are written in languages such as CUDA and OpenCL.

    This project proposes to figure out how the optimizations currently supported in Alpinist can also be applied to CUDA/OpenCL programs.

    This project then consists roughly of the following steps:

    • Familiarise yourself with Alpinist & VerCors
    • Investigate the current implementation of the optimizations supported by Alpinist.
    • Figure out how the current implementation can be generalized to also work on CUDA/OpenCL programs.
    • Implement the idea
    • Evaluate the idea with real-life examples.

    [1] https://vercors.ewi.utwente.nl/
    [2] https://research.utwente.nl/en/publications/alpinist-an-annotation-aware-gpu-program-optimizer

  • Is your network reliable?

    Supervisor: Georgiana Caltais

    Are you a graduate student eager to tackle real-world problems in networking and data processing? Join us for an exciting Master's thesis project focusing on Software Defined Networking (SDN) and big data applications!

    SDN-based technologies, embraced by industry leaders like Google and Intel IT, can play an important role in solving issues concerning data processing in cloud data centers, optimisation, and data delivery. SDN is an emerging approach to network programming, in which network control is decoupled from the forwarding functions. This makes the network control directly programmable and more flexible to change.

    In this project, you'll have the chance to advance formal approaches to SDN-based technologies, helping to solve critical issues concerning SDNs’ reliability. Whether you're into theory or hands-on work, there's a project suited for you:

    • If you prefer hands-on experimentation, roll up your sleeves and delve into devising algorithms for extracting and analysing SDNs based on real datasets, while gaining valuable real-world experience along the way.
    • For the theory enthusiasts, explore how mathematical frameworks can drive safety and robustness in SDNs. Furthermore, work on liability frameworks for SDNs, to answer questions such as: “What caused my network to fail?” or “Who is responsible for that packet loss?”.
    • Bring your ideas and expertise to the table as we tackle some of the most pressing challenges in networking and data processing.

    Your Master's thesis could be the key to unlocking groundbreaking advancements in SDN and big data technologies!

  • Liability in distributed systems

    Supervisor: Georgiana Caltais

    Join us for an exciting Master's thesis project where you'll be a digital detective, uncovering the mysteries behind computer system mishaps and making them right!

    Imagine a train system where accidents happen more often during heavy rain. But is rain really the culprit? Let's find out! With causal inference, we'll dive deep into the data to see if it's rain, visibility, or slippery tracks causing the chaos. By using this method, we can see if solutions like better drainage systems truly prevent accidents.

    Originally used by philosophers, causal inference is now changing the game in Computer Science. Take a computerized railway crossing with faulty signals – it's a recipe for disaster. But with formal models and rigorous analysis techniques supported by causal inference, we can make operations safer and find smarter solutions.

    Your mission? Develop and implement causal models for liability in distributed systems – systems with multiple parts working together. These models will help pinpoint what's causing errors, serving as a guide for developers to fix issues and make systems safer.

    Join us and contribute to a safer world by understanding and improving complex systems!

Contact