Wytse Oortwijn is a PhD student in the research group Formal Methods and Tools. His supervisor is prof.dr. M. Huisman from the faculty of Electrical Engineering, Mathematics and Computer Science.
Software is deeply integrated into modern society, not only for small conveniences and entertainment, but also for safety-critical tasks. As we increasingly depend on software in our everyday lives, it becomes ever more important that these software systems are reliable and correct with respect to their intended behaviour. However, providing any guarantees on reliability and correctness is very challenging, as software is developed by humans, who by nature make mistakes. This challenge is further complicated by the increasing demand for parallelism and concurrency, to match developments in processing hardware. Concurrency makes software even more error-prone, as the concurrent interactions between different subsystems typically constitute far too many behaviours for programmers to comprehend. Software developers therefore need formal techniques that help them understand all possible system behaviours, so as to ensure the reliability and correctness of their software.
This thesis contributes towards such formal techniques, and focusses in particular on deductive verification: a software verification approach based on mathematical logic. In deductive verification, the intended behaviour of software is specified in a program logic, allowing the use of (semi-)automated tools to verify whether the code implementation adheres to this specified behaviour, in every possible scenario.
More specifically, the work in this thesis builds on Concurrent Separation Logic (CSL), a program logic that specialises in reasoning about concurrent programs, targeting properties of functional correctness and safe memory usage. In recent years, there has been tremendous progress on both the theory and practice of CSL-based program verification. Nevertheless, many open challenges remain. This thesis focusses on one such challenge in particular, namely on how to verify global functional properties of real-world concurrent software, in a sound and practical manner.
This thesis consists of three parts, each of which addresses the above challenge from a slightly different perspective. In Part I, we investigate how CSL can be used to mechanically verify the correctness of parallel model checking algorithms. Model checking is an alternative approach to verifying software, which relies on exhaustively searching through all possible system behaviours to check whether they satisfy a given temporal specification. The underlying search procedures are typically algorithmic, and are often parallelised for performance reasons. However, to avoid a false sense of safety, it is essential that these highly optimised search algorithms are themselves correct. We contribute the first mechanical verification of a parallel graph-based model checking algorithm, called Nested Depth-First Search (NDFS). This verification has been performed using VerCors: an automated verifier that uses CSL as its logical foundation. We also demonstrate how our mechanised proof of correctness supports the easy verification of various optimisations of parallel NDFS.
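To give an impression of the algorithm involved, the following is a minimal sketch of the sequential core of NDFS, which detects accepting cycles in a Büchi automaton by nesting a second ("red") depth-first search inside a first ("blue") one. The graph encoding (a successor function and an acceptance predicate) is a hypothetical illustration; the algorithm verified in the thesis is the parallel, optimised variant.

```python
def ndfs(initial, successors, accepting):
    """Return True iff a cycle through an accepting state is reachable."""
    blue, red = set(), set()

    def dfs_red(s, seed):
        # Inner (red) search: look for a cycle back to the accepting seed.
        red.add(s)
        for t in successors(s):
            if t == seed:
                return True
            if t not in red and dfs_red(t, seed):
                return True
        return False

    def dfs_blue(s):
        # Outer (blue) search: explore the state space; when backtracking
        # over an accepting state, launch the nested red search from it.
        blue.add(s)
        for t in successors(s):
            if t not in blue and dfs_blue(t):
                return True
        if accepting(s) and dfs_red(s, s):
            return True
        return False

    return dfs_blue(initial)
```

For example, on the three-state graph 0 → 1 → 2 → 0 with state 1 accepting, the search reports an accepting cycle; on an acyclic graph it reports none. The parallel versions studied in the thesis share work between multiple such searches, which is precisely what makes their correctness subtle.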
Part II of this thesis contributes a practical abstraction technique for verifying global behavioural properties of shared-memory concurrent software. Our technique builds on the insight that concurrent program behaviour cannot easily be specified on the level of source code, because realistic programming languages exhibit very little algebraic structure, owing to their advanced language constructs. Instead, our approach allows the behaviour of a program to be specified as a mathematical model with an elegant algebraic structure. More specifically, we use process algebra as the modelling language, where the actions are abstractions of shared-memory updates in the program. Furthermore, we extend CSL with logical primitives that make it possible to prove that a program refines its process-algebraic model. These refinement proofs solve the typical abstraction problem: establishing whether the model is a sound abstraction of the modelled program. This abstraction approach is proven sound with the help of the Coq proof assistant, and is implemented in the VerCors verifier. We demonstrate our approach on various examples, including a classical leader election protocol, as well as a real-world case study from industry, concerning safety-critical software.
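The refinement idea can be illustrated with a toy model. The following hypothetical sketch represents process-algebraic terms built from atomic actions, sequential composition, and choice, and checks refinement as trace inclusion: a program refines its model when every trace of abstract actions the program can perform is allowed by the model. (The actual technique in the thesis is a proof method inside CSL, not a trace enumeration; this sketch only conveys the underlying intuition for finite models.)

```python
from itertools import product

def traces(proc):
    """Enumerate the complete traces of a toy process term.
    Terms: ('act', a) | ('seq', p, q) | ('alt', p, q)."""
    kind = proc[0]
    if kind == 'act':
        return {(proc[1],)}
    if kind == 'seq':
        # Traces of p followed by traces of q.
        return {s + t for s, t in product(traces(proc[1]), traces(proc[2]))}
    if kind == 'alt':
        # Nondeterministic choice: either branch may happen.
        return traces(proc[1]) | traces(proc[2])
    raise ValueError(f'unknown term: {kind}')

def refines(program_traces, model):
    """A program refines the model if all its traces are model traces."""
    return set(program_traces) <= traces(model)
```

For instance, a model `('seq', ('act', 'lock'), ('seq', ('act', 'incr'), ('act', 'unlock')))` permits the trace `('lock', 'incr', 'unlock')` but rules out an unprotected `('incr', 'lock', 'unlock')`, mirroring how abstract actions stand for shared-memory updates in the program.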
In Part III we lift our abstraction technique to the distributed case, by adapting it to verify message passing concurrency. This adaptation uses process-algebraic models to abstract the communication behaviour of distributed agents. Moreover, we investigate how the refinement proofs allow deductive verification to be combined with model checking, by analysing program abstractions using a model checker, viz. mCRL2, to reason indirectly about the program's message passing behaviour. This combination builds on the insight that deductive verification and model checking are complementary techniques: the former specialises in verifying data-oriented properties, while the latter targets temporal properties of control-flow. Such a combined verification approach is therefore a natural fit for reasoning about distributed systems, as these generally deal with both computation (data) and communication (control-flow). Our approach is compositional, is mechanically proven sound with the help of the Coq proof assistant, and is implemented as an encoding in the Viper concurrency verifier.
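The division of labour can be sketched on a toy example. Below, a hypothetical two-flag protocol state (whether a message has been sent, and whether it has been acknowledged) stands in for a finite abstraction of an agent's communication behaviour; the model checker's contribution is the exhaustive exploration that establishes a temporal safety property ("no acknowledgement without a prior send"), while the data correctness of each individual action would be discharged deductively and is not shown here.

```python
def reachable(init, step):
    """Exhaustively explore all states of a finite transition system,
    as a model checker would for a program abstraction."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy protocol state: (sent, acked). An agent may send at any time,
# but may acknowledge only after a send has happened.
step = lambda s: {(True, s[1]), (s[0], s[0])}
states = reachable((False, False), step)

# Safety property checked by exhaustive exploration:
# no reachable state is acknowledged without having been sent.
assert all(sent or not acked for sent, acked in states)
```

In the thesis, the abstraction explored in this way is a process-algebraic model (analysed with mCRL2 rather than a hand-written search), and the refinement proof guarantees that properties established on the model carry over to the program.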
Altogether, this thesis takes a major step towards the practical and reliable verification of global behavioural properties of real-world concurrent and distributed software. The techniques proposed in this thesis are reliable, having mechanically proven correctness results in Coq; expressive, being modular, compositional, and built on mathematically elegant structures; and practical, being implemented in automated concurrency verifiers.