ω-regular languages provide a rich formalism to unambiguously express safety and progress requirements of stochastic models such as Markov decision processes (MDPs). The most common way to describe an ω-regular language is via a formula in Linear Temporal Logic (LTL). A typical requirement that is naturally expressed as an ω-regular objective prescribes that an agent should eventually steer the MDP into a given set of states and keep it there, while at all times avoiding another set of states. In LTL, this would be written "(FG goal) ∧ (G ¬trap)", where "goal" and "trap" are labels attached to the appropriate states, "F" stands for "finally," and "G" stands for "globally."
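Purely as an illustration (not part of the talk itself), the semantics of this formula can be sketched on ultimately periodic words u·vω, which suffice to witness ω-regular properties; the function name and the encoding of labels as Python sets are invented for this example.

```python
def satisfies_fg_goal_g_not_trap(prefix, cycle):
    """Check (FG goal) & (G !trap) on the lasso word prefix . cycle^omega.

    Each position of the word is a set of labels, e.g. {"goal"}.
    - G !trap: no position, in the prefix or in the repeated cycle,
      may carry the label "trap".
    - FG goal: from some point on, every position carries "goal";
      for a lasso word it suffices that every cycle position does,
      while prefix positions are free to miss it (that is the "F").
    """
    if any("trap" in labels for labels in prefix + cycle):
        return False
    return all("goal" in labels for labels in cycle)

# An unlabeled position followed by "goal" forever satisfies the formula:
print(satisfies_fg_goal_g_not_trap([set()], [{"goal"}]))     # True
# A single visit to "trap" violates G !trap:
print(satisfies_fg_goal_g_not_trap([{"trap"}], [{"goal"}]))  # False
```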

The usual way of analysing ω-regular properties of a given MDP is to first translate the LTL formula into a nondeterministic Büchi automaton, which is then further transformed into a (semi-)deterministic automaton. The resulting automaton is composed with the MDP into a product model, to which the actual solution algorithms are then applied. The second transformation, from nondeterministic to (semi-)deterministic automata, usually incurs a considerable blow-up in the size of the automaton, which in turn can make analysis of the product model infeasible.
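To make the product construction concrete, here is a minimal sketch, assuming (for simplicity) a fully deterministic automaton reading one atomic proposition per MDP state; all names and the dictionary encodings are invented for the example. A product state pairs an MDP state with the automaton state reached after reading the labels seen so far, and the probabilities are inherited unchanged from the MDP.

```python
from collections import deque

def product_states(mdp_trans, label, aut_trans, mdp_init, aut_init):
    """Reachable state space of the MDP x automaton product.

    mdp_trans: mdp_state -> {action: {successor: probability}}
    label:     mdp_state -> atomic proposition read by the automaton
    aut_trans: (aut_state, proposition) -> aut_state (deterministic)
    """
    # The automaton first reads the label of the initial MDP state.
    start = (mdp_init, aut_trans[(aut_init, label[mdp_init])])
    seen, queue = {start}, deque([start])
    while queue:
        s, q = queue.popleft()
        for action, dist in mdp_trans[s].items():
            for succ in dist:
                nxt = (succ, aut_trans[(q, label[succ])])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen
```

Real tools additionally track the acceptance condition on the product; this sketch only computes the reachable pairs, which is where the blow-up of determinization shows up.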

This talk discusses how the nondeterministic Büchi automaton obtained from the LTL formula can be composed with the MDP directly. To do so, one still constructs a semi-deterministic automaton from the original one, but then tries to prove that the original automaton simulates the larger semi-deterministic automaton, that is, that the original automaton can mimic each of its steps. If such a simulation exists, it is correct to build the product of the original automaton and the MDP under consideration. This has the potential to yield a much smaller product, thus enabling the analysis of larger MDPs.
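The simulation check at the heart of this approach can be illustrated by the classic greatest-fixpoint computation below; this is a plain (acceptance-blind) simulation sketch with invented names, whereas for Büchi automata one additionally has to account for acceptance, e.g. via fair or delayed simulation, as the talk presumably details.

```python
def simulates(delta_big, delta_small, states_big, states_small, alphabet):
    """Greatest-fixpoint computation of a plain simulation relation.

    Returns the pairs (p, q) such that state p of the small (original)
    automaton simulates state q of the big (semi-deterministic) one:
    every a-transition of q can be matched by an a-transition of p
    whose target again simulates the corresponding target.

    delta_*: (state, letter) -> set of successor states.
    """
    # Start from all pairs and remove those that fail to match a step.
    rel = {(p, q) for p in states_small for q in states_big}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for a in alphabet:
                for q2 in delta_big.get((q, a), set()):
                    if not any((p2, q2) in rel
                               for p2 in delta_small.get((p, a), set())):
                        rel.discard((p, q))
                        changed = True
                        break
                else:
                    continue
                break
    return rel
```

If the pair of initial states survives the fixpoint, the small automaton can follow every step of the big one, which is the condition exploited to justify building the product with the original automaton instead.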