
Development of methods and tools for AI safety verification

Funded by: Audi AG Ingolstadt
Duration: 1 July 2021 – 30 June 2024

Project summary

The upcoming generation of automotive human-machine interface (HMI) systems will depend heavily on artificial intelligence. This comprises technologies such as autonomous driving, speech assistance, gesture- and touch-enabled interfaces, and web and mobile integration. Effective, safe, and user-friendly interaction between driver and vehicle is a key aspect here. This PhD project will improve on the state of the art of HMI. We are interested in various perspectives on this matter, including safety and security assurance of AI functions. To account for the latter, we will investigate the development of efficient and scalable methods and tools for testing, robustness evaluation, and runtime monitoring of AI functions. This paves the way for AI-based driver-vehicle state models to achieve the quality and robustness necessary for multilevel automated vehicles and to enable situation-adaptive human-machine interactions (HMIs).

Because these approaches are novel, verification algorithms for such systems have hardly been researched yet. The few tools that already perform this task cannot be used in practice due to scalability issues and high runtimes.

Bringing autonomous vehicles to market that are performant enough to reach customers will require machine-learning (ML) systems. Their use in such a safety-critical area, however, must be verified from the technical side.

The aim of the project is to move towards safe and reliable use of machine-learnt components, in particular deep neural networks (DNNs). It focuses on providing assurance of correct classification and approximately correct regression by DNNs. This encompasses both the analysis of a DNN before deployment, through testing and robustness evaluation, and the enforcement of safety at runtime (monitoring with alarm raising), each addressed in a separate work package. These research directions are an essential part of the effort to enable the safe use of artificial intelligence in human-machine interaction systems, as envisioned in the KARLI project in which this research is embedded.
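
To illustrate the latter two directions, the following is a minimal Python sketch of sampling-based robustness evaluation and a confidence-threshold runtime monitor. It is illustrative only and is not the project's tooling: the linear "classifier", the perturbation bound eps, and the alarm threshold are all hypothetical stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a trained DNN classifier: a random linear
    # layer followed by softmax. In the project this would be the actual
    # driver-vehicle state model.
    W = rng.normal(size=(10, 3))

    def classify(x):
        """Return class probabilities for input vector x."""
        logits = x @ W
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def robustness_score(x, n_samples=100, eps=0.05):
        """Sampling-based robustness evaluation: the fraction of small random
        perturbations of x that leave the predicted class unchanged."""
        base = int(np.argmax(classify(x)))
        stable = sum(
            int(np.argmax(classify(x + rng.uniform(-eps, eps, size=x.shape)))) == base
            for _ in range(n_samples)
        )
        return stable / n_samples

    def monitor(x, threshold=0.6):
        """Runtime monitor: raise an alarm when the model's confidence in
        its own prediction drops below a threshold."""
        probs = classify(x)
        if probs.max() < threshold:
            return None, "ALARM: low-confidence prediction"
        return int(np.argmax(probs)), "ok"

    x = rng.normal(size=10)
    print("robustness score:", robustness_score(x))
    print("monitor verdict:", monitor(x))

The methods developed in the project go beyond such simple sampling and thresholding, but the sketch shows where robustness evaluation and alarm raising attach to a model at analysis time and at runtime, respectively.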

The research is to be carried out by PhD candidate Akshay Dhonthi, with Marieke Huisman as promotor and Moritz Hahn as daily supervisor.