FULLY DIGITAL - NO PUBLIC : PhD Defence Francesco Walker | To trust or not to trust? - Assessment and calibration of driver trust in automated vehicles

To trust or not to trust? - Assessment and calibration of driver trust in automated vehicles

Due to the COVID-19 crisis measures, the PhD defence of Francesco Walker will take place online.

The PhD defence can be followed via a live stream.

Francesco Walker is a PhD student in the research group Transport Engineering and Management (TEM). His supervisors are prof.dr. M.H. Martens from the Faculty of Engineering Technology (ET) and prof.dr.ing. W.B. Verwey from the Faculty of Behavioural, Management and Social Sciences (BMS).

Automated vehicles promise to improve road safety and increase travelling comfort. Yet, while development efforts have focused on improving the technology, recent accidents have raised awareness of the challenges posed by the way humans interact with such systems. Many of these challenges share a common denominator: user trust. While a lack of trust (under-trust) may induce drivers not to use the automated vehicle’s functionalities, excessive trust (over-trust) can lead to dangerous outcomes, with drivers using the system in situations it cannot cope with. Successful and safe interaction between humans and automated driving systems requires trust calibration: the process of continuously aligning driver trust with the reliability of the automated driving system. Trust calibration is the central theme of this dissertation.

To date, there have been relatively few studies focussing on the way driver trust varies from situation to situation, and the way in which experience in a range of situations may lead to more appropriate levels of trust. Studies of driver trust also face a number of methodological issues, including a lack of on-road investigations, concerns regarding the validity of simulator-based research, and the lack of reliable real-time measures for assessing user trust. This dissertation partially addresses these research gaps.

Our findings show that, before on-road experience, drivers tended to overestimate the capabilities of vehicles equipped with Level 2 systems. After experiencing the vehicles in multiple scenarios, they had a better understanding of the vehicles’ limitations, resulting in better-calibrated trust. The subsequent studies were performed in simulated environments.

In a first validation study, we made sure that our driving simulator elicited a strong “sense of presence” in our participants - the feeling of truly belonging in the virtual environment. We then gained further insights into the development of appropriate trust calibration. This second study showed that users’ reported trust levels in specific situations did not consistently match the evaluations of vehicle reliability given by engineers. In a questionnaire administered at the end of the study, users offered suggestions for modifications to the vehicle dynamics (e.g., lateral control) and the Human Machine Interface (e.g., visualization of the objects detected in the environment). These suggestions illustrate how results from Human Factors studies can feed into vehicle design.

Finally, we investigated ways of continuously and reliably assessing user trust. Specifically, we tested whether combining gaze behaviour with electrodermal activity (EDA) could provide an effective measure of trust in a simulated automated vehicle. The results indicated a strong relationship between self-reported trust, monitoring behaviour and EDA: the higher participants’ self-reported trust, the less they monitored the road, the more attention they paid to a non-driving related secondary task, and the lower their EDA. The study also provided evidence that combined measures of gaze behaviour and EDA predicted self-reported trust better than either of these measures on its own. These findings suggest that combined measures of gaze and EDA have the potential to provide a reliable and objective real-time indicator of driver trust in empirical evaluations of Level 2, 3, 4 and 5 automated vehicles.
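To illustrate what “combined measures predicted self-reported trust better than either of these measures on its own” means in practice, the sketch below fits three simple linear models to synthetic data: one using gaze behaviour (road monitoring) only, one using EDA only, and one using both predictors together. The variable names, the synthetic data and the use of ordinary least squares regression are assumptions made for this example only; they are not the analysis reported in the dissertation.

# Illustrative sketch only: compares how well gaze behaviour and EDA,
# alone and combined, predict self-reported trust. Variable names,
# synthetic data and the regression approach are assumptions for
# illustration, not the dissertation's actual analysis.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 60  # hypothetical number of participants

# Synthetic participant-level measures (standardised units).
road_monitoring = rng.normal(size=n)   # proportion of gaze time spent on the road
eda = rng.normal(size=n)               # mean electrodermal activity
# Assumed relationship: higher trust goes with less road monitoring and lower EDA.
self_reported_trust = -0.6 * road_monitoring - 0.5 * eda + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """Fit an ordinary least-squares model and return its R^2 on the same data."""
    return LinearRegression().fit(X, y).score(X, y)

print("Gaze only:  R^2 =", round(r_squared(road_monitoring.reshape(-1, 1), self_reported_trust), 2))
print("EDA only:   R^2 =", round(r_squared(eda.reshape(-1, 1), self_reported_trust), 2))
print("Gaze + EDA: R^2 =", round(r_squared(np.column_stack([road_monitoring, eda]), self_reported_trust), 2))

Under these assumptions, the model that combines both predictors explains more variance in self-reported trust than either predictor alone, which is the kind of comparison the finding above refers to.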

Reliable technology is not enough for automated vehicles to fulfil their promise. The safety and uptake of automated vehicles depend strongly on the attitudes and behaviours of their potential users. In particular, it is crucial that users develop appropriate levels of trust in the technology. Cooperation between engineers, vehicle designers and Human Factors researchers can make a vital contribution to achieving this goal. The real question is not whether “to trust or not to trust” automated driving technology. What we should be asking is when an automated vehicle can be trusted and, most importantly, what can be done to calibrate user trust to the actual capabilities of the system.