Master Assignment
Trust in Automated Decision Making: How users' trust and perceived understanding are influenced by the quality of automatically generated explanations
Type: Master M-ITECH
Location: N/A
Period: May, 2018 - March, 2019
Student: Papenmeier, A. (Andrea, Student M-ITECH)
Date final project: March 6, 2019
Supervisors:
Abstract:
Machine learning systems have become popular in fields such as marketing, finance, and data mining. While they can be highly accurate, complex machine learning systems pose challenges for engineers and users: their inherent complexity makes it difficult to judge their fairness and the correctness of statistically learned relations between variables and classes. Explainable AI aims to address this challenge by modelling explanations alongside the classifiers. With increased transparency, engineers and users are better able to understand a classifier's behaviour, and transparency can also increase user trust and acceptance. Inappropriate trust, however, can be harmful: in safety-critical domains such as terrorism detection or physical human-robot interaction, users should not be misled by persuasive yet untruthful explanations. We therefore conduct a user study in which we investigate the effects of explanation truthfulness and classifier accuracy on user trust. Our findings show that accuracy is more important for user trust than transparency. Adding an explanation to a classification result can even harm trust, for example when the explanation is nonsensical. We also found that users cannot be tricked into trusting a poor classifier by pairing it with meaningful explanations. Furthermore, we found a mismatch between observed (implicit) and self-reported (explicit) trust.