Project description 

The fatal accident involving a Tesla car last year led to serious concerns about self-driving cars. Is it safe when such autonomous systems make unsupervised decisions on their own? And when an accident occurs, who is to blame?

In all, it is clear that technological developments alone are not enough to guarantee a safe and productive partnership between humans and robots. Because this partnership takes place in a social context and often in complex environments, insight is needed into how humans perceive and interact with robots in order to prevent both over- and under-reliance.

This project builds on research on inter-human advice-taking, which has identified factors affecting the acceptance and use of advice from others. These studies revealed a range of factors that people take into account, such as characteristics of the source, the type of task, and the justification of the advice (Tzioti, Wierenga & Osselaer, 2014). A main result is that advice is not just passively followed: people infer all kinds of information in order to assess its relevance for the task at hand.

Research questions

The central question of the present project is how people evaluate the advice of robots and how this compares to advice from a human. Possible research questions are:

  1. Do people trust advice from a human more than advice from a system?
  2. Which factors affect the level of trust?
  3. How do task characteristics affect cooperation?
  4. Is knowledge of the underlying decision rule, or of the level of uncertainty, relevant for acceptance?


Experimental research

Initial literature

Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124). ACM.

Tzioti, S. C., Wierenga, B., & Osselaer, S. M. (2014). The effect of intuitive advice justification on advice taking. Journal of Behavioral Decision Making, 27(1), 66-77.