MHF1 - Conversational Agents

Supervisor: dr. Simone Borsci


Conversational agents, such as chatbots and voice interfaces, can be used for multiple purposes, e.g., supporting the customer experience with services. These tools are growing in number and are increasingly integrated into systems such as websites, social networks, and cars. Smart and AI-based conversational agents are shaping the future of human-computer interaction; however, little is known about how to assess people's reactions to and satisfaction with these systems after use.


Building upon previous research, you will use the initial version of a new scale for assessing satisfaction with chatbots. Your experimental work will focus on the evaluation of conversational agents to further improve the reliability and validity of the scale.

This assignment is suited for a team of two master students.

Key references

  • Coperich, K., Cudney, E., & Nembhard, H. Continuous Improvement Study of Chatbot Technologies using a Human Factors Methodology.
  • Duijst, D. (2017). Can we Improve the User Experience of Chatbots with Personalisation? MSc Information Studies, Amsterdam.
  • Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38-42.
  • Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245-250.

MHF2 - The Vigilant Brain

Supervisors: dr. Rob Van der Lubbe, dr. Martin Schmettow


In a recent study, we compared the relevance of different measures derived from the EEG for measuring the vigilant state of individuals. The major idea was to determine which analysis method is most effective in predicting lapses of attention, which in, for example, driving conditions may lead to serious accidents. We employed ERPs, Fourier analyses, and ERD/ERS. The employed research paradigm, however, may not have been the most effective. The goal of the MA project is to develop an improved paradigm, which might simply mean that more non-target stimuli are presented, and which may enable use of the recently developed LPS method (Van der Lubbe & Utzerath, 2013).

MHF3 - Do biological faces trigger the Uncanny Valley effect?

Supervisor: dr. Martin Schmettow


Social robots and other artificial agents should be designed to be as likeable as possible. A common assumption is that emotional acceptance can be improved by mimicking human appearance and behavior as closely as technically possible. However, research in the field of Human-Robot Interaction has revealed a chilling fact: the emotional response towards a robot increases only up to a certain point of human-likeness. When a robot face comes close to resembling a human face, the observer experiences the exact opposite: a spine-tingling feeling. This sudden drop in emotional response is called the Uncanny Valley.

In previous studies, participants were asked to rate artificial faces ranging from mechanical to human-like. We could show that the bizarre Uncanny Valley effect is universal in that everyone seems to experience it. This suggests that it is deeply rooted in the human mind, which raises the question of its original evolutionary function. Certainly, it is not there to protect us from falling in love with androids.

In your study, you will explore if and how the UV effect arises for:

  • biological faces (apes, ancestors of Homo sapiens sapiens),
  • modified faces, e.g. altered by cosmetic surgery or photo editing

In your introduction, you will create a literature overview of related ideas and findings and connect it with the literature on the evolution of face recognition. For your experiment, you will create a new collection of stimuli and determine their human-likeness scores. You will then test for the Uncanny Valley with a sample of participants.

Finally, you discuss the implications of your findings with respect to:

  • theories on the effect
  • societal implications of the effect (e.g. racism) 

Interested? Ask Martin Schmettow.

MHF4 - Learning to drive-by-wire

Supervisor: dr. Martin Schmettow


The automation of driving is imminent. Everyone knows it will come, and one consequence is that car interiors are becoming less of a workplace and are getting more leisure-oriented designs. For that, you first need to make some space. The bulkiest thing in a car cockpit is the steering wheel, which is most obviously replaced by a joystick-like device. In 2015, a study at Volkswagen AG explored how long it takes experienced drivers to learn driving-by-wire using a joystick device.

This project is a replication in our newly acquired BMS driving simulator. The question is: what is easier for novice drivers to learn right from the start, a steering wheel or a joystick?

In this project you will help design an experimental setup for the driving simulator and run a learning study to answer the question. In your analysis you will use parametric learning curves to answer the question.
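The parametric learning-curve analysis can be illustrated with a small sketch. This is only an illustration, not the project's actual analysis pipeline: it assumes an exponential law of practice for task-completion times, and all data and parameter values below are simulated for demonstration.

```python
import math
import random

def exp_curve(trial, asymptote, amplitude, rate):
    """Negatively accelerated exponential learning curve:
    performance approaches an asymptote as practice accumulates."""
    return asymptote + amplitude * math.exp(-rate * trial)

# Simulate task-completion times (seconds) over 30 practice trials.
random.seed(1)
true_asym, true_amp, true_rate = 20.0, 40.0, 0.15
trials = list(range(30))
times = [exp_curve(t, true_asym, true_amp, true_rate) + random.gauss(0, 1.0)
         for t in trials]

def fit_curve(trials, times, rates):
    """Grid-search the nonlinear rate parameter; for each candidate rate the
    remaining parameters (asymptote, amplitude) are linear in the basis
    [1, exp(-rate*t)] and solved by ordinary least squares."""
    best = None
    n = len(trials)
    for r in rates:
        x = [math.exp(-r * t) for t in trials]
        sx, sy = sum(x), sum(times)
        sxx = sum(v * v for v in x)
        sxy = sum(v * y for v, y in zip(x, times))
        denom = n * sxx - sx * sx
        if abs(denom) < 1e-12:
            continue
        amp = (n * sxy - sx * sy) / denom   # OLS slope
        asym = (sy - amp * sx) / n          # OLS intercept
        sse = sum((asym + amp * xi - yi) ** 2 for xi, yi in zip(x, times))
        if best is None or sse < best[0]:
            best = (sse, r, asym, amp)
    return best

sse, rate, asym, amp = fit_curve(trials, times,
                                 [i / 100 for i in range(1, 101)])
print(f"rate={rate:.2f} asymptote={asym:.1f}s amplitude={amp:.1f}s")
```

In a study like this one, the fitted rate parameter would be compared between the steering-wheel and joystick conditions to see which device is learned faster; in practice a dedicated nonlinear regression routine would replace the grid search.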

This assignment is suited for a team of two master students.

Interested? Ask