LLMs in decision making

Background

Large Language Models (LLMs) such as ChatGPT show promise as decision support tools. While LLMs offer significant potential benefits, their implementation also presents unique risks. From a human perspective, a significant risk is the tendency to project human traits onto machines (anthropomorphism). Because LLMs are particularly humanlike, this tendency may undermine calibrated trust. From a machine perspective, a key concern is the phenomenon of hallucinations, where the model generates plausible-sounding but fabricated information (Chen & Shu, 2023). This risk is particularly alarming when LLMs are used in high-stakes domains such as healthcare or defense.

Possible research questions

1.      How do characteristics of an LLM, such as its communication style, affect trust in the LLM?

2.      How do individual differences, such as AI literacy, affect the interaction with an LLM?

3.      Can an LLM be used to improve creative decision making?

Information

Please contact Steven Watson (s.j.watson@utwente.nl) if you are interested in this assignment.

Literature

Demszky, D., Yang, D., Yeager, D. S., Bryan, C. J., Clapper, M., Chandhok, S., ... & Pennebaker, J. W. (2023). Using large language models in psychology. Nature Reviews Psychology, 2(11), 688–701.

Roesler, E., Manzey, D., & Onnasch, L. (2021). A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Science Robotics, 6(58). https://doi.org/10.1126/scirobotics.abj5425

Urban, M., Děchtěrenko, F., Lukavský, J., Hrabalová, V., Svacha, F., Brom, C., & Urban, K. (2024). ChatGPT improves creative problem-solving performance in university students: An experimental study. Computers & Education, 215, 105031.