
Master thesis

  • MHF1 - ADAPTING AUTOMATED VEHICLE BEHAVIOUR TO USER TRUST: A FOLLOW-UP DRIVING SIMULATOR STUDY

    SUPERVISORS:  PROF. DR. WILLEM VERWEY (UT), DR. FRANCESCO WALKER

    Trust is one of the main factors slowing down the adoption of automated driving technology. This study aims to improve trust by tailoring automated vehicle (AV) behaviour to each user. In the driving simulator of the UT, while being driven by an AV, participants will continuously report how much they trust its behaviour. They will indicate their trust on a slider, with “0” indicating “No trust” and “100” indicating “Full trust”. An alert will be sent to the experimenter if, for 30 seconds straight, the participant’s trust is x% below or above the previously reported value. The experimenter will decrease or increase the AV’s speed by y km/h every time an alert is received. We hypothesize that, when compared to two control groups (Control 1: speed stays constant; Control 2: speed changes randomly), adapting the vehicle’s speed to the user’s trust will lead to higher trust levels in individuals who tend to distrust automation. The experiment will be carried out in the driving simulator of the BMS Lab.
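
    Purely as an illustration of the adaptation rule described above (not the study software itself), the alert logic might look like the sketch below. X_PERCENT and Y_KMH stand in for the still-unspecified parameters x and y, x is interpreted here as a deviation relative to the previously reported value, and the direction of the speed change (lower trust, lower speed) is an assumption.

    ```python
    # Hypothetical sketch of the trust-adaptive speed rule described above.
    # X_PERCENT and Y_KMH are placeholders for the unspecified parameters x and y;
    # trust is reported continuously on a 0-100 slider.

    X_PERCENT = 10.0   # placeholder for x: allowed deviation from the last reported value
    Y_KMH = 5.0        # placeholder for y: speed step applied per alert


    def check_alert(trust_samples, baseline):
        """Return 'decrease', 'increase', or None.

        trust_samples: trust values (0-100) reported during the last 30 seconds.
        baseline: the previously reported trust value the deviation is measured against.
        """
        margin = baseline * X_PERCENT / 100.0  # x interpreted as a relative deviation
        if all(t <= baseline - margin for t in trust_samples):
            return "decrease"  # trust has stayed x% below baseline for 30 s straight
        if all(t >= baseline + margin for t in trust_samples):
            return "increase"  # trust has stayed x% above baseline for 30 s straight
        return None


    def adapt_speed(current_speed_kmh, alert):
        """Adaptive group only: adjust the AV's speed by y km/h per alert.

        The mapping (lower trust -> lower speed) is an assumption for illustration.
        """
        if alert == "decrease":
            return current_speed_kmh - Y_KMH
        if alert == "increase":
            return current_speed_kmh + Y_KMH
        return current_speed_kmh
    ```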

    If you want to know more about this project, please send an email to Francesco (f.walker@fsw.leidenuniv.nl).

  • MHF2 - THE VIGILANT BRAIN IN A DRIVING SIMULATOR

    PRIMARY SUPERVISOR: DR. VAN DER LUBBE, SECONDARY SUPERVISOR: DR. BORSCI

    35 EC

    In recent MA projects, we focused on the relevance of different EEG-derived measures for assessing the vigilant state of individuals. The main idea was to determine which analysis method is most effective at predicting lapses of attention, which, for example under driving conditions, may lead to serious accidents. The goal of the current research is to use EEG in combination with a driving simulator and to examine whether the EEG can indeed predict errors that in real-life situations would have led to serious accidents.

  • MHF3 - USING MOTION ANALYSIS FOR FEEDBACK AND ASSESSMENT OF MINIMALLY INVASIVE SURGICAL SKILLS

    SUPERVISORS: DR. SCHMETTOW (UT), DR. GROENIER (TECHNICAL MEDICAL CENTRE, UT), GERRETSEN, MSC (MAASTRICHT UNIVERSITY)

    The aim of this project is to examine how sensor technology can be used to accurately measure flexible bronchoscopy performance. Traditionally, medical trainees practice their skills in a master-apprenticeship model in clinical practice under strict supervision, and assessment is subjective and potentially biased. Sensor technology has advanced to the point that it allows objective, relevant, and consistent performance feedback. Technology-enhanced assessment of performance not only directly benefits the trainee and supervisor but ultimately also the patient. Bronchoscopy is a complex skill consisting of technical skills (e.g., instrument handling, navigation, and diagnostic reasoning) and non-technical skills (e.g., communication and collaboration). Sensor technology can support the growing need for more objective and standardized assessment of technical skill, and sensor-based performance metrics can complement the traditional expert evaluation of trainee performance. In this project, we use Xsens 3D motion tracking. A previous student has examined which parameters from Xsens 3D motion tracking could potentially be used to assess hand motion during flexible bronchoscopy on a simulator; this project will follow up on her work.

    Email: m.groenier@utwente.nl, e.gerretsen@maastrichtuniversity.nl

  • MHF4 - RESEARCH ON THE EFFECTIVENESS OF INCISION’S VIDEO-BASED LEARNING TOOL ON SURGICAL SKILL IMPROVEMENT

    SUPERVISORS: DR. SCHMETTOW (UT), VERGOUWEN (INCISION)

    This is a thesis project at the company Incision; combining the thesis with an internship might be possible.

    Incision is looking for an ambitious master student who wants to contribute to improving surgical care by putting his/her research skills to the test in a challenging project! At Incision, we have developed an AI-driven video-based learning tool for residents (surgeons in training). This tool allows residents to review their own operative video, which is pre-processed with Computer Vision applications and analysis. Top athletes use video analytics to improve their performance, and we are convinced that there is huge potential for surgeons to train in a similar way, although this is rarely done in practice (yet). We have a first version of this tool available, and we are looking for highly driven students who want to research the effect of this tool on the surgical performance and emotions (e.g., self-confidence and frustration) of the users.

    Are you:

    • Enthusiastic about the possibilities of AI-driven video analysis to create innovative training materials that can be used to improve surgical care worldwide?
    • Curious about the psychology and cognition of skill attainment and performance improvement?
    • Eager to work in a dynamic, international healthcare scale-up atmosphere?

    Then read further about internship possibilities at the Incision team!

    ABOUT INCISION

    Incision enables continuous surgical improvement for individuals and teams by creating high-quality knowledge and services. By sharing these, we help surgical care achieve better outcomes for everyone. We facilitate hospitals and training courses in countries around the world, because we believe everyone deserves the best surgical care.

    ABOUT THE SURGICAID PROJECT

    SurgicAId is a project that aims to research and develop methods to provide tailored feedback to surgeons and the surgical team. This is done by analyzing operative videos of surgical procedures using, amongst others, Artificial Intelligence and Computer Vision technologies. Residents will be able to review their personal operative video, which is annotated at the moments that stand out. These moments are identified with the use of Artificial Intelligence.

    YOUR ROLE

    Your work at Incision and for the SurgicAId project will focus on researching the effectiveness of our AI-driven video-based learning tool. You will look into the effects of using this tool, both on surgical performance and on the surgeon’s emotions. You will work in an interdisciplinary team with medical, technical, and business professionals.

    YOUR RESPONSIBILITIES

    • Setting up and designing the research methods
    • Being in contact with the surgeons (supervisors and residents) who participate in the study
    • Data collection (including the analysis of surgical video material, provided to you by our computer scientists, and questionnaires or other measures to measure emotion)
    • Data analysis (measuring the effect on surgical performance and emotion)
    • Writing up your findings in a master thesis, with the ambition to publish your work in an academic journal

    YOUR PROFILE

    • Master student in Educational (Health) Psychology, Educational Science and Technology, Educational Sciences, Learning and Instruction, or equivalent
    • Intrinsic motivation for research on surgical education and training
    • Average GPA of 7.5/10 or higher
    • Excellent communication skills, as you will be interacting with different types of stakeholders such as medical experts, computer scientists, residents, and business people
    • High level of responsibility as you will work with sensitive information and the reliability of your research is essential
    • Since we work with Dutch partner hospitals, fluency in both Dutch and English is required
    • You are able to work independently, accurately, and in a disciplined manner

    OUR OFFER

    • A challenging full-time internship for a period of 4-6 months (starting date to be decided together, preferably between September and November 2021)
    • Direct supervision by one of our team members and collaboration with our interdisciplinary SurgicAId team, which consists of a combination of medical, AI, and business professionals
    • Working in a dynamic and innovative health tech company: we work together on products that have an impact on healthcare
    • Internship remuneration of 600 euros per month
    • An office in a central, easily accessible location in Amsterdam (located in the Tropenmuseum)
    • An energetic, young, and motivated team who are driven to make a difference in surgical training

    INTERESTED?

    For questions or further information, contact Robin Vergouwen (vergouwen@incision.care). To apply, please send your resume and cover letter to Robin.

  • MHF5 - WHAT FEATURES OF A FACE ARE RESPONSIBLE FOR THE UNCANNY VALLEY EFFECT?

    SUPERVISOR: DR. SCHMETTOW

    It is expected that robots will soon appear in the areas of senior and health care, where they will support staff (e.g., lifting patients) and act as social companions (e.g., for elderly and handicapped persons). However, people are sometimes skeptical towards technology, especially when they don’t understand it. An important condition for success is therefore that robots are accepted by the clients.

    It is commonly assumed that making a social robot more human-like in appearance and behavior will improve acceptance. However, there is a problem with that approach: the emotional response towards a robot increases only up to a certain point. When a robot face closely resembles a human face, without being indistinguishable from it, the emotional response takes a sudden drop. This is called the Uncanny Valley, and the cognitive mechanisms behind this strange phenomenon are currently unclear. Mathur & Reichling (2016) provide evidence that the Uncanny Valley exists. In a number of our own studies, we have shown that the Uncanny Valley effect is rooted in early visual processing and is practically universal, i.e., there aren’t any individual differences.

    To clarify the underlying cognitive mechanisms, it is crucial to understand which features of a face are responsible for the effect. For this purpose, you will design and run an experiment using eye tracking.

    This project is intended for a team of two students. Interested? Ask Martin Schmettow (m.schmettow@utwente.nl)

    Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, 22–32. https://doi.org/10.1016/j.cognition.2015.09.008

    Koopman, R. (2019, January). The Uncanny Valley as a universal experience: A replication study using multilevel modelling. Retrieved from http://essay.utwente.nl/77172/

    Geue, L. (2021, July). From Robots to Primates: Tracing the Uncanny Valley Effect to its Evolutionary Origin. Retrieved from http://essay.utwente.nl/87564/

    Slijkhuis, P. J. H. (2017, June). The Uncanny Valley Phenomenon: A replication with short presentation times. Retrieved from http://essay.utwente.nl/72507/

  • MHF6 - OPTIMIZATION OF HUMAN-ARTIFICIAL INTELLIGENCE INTERACTION IN RADIOLOGY: A CASE STUDY FOR THORACIC DISEASES

    PRIMARY SUPERVISOR: DR. BORSCI, SECONDARY SUPERVISOR: DR. SCHMETTOW

    In collaboration with The Netherlands Cancer Institute (Dr. Stefano Trebeschi, PhD), one internship (10 EC) plus thesis (25 EC) position is offered.

    Interested candidates should send an expression of interest and a brief curriculum vitae to Dr. Borsci by 25 September.

    PROBLEM DESCRIPTION

    The last decade has seen exponential growth in the development and research of artificial intelligence (AI) applications in the field of radiology and medical imaging. These applications range from diagnosis to prognostication, where the AI is tasked with recognizing factors in the image that could potentially lead to increased survival chances for the patient.

    Despite the huge interest in and success of AI algorithms in the field, their utilization in daily clinical and medical research tasks is still limited. While many factors can contribute to this problem, a predominant one is the lack of a proper interaction system suitable for non-AI experts, such as clinicians. In fact, even in the most optimistic case where trained algorithms are made available online, this availability is limited to the code and parameters needed to load the model on one’s own computer. This requires prior knowledge of programming and of AI development tools, which in practice prevents clinicians from benefiting from recent AI advancements.

    PROPOSED SOLUTION

    We aim to study and implement optimal human-machine interaction between AI and non-AI scientists and clinicians. To this end, we focus on recent developments of AI algorithms in thoracic diseases, namely asbestosis and mesothelioma. The project will be a collaboration between the Netherlands Cancer Institute/Antoni van Leeuwenhoek Hospital and the University of Twente.

    USE CASES

    We highlight two research projects. The first project concerns the diagnostic work-up of former construction workers for asbestosis; if the diagnosis is positive, they are eligible for government support. Normally, the diagnostic work-up would be performed by a panel of three specialized doctors (i.e., pulmonologists). Within our group, we have developed a set of algorithms for automatic patient assessment. The second project concerns the diagnosis and follow-up of mesothelioma patients receiving treatment. Normally this would be done with measurements of diameters, but this has proven suboptimal in mesothelioma. Our group has proposed the use of an AI algorithm for the accurate assessment of the tumor burden and the progression of mesothelioma patients receiving treatment.

    Out of these projects, we construct use cases, depending on the possible scenarios of usage and the interaction of the user.

  • MHF7 - ARTIFICIAL INTELLIGENCE CONVERSATIONAL AGENTS: A MEASURE OF SATISFACTION IN USE

    SUPERVISOR: DR. SIMONE BORSCI

    Background

    Conversational agents, such as chatbots and voice interfaces, can be used for multiple purposes, e.g., supporting the customer experience with services. These tools are growing in number and are increasingly integrated into systems such as websites, social networks, and cars. Smart, AI-based conversational agents are shaping the future of human-computer interaction; however, little is known about how to assess people’s reactions and satisfaction after the use of these systems.

    Goals

    Advance previous work done on a new scale to assess satisfaction with chatbots. Your experimental work will focus on the evaluation of conversational agents in order to further establish the reliability and validity of the scale.

    Your work will consist of testing different chatbots in a remote usability test with a set of tools, including the new scale, in order to perform a confirmatory factor analysis. You should be familiar with statistical methods for factor analysis and be able to use R. The target is to involve at least 100 participants, potentially working in a team.
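
    Purely for orientation, and assuming a simple single-factor structure with hypothetical item names, a confirmatory factor analysis of the scale could look like the sketch below. The project itself asks for R (e.g., the lavaan package); the equivalent analysis is shown here in Python with the semopy package.

    ```python
    # Minimal CFA sketch (illustrative only): one latent satisfaction factor
    # measured by a handful of questionnaire items. Item names and the data
    # file are hypothetical.
    import pandas as pd
    import semopy

    # One latent variable ("satisfaction") loading on the scale items.
    model_desc = """
    satisfaction =~ item1 + item2 + item3 + item4 + item5
    """

    data = pd.read_csv("chatbot_usability_responses.csv")  # hypothetical data file

    model = semopy.Model(model_desc)
    model.fit(data)

    print(model.inspect())           # factor loadings and error variances
    print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
    ```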

    Key references

    • Coperich, K., Cudney, E., & Nembhard, H. Continuous Improvement Study of Chatbot Technologies using a Human Factors Methodology.
    • Duijst, D. (2017). Can we Improve the User Experience of Chatbots with Personalisation? MSc Information Studies, Amsterdam.
    • Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. interactions, 24(4), 38-42.
    • Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245-250.

Master thesis and Internship

  • MHF8 - MONITORING MULTI-SUBJECT BIOFEEDBACK EXPERIMENTS

    FIRST SUPERVISOR: DR. BORSCI, SECOND SUPERVISOR: DR. SCHMETTOW

    Company

    Noldus Information Technology was established in 1989 by Lucas Noldus, CEO of the company. With a Ph.D. in animal behavior from Wageningen University, he developed the company’s first software tool during his research in entomology. Noldus has strived to advance behavioral research ever since, evolving into a company that provides integrated systems including software, hardware, and services.

    We now offer a wide range of solutions for research in animal and human domains, including biology, psychology, marketing, human factors, and healthcare. We work with leading suppliers and develop innovative, state-of-the-art products. We also offer excellent technical support and customer care. As a result, our systems have found their way into more than 9700 universities, research institutes, and companies in almost 100 countries.

    The success of our company is determined to a large extent by the enthusiasm and creativity of our employees. We encourage each other to think outside the box, which leads to unique products and services for our customers.

    Description

    Noldus is working on a platform for behavioral data collection and analysis for research applications. The data is generated by in-house developed or third-party sensors and consists of audio, video, time-stamped event data (with and without duration), and time-stamped continuous numerical values. This data is collected for one or more participants at a time. The test leader is usually physically separated from the data collection location: in a lab experiment, the test leader will be in the control room and the participants in the test rooms. Participants can also participate from home.

    In all of these use cases, the test leader needs to be able to monitor and control the data collection sessions. This requires knowledge of each participant’s progress, the connection status of all sensors and systems, and the quality of the data collected.
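
    To make the monitoring requirement slightly more concrete, the per-participant state such a test-leader UI would need to surface might look roughly like the sketch below; all class and field names are hypothetical and not part of the Noldus platform.

    ```python
    # Hypothetical sketch of the session state a test leader's monitoring UI
    # would need to show per participant; names are illustrative only.
    from dataclasses import dataclass, field
    from enum import Enum


    class ConnectionStatus(Enum):
        CONNECTED = "connected"
        DEGRADED = "degraded"        # e.g. dropped frames or low signal quality
        DISCONNECTED = "disconnected"


    @dataclass
    class SensorState:
        name: str                    # e.g. "webcam", "ECG", "eye tracker"
        status: ConnectionStatus
        data_quality: float          # 0.0-1.0, however the platform defines quality


    @dataclass
    class ParticipantSession:
        participant_id: str
        location: str                # e.g. "lab test room" or "home"
        progress: float              # fraction of the protocol completed, 0.0-1.0
        sensors: list[SensorState] = field(default_factory=list)

        def needs_attention(self) -> bool:
            """Flag sessions the test leader should look at first."""
            return any(s.status is not ConnectionStatus.CONNECTED for s in self.sensors)
    ```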

    The assignment is to design and validate the UI for the test leader.

    Interested candidates can contact Dr. Simone Borsci, sending their CV and expression of interest by 30 September, 3 pm.