HMI Student Assignments

If you as a bachelor's or master's student are looking for a final thesis assignment, capita selecta, research topic, or internship, you can choose from a large number of internal (at HMI) or external (at a company, research institute, or user organisation) assignments. You can also choose to create one yourself. The search for a suitable location or a suitable topic often starts here: below you will find a number of topics you can work on.

Note that in preparation for a final MSc assignment you must carry out a Research Topics assignment, resulting in a final project proposal.

ASSIGNMENTS AND TOPICS THAT CAN BE CARRIED OUT INTERNALLY AT HMI

  • Hybrid AI Assistant for Fair and Transparent Essay Assessment @ UTwente

    A common task across various fields involves reviewing documents or images based on specific content-related criteria. A prime example is the evaluation of student assignments, where lecturers must ensure fairness (free from bias), transparency (aligned with clear expectations communicated to students), and reliability (delivering consistent, repeatable results). However, assessing a large volume of textual assignments is time-consuming, and the repetitive nature of the task can potentially compromise the reliability and accuracy of assessments.

    This master's project will first investigate the strengths, weaknesses, and potential applications of various language models. A detailed analysis of best practices for prompting will be conducted, focusing on their use in a continuously interactive process. The project aims to design and validate a language-model-based assistant, incorporating a conversational agent to create a hybrid-intelligence assessment practice. By engaging with lecturers—offering feedback, posing questions, or suggesting corrections—this tool will enhance engagement, flexibility, and personalization in the assessment process, while mitigating bias and ensuring consistency. The system will support lecturers in efficiently evaluating and correcting textual essays, all while maintaining fairness, transparency, and reliability.
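
    As a first technical impression of what such an assistant could look like, the sketch below scores an essay against a rubric via an OpenAI-compatible chat API. It is a minimal sketch only: the rubric, model name, and prompt wording are illustrative assumptions, not the project's prescribed design.

    ```python
    # Minimal sketch of rubric-based essay assessment with an LLM.
    # Assumes an OpenAI-compatible chat API; the rubric, model name and
    # prompt wording are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    RUBRIC = """\
    1. Argument quality (0-5): claims are supported by evidence.
    2. Structure (0-5): clear introduction, body, and conclusion.
    3. Language (0-5): grammar, spelling, and academic register.
    """

    def assess_essay(essay_text: str) -> str:
        """Ask for a criterion-by-criterion assessment, phrased so the
        lecturer stays in the loop (suggestions, not verdicts)."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are an assessment assistant. Score each "
                            "rubric criterion, quote the passages that "
                            "motivated each score, and flag anything you are "
                            "unsure about for human review."},
                {"role": "user",
                 "content": f"Rubric:\n{RUBRIC}\nEssay:\n{essay_text}"},
            ],
            temperature=0,  # favour repeatable (reliable) scoring
        )
        return response.choices[0].message.content
    ```

    Quoting the motivating passages and flagging uncertainty are one possible way to operationalise the transparency and reliability requirements described above.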

    Contact:

    Shenghui Wang (shenghui.wang@utwente.nl) and Marcus Pereira Pessoa (m.v.pereirapessoa@utwente.nl)

  • Thesis assignment Animal Computer Interaction: Digital Enrichments for Capuchin Monkeys at Apenheul Primate Park @ HMI - Enschede, NL in collaboration with Apenheul - Apeldoorn, NL

    Capuchin monkeys are highly intelligent primates; they are very social, learn by doing and by observing conspecifics, have cultural traditions, and some capuchin species even use tools. At Apenheul Primate Park, two capuchin species (Sapajus and Cebus) live in natural, forested habitats that provide many opportunities for foraging, climbing and exploring. However, during the winter period, the animals cannot always go outside due to the cold weather. Therefore, Apenheul provides these animals with environmental enrichment in their indoor enclosures to cater to their behavioural needs, such as foraging and exploring.

    Apenheul is always on the look-out for new and innovative ways to stimulate (more) natural behaviour and meet the behavioural needs of all animals. That is why a collaboration with the Human Media Interaction (HMI) group of the University of Twente was established, in order to design and create innovative, interactive enrichment. Digital interactions allow for altering interactions over time, promoting longer-term novelty effects and/or limiting habituation, as well as triggering new dynamics such as social interactions. However, the field of Animal Computer Interaction taps into a so far under-explored intersection of tangible embodied interaction, interactive play, and animal behaviour.

    In the past two years, we have worked with HMI master's students on the challenge of designing innovative, interactive environmental enrichment for capuchin monkeys (e.g. see the thesis on open-ended interaction through multiple devices and the first thesis on tangible multi-purpose modules), and this brought forth two working prototypes. We are soon looking again for a master's student to continue working on designing interactive digital enrichment for capuchin monkeys.

     What we look for in the student:

    • Affinity with animals, animal welfare and animal behaviour;
    • Flexible planning: caretakers work in very seasonal ways, which can lead to logistical challenges and delays; note that this project continues in academic year 25-26;
    • Willingness to travel to Apeldoorn on several occasions, collaborating with the behavioural biologist and animal caretakers of Apenheul, observing monkey behaviour and setting up user tests with capuchin monkeys on location;
    • Affinity with ethical challenges: doing animal-computer interaction research inherently brings challenges related to ethical practice, as animal consent will always work differently from regular consent from users. This also requires students to work in a structured way and to maintain a mindful and respectful attitude towards the “users’” point of view;
    • Experience with or skills for adapting and creating monkey-proof, tangible systems; iterative processes, safety, and natural interaction are not the best of partners, but they provide an interesting challenge in designing and engineering physical Arduino/Raspberry Pi sensor-actuator combinations.

    Contacts:

  • Dining with Data: Developing machine learning models to understand social eating behavior @HMI - Enschede, NL

    A balanced diet is essential for good health, yet finding and maintaining one is a challenging task. Many forms of support exist that assist this process on an individual level. However, the majority of our food is consumed in a social setting. Many overt and covert dining interactions take place in these settings that can affect adherence to personal dietary goals: feeling obligated to take food from a plate that is passed around, unconsciously adjusting the portion size to a table member, or adjusting the eating speed to others are a few examples. To design more effective dietary support systems, it is of great importance to gain more knowledge about the eating behaviors of individuals in a social setting, such as the quantity of food and the timing of food intake in relation to others.

    An instrumented dining table (Sensory Interactive Table - SIT) has been developed to capture eating behavior in unobtrusive, in-the-wild settings. 199 load cells track weight shifts throughout the meal. However, converting this data into meaningful insights on eating behavior poses computational challenges. Algorithms and machine learning models must not only classify the data recorded by a single load cell, but also combine it with data recorded by surrounding load cells, to make sure that dynamic situations at the dining table are interpreted correctly.

    In this project, we will develop machine learning models that accurately identify objects on the table and the interactions that take place, such that the recorded weight changes can be translated into meaningful insights on dynamic eating behavior.
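
    To make the computational challenge concrete, the sketch below shows one naive way to combine neighbouring load cells: treat the table as a grid of per-cell weight changes, threshold them, and group spatially adjacent active cells into single events. The grid shape and threshold are invented for illustration and do not reflect the actual SIT layout or pipeline.

    ```python
    # Naive sketch: grouping weight changes on neighbouring load cells
    # into single "events". Grid shape and threshold are illustrative only.
    import numpy as np
    from scipy import ndimage

    def detect_events(prev_frame, curr_frame, threshold=5.0):
        """prev_frame/curr_frame: 2D arrays of per-cell weights in grams.
        Returns (total weight change, centre of mass) per detected event."""
        delta = curr_frame - prev_frame
        active = np.abs(delta) > threshold        # cells with a notable change
        labels, n_events = ndimage.label(active)  # connect adjacent cells
        return [(delta[labels == i].sum(), ndimage.center_of_mass(labels == i))
                for i in range(1, n_events + 1)]

    # Example: a ~200 g plate lifted off a 10x20 grid of load cells
    prev = np.zeros((10, 20)); prev[4:6, 8:10] = 50.0  # 4 cells x 50 g
    curr = np.zeros((10, 20))
    print(detect_events(prev, curr))  # one event of roughly -200 g
    ```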

    Contact: Juliet Haarman – j.a.m.haarman@utwente.nl

  • Sensory Interactive Table – an interactive dining table to support your eating behavior @ HMI- Enschede, NL

    Eating is more than the consumption of food. Eating is often a social activity. We sit together with friends, family, colleagues and fellow students to connect, share and celebrate aspects of life. Sticking to a personal diet plan can be challenging in these situations. The social discomfort associated with having a different diet from the rest of a group greatly contributes to this. Additionally, it is well known that we unconsciously influence each other while we eat: not just the type of food that we choose, but also the quantity of food that we consume and the speed with which we consume it are affected by our eating partners.

    The interactive dining table is created to open up the concept of healthy eating in a social context: where individual table members feel supported, yet still experience a positive group setting. The table is embedded with 199 load cells and 8358 LED lights, located below the table top surface. Machine learning can be applied to the sensor data from the table to detect weight shifts over the course of a meal, identify individual bite sizes and classify interactions between table members and food items. Simultaneously, the LEDs can be used to provide real-time feedback about eating behavior, give perspective regarding eating choices, or alter the ambience of the eating experience as a whole. Light interactions can change over time and between settings, depending on the composition of the group at the table or the type of meal that is consumed.

    This project aims to create more knowledge about the LED interactions. What effect do the interactions have on the behavior of the user? What types of interactions work best for what types of user? How should interactions be shaped such that a single subject in a group feels supported? How can we implicitly steer people towards healthy(er) behavior without coercion, and without making it the focus of the meal?

    Contact: Juliet Haarman - j.a.m.haarman@utwente.nl

  • Shut up and finish your plate: Designing tangible technology to support children in their mealtime struggles @ HMI - Enschede, NL

    Maintaining a nutritionally balanced diet can be challenging in households with young children. Picky eating, limited food intake, and children being distracted during dinner are a few of the many challenges that parents face during mealtime. As a result, family mealtime can be stressful and experienced as a struggle by both the child and the parent. It often leads to conflicts and tension at the dinner table, thereby negatively impacting the mental health of child and parent. In the long run, this could result in feelings of isolation and low self-esteem on the side of the child. Moreover, an unbalanced diet puts the child at risk of nutrient deficits and obesity. Research has shown that children who show these eating behaviors are likely to stick to them during adulthood. However, the younger the child, the better its chances of overcoming these behaviors, and with that, increasing its chances of good health.

    In this project we will research how interactive technology (tangible objects used in the dining setting) should be designed to stimulate healthy eating behaviors in children, while at the same time supporting a positive social family setting during mealtime.

    Contact: Juliet Haarman – j.a.m.haarman@utwente.nl

  • Enhancing Human-Robot Collaboration for Object Handover to Assist Individuals with Visual Impairments @ HMI - Enschede, NL

    Human-robot collaboration (HRC) has made significant strides in various fields, including manufacturing, healthcare, and personal assistance. However, there is a critical need to develop advanced HRC systems that can effectively assist individuals with disabilities, particularly those with visual impairments. Object handover, a fundamental aspect of HRC, requires precise coordination and communication between the human and the robot. For visually impaired individuals, traditional methods of object handover often fall short due to the lack of visual cues. This thesis aims to explore and develop innovative solutions to enhance object handover processes through multisensory feedback, machine learning, and adaptive algorithms.

    The primary objective of this thesis is to develop and refine a robust human-robot collaboration framework tailored explicitly for facilitating object handover tasks for individuals with visual impairments. This research addresses the challenge of creating intuitive and effective interaction protocols that enable seamless communication between the user and the robot, ensuring safe and reliable object handover. By integrating various sensory modalities and designing intuitive feedback mechanisms, the aim is to enhance the overall usability and effectiveness of assistive robotic systems in improving the autonomy and quality of life for visually impaired individuals.

    Tasks:

    This thesis seeks to investigate, design, and evaluate a novel framework for human-robot collaboration specifically tailored to address the needs of individuals with visual impairments during object handover tasks. The tasks include:

    1. Investigating existing human-robot collaboration techniques and technologies.
    2. Identifying the unique requirements and challenges associated with object handover for individuals with visual impairments.
    3. Designing and implementing a collaborative framework that integrates both human and robot capabilities to facilitate efficient and intuitive object handover interactions.
    4. Evaluating the proposed framework's effectiveness, usability, and user satisfaction through user studies and performance metrics.
    5. Providing insights and recommendations for the future development and deployment of assistive technologies in real-world settings.

    Expected outcomes:

    • A comprehensive understanding of the requirements and challenges associated with object handover for individuals with visual impairments.
    • A novel collaborative framework that enhances human-robot interaction for object handover tasks and a functional prototype of an HRC system designed explicitly for object handover to visually impaired individuals.
    • Insights into the usability and user experience factors crucial for the design of assistive technologies for individuals with visual impairments, supported by empirical data on the effectiveness and user satisfaction of the developed system.
    • A set of guidelines and best practices for developing assistive HRC systems and recommendations for integrating and deploying the proposed framework in real-world scenarios to improve the independence and quality of life for individuals with visual impairments.

    Required Skills and Expertise:

    • Background or strong interest in robotics and human-robot interaction.
    • Experience with machine learning and adaptive systems.
    • Knowledge of assistive technologies and accessibility issues.
    • Proficiency in programming languages such as Python, C++, or MATLAB.
    • Communication and interpersonal skills for conducting user studies.

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • Exploring Intrinsically-Motivated Reinforcement Learning for Robots: Leveraging Effectance Motivation Theory @ HMI - Enschede, NL

    In recent years, robotics has seen significant advancements, particularly in reinforcement learning (RL). RL algorithms enable robots to learn complex tasks through trial and error, often guided by external rewards. However, traditional RL approaches may struggle in environments with sparse or delayed rewards, hindering their applicability to real-world scenarios. To address this limitation, researchers have turned to intrinsically motivated RL (IMRL), which aims to imbue agents with an inherent drive to explore and learn independently of external rewards. Effectance motivation theory, originating from psychology, posits that individuals are intrinsically motivated to interact with and master their environment, driven by a desire for competence and autonomy.

    This thesis investigates the development and application of intrinsically motivated reinforcement learning techniques for robotic systems, drawing inspiration from effectance motivation theory. Specifically, the research will focus on designing algorithms that enable robots to autonomously explore and learn in complex, dynamic environments, with minimal reliance on extrinsic rewards. Possible research questions are:

    1. How can effectance motivation theory be translated into computational frameworks for intrinsically motivated RL in robotic systems?
    2. What are the key challenges and limitations of existing IMRL approaches, and how can they be addressed to enhance the performance and robustness of robotic agents?
    3. How do different forms of intrinsic motivation, such as novelty-seeking and competence-driven exploration, influence robotic agents' learning behavior and performance?
    4. Considering hardware constraints and operational requirements, how can the developed IMRL algorithms be integrated into real-world robotic platforms?
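
    To make the notion of intrinsic motivation concrete before turning to the tasks, the sketch below adds a count-based novelty bonus to tabular Q-learning in a toy grid world with a sparse extrinsic reward. Competence-oriented bonuses in the spirit of effectance motivation would replace the bonus term; the environment and all hyperparameters are invented for illustration.

    ```python
    # Sketch: Q-learning with a count-based intrinsic novelty bonus in a
    # toy 1D grid world. All hyperparameters are illustrative.
    import numpy as np
    from collections import defaultdict

    N_STATES, GOAL = 20, 19          # sparse reward: only at the goal state
    ALPHA, GAMMA, BETA = 0.1, 0.95, 0.5

    Q = np.zeros((N_STATES, 2))      # actions: 0 = left, 1 = right
    visits = defaultdict(int)

    def step(s, a):
        s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
        return s2, float(s2 == GOAL)             # extrinsic reward is sparse

    for episode in range(200):
        s = 0
        for _ in range(100):
            a = np.random.randint(2) if np.random.rand() < 0.1 else int(Q[s].argmax())
            s2, r_ext = step(s, a)
            visits[s2] += 1
            r_int = BETA / np.sqrt(visits[s2])   # novelty bonus decays with visits
            Q[s, a] += ALPHA * (r_ext + r_int + GAMMA * Q[s2].max() - Q[s, a])
            s = s2
            if r_ext > 0:
                break
    ```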

    Tasks: The thesis will adopt a multidisciplinary approach, drawing on insights from psychology, neuroscience, and machine learning. The tasks include:

    1. Conducting a comprehensive literature review to explore existing IMRL techniques and their underlying psychological principles. 
    2. Developing novel IMRL algorithms, incorporating elements of effectance motivation theory to enhance exploration and learning in robotic agents. 
    3. Evaluating the algorithms through simulated experiments in diverse environments and real-world trials on physical robotic platforms.

    Expected Outcome:

    • Development of novel intrinsically motivated RL algorithms inspired by effectance motivation theory.
    • Insights into the mechanisms underlying intrinsic motivation in robotic agents, elucidating their impact on learning and exploration.
    • Practical guidelines for implementing and deploying IMRL techniques in real-world robotic systems.
    • Advancement of the broader field of autonomous robotics by addressing challenges related to exploration and adaptability.

    Skills and requirements:

    • Background or strong interest in robotics and reinforcement learning
    • Proficiency in machine learning and artificial intelligence
    • Understanding of psychological theories, particularly effectance motivation theory
    • Experience with algorithm development and optimization in programming languages such as Python, C++, or MATLAB.
    • Proficiency in simulation and experimentation, both in simulated and real-world environments

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • Large Language Model-Based Sport Coaching System Using Retrieval-Augmented Generation and User Models @ HMI - Enschede, NL

    Background:
    Large language models are advanced artificial intelligence systems trained on vast amounts of text data, capable of understanding and generating human-like language. These models, such as OpenAI's GPT series, use deep learning techniques to capture linguistic patterns and generate relevant text across a wide range of tasks and domains. Using prompt engineering, these models can be instructed to generate natural language coaching strategies that can be used by artificial agents (chatbots, social robots, virtual agents). However, the generated responses need to be personalized to the user. User models in the context of AI systems are representations of individual users' characteristics, preferences, and behaviors, which are used to personalize interactions and tailor recommendations. These models enable the system to adapt its responses or actions based on factors such as user demographics, past interactions, and stated preferences. We can use these user models to tailor the model's responses using retrieval-augmented generation. This technique combines generative models with retrieval-based methods to improve the quality and relevance of generated content. In this approach, the system retrieves relevant information or context from a pre-existing database or corpus and incorporates it into the generation process, ensuring that the generated output is grounded in real-world knowledge or context.

    Task:
    Design and implement a sports coaching (recommendation) system that utilizes large language models, retrieval-augmented generation (RAG) techniques, and user models to provide personalized coaching and guidance to trainees. The project aims to evaluate the practical application of pre-trained models in sports coaching without retraining using RAG techniques.
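
    A minimal RAG loop for this setting could look like the sketch below: rank coaching snippets against the user's question combined with a simple user model, then assemble the retrieved context into a prompt for the language model. The corpus, user-model fields, and prompt are invented for illustration only.

    ```python
    # Sketch: retrieval-augmented coaching prompt built from a toy corpus
    # and a simple user model. All data and field names are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    CORPUS = [
        "Beginners should start with two short runs per week.",
        "Interval training improves VO2max for experienced runners.",
        "Strength training twice a week reduces injury risk.",
    ]
    user_model = {"level": "beginner", "goal": "run 5 km"}

    def retrieve(query, k=2):
        """Rank corpus snippets by TF-IDF similarity to query + user model."""
        vec = TfidfVectorizer()
        enriched = f"{query} {user_model['level']} {user_model['goal']}"
        X = vec.fit_transform(CORPUS + [enriched])
        n = len(CORPUS)
        sims = cosine_similarity(X[n], X[:n]).ravel()
        return [CORPUS[i] for i in sims.argsort()[::-1][:k]]

    query = "How often should I train?"
    context = "\n".join(retrieve(query))
    prompt = (f"You are a running coach. The user is a {user_model['level']} "
              f"whose goal is to {user_model['goal']}.\n"
              f"Context:\n{context}\nQuestion: {query}")
    print(prompt)  # this prompt would then be sent to the language model
    ```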

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • A Comparative Study of Touch Feedback in Coaching Scenarios: Haptic Vest vs. Embodied Social Robot in Squatting Exercises @ HMI - Enschede, NL

    This project aims to compare the effectiveness of touch feedback provided by a haptic vest versus an embodied social robot in coaching squatting exercises. You will investigate how tactile feedback influences user performance, technique improvement, and overall user experience in physical training scenarios.

    Tasks:
    • Design an experimental setup for comparing touch feedback provided by a haptic vest and an embodied social robot during squatting exercises.
    • Research the relevant parameters to be measured, such as squatting technique, user comfort, perceived exertion, and motivation.
    • Develop a haptic vest prototype capable of providing tactile feedback corresponding to squatting movements, such as vibrations or pressure sensations.
    • Implement an embodied social robot system with sensors and actuators to provide physical guidance and feedback during squatting exercises.
    • Evaluate the effectiveness of these technologies in a user experiment.

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • Non-stationary preference learning in online human-robot interaction @ HMI - Enschede, NL

    The quest for autonomous and adaptive behavior in robotics has led to a growing interest in Preference-Based Reinforcement Learning (PBRL), which leverages human preferences to guide learning, particularly in dynamic and unpredictable environments. PBRL stands as a bridge between the advanced learning capabilities of robots and the nuanced preferences humans possess, although it faces challenges due to the non-stationary nature of human preferences. Non-stationary dueling bandits, however, offer a compelling solution by incorporating mechanisms to adapt to changing preferences over time. These adaptive algorithms enable robots to maintain high adaptability, providing a more personalized and satisfying user experience, especially in long-term interactions. Challenges in this field include efficiently sampling users' limited feedback, balancing exploration and exploitation, incorporating context-dependent preferences, handling noisy feedback, and ensuring learned preferences lead to meaningful and safe robot behavior. This innovative approach holds promise for creating robotic systems that evolve alongside dynamic human preferences, with applications spanning various domains such as assistive robots, rehabilitation robotics, social companionship, interactive learning environments, manufacturing, collaborative work, and entertainment.
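
    As a flavour of the algorithmic side, the sketch below implements a crude non-stationary preference learner: pairwise "duel" outcomes update per-behaviour scores while an exponential forgetting factor lets old evidence fade, so the learner can track preferences that drift over time. It is a toy stand-in for the dueling-bandit literature, not a specific published algorithm.

    ```python
    # Toy sketch: tracking drifting pairwise preferences with exponential
    # forgetting. A stand-in for non-stationary dueling bandits.
    import numpy as np

    rng = np.random.default_rng(0)
    n_options = 4                        # e.g. four candidate robot behaviours
    wins = np.zeros((n_options, n_options))
    plays = np.zeros((n_options, n_options))
    GAMMA = 0.98                         # forgetting factor: old duels fade out

    def choose_pair():
        """Pit the currently best-scoring option against a random challenger."""
        scores = wins.sum(1) / np.maximum(plays.sum(1), 1)
        best = int(scores.argmax())
        challenger = rng.choice([i for i in range(n_options) if i != best])
        return best, challenger

    for t in range(500):
        preferred = 0 if t < 250 else 3  # ground truth drifts halfway through
        a, b = choose_pair()
        winner = a if (a == preferred or (b != preferred and rng.random() < 0.5)) else b
        loser = b if winner == a else a
        wins *= GAMMA; plays *= GAMMA    # decay all past evidence
        wins[winner, loser] += 1
        plays[winner, loser] += 1; plays[loser, winner] += 1

    print(wins.sum(1) / np.maximum(plays.sum(1), 1))  # option 3 should now lead
    ```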

    The tasks involve:
    (1) familiarizing oneself with literature on preference-based reinforcement learning or adaptation algorithms in non-stationary situations,
    (2) reviewing literature on adaptive robot behavior in human-robot interaction scenarios and selecting a preferred use case,
    (3) developing a human-robot interaction use case where adapting to user preferences is crucial and can vary over time,
    (4) selecting a suitable online preference learning algorithm for the chosen use case and implementing it,
    (5) conducting a user study to evaluate the effectiveness of adapting to changing preferences and user experience, and finally,
    (6) analyzing the data from the study and presenting the results in a report.

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • Lifestyle Data Futures @ HMI - Enschede, NL

    Non-communicable diseases (NCDs), also known as chronic diseases, are noninfectious or non-transmissible diseases, such as cardiovascular diseases, diabetes, cancers, or chronic respiratory diseases [1]. These are usually associated with a person's lifestyle or behavior (for example, poor diet, physical inactivity, or smoking). According to the World Health Organization, non-communicable diseases (NCDs) are the leading cause of death and disability in the world, causing 41 million deaths every year [2,3]. This is equivalent to 74% of all deaths globally [3]. Various risk factors contribute to NCDs. These include modifiable behavioral risk factors such as physical inactivity, unhealthy diet, and alcohol or tobacco use; metabolic risk factors such as high blood pressure, obesity, high blood glucose levels, and high levels of fat in the blood; and environmental risk factors such as air pollution [2-6]. NCDs can be prevented, or the risk can be reduced, by managing these risk factors. Communication of the risk factors and stimulating a healthy lifestyle and behavior change are vital for achieving this. Current means of communicating these risk factors and lifestyle-change recommendations are mainly manual or predominantly app-based (either mobile or desktop screen-based apps). While app-based systems are easy to create and deploy, existing research has shown limitations such as display blindness, attention overload through notifications, low recall of content, and a lack of social and contextual situated information [7].

    Tangible user interfaces (TUIs), Data Physicalisations (physical representations of data) [8] and embedded data representations [9] offer a unique opportunity to address this issue in a different way. This project aims to explore the potential of tangible user interfaces and data physicalizations/ sensifications for:

    • communication of data associated with modifiable lifestyle risk factors (e.g. physical activity data, blood sugar level, etc.)
    • designing interventions for stimulating lifestyle/ behavior change toward managing risk factors associated with NCDs
    • data experiences embedded in everyday spaces; we are particularly interested in making data and interventions ambient

    You will explore various physical, tangible means for communicating risk factors and potential interaction strategies and design future lifestyle data experiences.

    Lifestyle Data Futures offers a diverse set of student assignments (MSc thesis, Capita Selecta) and can take various forms (e.g. developing interactive systems; exploring how edge devices, sensing and actuation can be used to realize interventions; empirical studies to evaluate interventions; participatory design projects to design potential interventions; etc.).

    This thesis calls for students from diverse backgrounds, such as Smart Technology, Electrical Engineering, Human-Computer Interaction, Interaction Technology, Embedded Systems, Ubiquitous Computing, or Biosignals & Systems.

    Would you like to know more? Contact: Champika Ranasinghe, c.m.eparanasinghe@utwente.nl

    References:

    [1].  United Nations Children's Fund (UNICEF) (April 2021). Non-communicable diseases. [Online]. Available at: https://data.unicef.org/topic/child-health/noncommunicable-diseases/, last accessed on 25-11-2023.

    [2].  Pan American Health Organization, Regional Office for the Americas of the World Health Organization (n.d.). Noncommunicable Diseases. [Online]. Available at: https://www.paho.org/en/topics/noncommunicable-diseases, last accessed on 25-11-2023.

    [3].  World Health Organization, (Sep, 2023). Noncommunicable Diseases. [Online]. Available at: https://www.who.int/news-room/fact-sheets/detail/noncommunicable-diseases, last accessed on 25-11-2023.

    [4].  Al-Maskari, F., United Nations (n.d.). Lifestyle Diseases: An Economic Burden on the Health Services. [Online]. Available at: https://www.un.org/en/chronicle/article/lifestyle-diseases-economic-burden-health-services, last accessed on 25-11-2023.

    [5].  Centers for Disease Control and Prevention, U.S. Department of Health & Human Services (Oct 2020). Lifestyle Risk Factors. [Online]. Available at: https://www.cdc.gov/nceh/tracking/topics/LifestyleRiskFactors.htm#anchor_1606671360465, last accessed on 25-11-2023.

    [6].  Budreviciute, A., Damiati, S., Sabir, D. K., Onder, K., Schuller-Goetzburg, P., Plakys, G., Katileviciute, A., Khoja, S., & Kodzius, R. (2020). Management and prevention strategies for non-communicable diseases (NCDs) and their risk factors. Frontiers in Public Health, 8, 788.

    [7].  Brombacher, H., Houben, S., & Vos, S. (2023). Tangible interventions for office work well-being: approaches, classification, and design considerations. Behaviour & Information Technology, 1-25.

    [8].  Jansen, Y., Dragicevic, P., Isenberg, P., Alexander, J., Karnik, A., Kildal, J., ... & Hornbæk, K. (2015, April). Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3227-3236).

    [9].  Willett, W., Jansen, Y., & Dragicevic, P. (2016). Embedded data representations. IEEE Transactions on Visualization and Computer Graphics, 23(1), 461-470.

  • Virtual Reality for Motor Learning in Sports @ HMI - Enschede, NL

    Virtual Reality holds great promise for skill acquisition and motor learning in sports. Virtual Reality is particularly well suited for creating systematically varied, controlled environments in which the athlete can safely practice complex motor movements. Moreover, VR allows for the provision of rich (visual) augmented feedback that goes beyond what is possible in real life, flexibly blending qualities from both the digital and the physical world. Finally, modern-day Virtual Reality setups are powered by accurate trackers that allow for the automatic measurement of motor behavior, recording objective measurements of performance.

    Despite these positive qualities, little is known about the factors that promote or thwart the transfer of motor learning from the Virtual Reality training context to the real-world performance context. This is especially true for situations wherein the nature of the VR feedback is different from the feedback that is typically given in sports practice.

    In sports, the demands on transfer are high as motor behavior in sports needs to be swift and precise to be successful. In this graduation project, you will design and work with Virtual Environments that enable you to investigate the elements that impact motor learning, either positively or negatively.

    If you are interested, reach out to Dees Postma: d.b.w.postma@utwente.nl

  • The Design and Development of Interactive Technology to Benefit the Diagnostic Process for Autism @ HMI - Enschede, NL

    Developmentally diverse individuals, including people with autism, exhibit movement coordination patterns (both individual and during interactions with others) that are different from their typically developing peers. Autistic individuals often show lower levels of coordination, atypical gait patterns, and are less physically active.

    As autism is increasingly diagnosed at adult ages, and females are more likely to be un-, mis- or late diagnosed, objective motor assessments will aid autism diagnosis. As such, we will design and develop interactive technologies that systematically measure diagnostically relevant motor behaviours. We will design the technology to be easy to use and scalable. We will use state-of-the-art motion capture systems (e.g., XSens, OpenPose, MediaPipe) to sense motor behaviour and use persuasive technologies, such as interactive projections, tangibles, and wearables, to elicit diagnostically relevant motor behaviours.

    We will start with a systematic literature review on the movement-based markers of autism. Then, we will use these markers to design meaningful and ecologically valid activities that embody the movement markers for autism.
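
    As an impression of the sensing side, the sketch below extracts per-frame joint positions from a video with MediaPipe Pose; the gait and coordination measures computed from these landmarks would form the actual research contribution. The video filename is a placeholder.

    ```python
    # Sketch: extracting pose landmarks from a video with MediaPipe Pose.
    # The filename is a placeholder; the downstream coordination measures
    # are the actual research work.
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    landmarks_per_frame = []

    cap = cv2.VideoCapture("participant_walk.mp4")  # placeholder video
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # 33 landmarks with normalised x, y, z coordinates
                landmarks_per_frame.append(
                    [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark])
    cap.release()
    print(f"Tracked {len(landmarks_per_frame)} frames")
    ```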

    If you are interested, reach out to Dees Postma: d.b.w.postma@utwente.nl

  • Designing Post-Run Reflective Experiences To Support Learning of Movement Through a Dashboard Interface @ HMI - Enschede, NL

    Introduction:
    In our previous studies, we discovered that runners appreciated the availability of drone-captured run videos as a means to facilitate self-reflection. Reflecting on sports activities through the use of video has proven to be a highly effective method for athletes to enhance their performance across various sports [1-4]. With the recent technological advancements in drone technology and video processing, we now have the capability to capture videos of runners and extract essential running parameters from these videos without encumbering the athletes. However, they also expressed a desire for these videos to offer insights into their performance, including the identification of moments of error during their runs and guidance on how to improve. To address this, they proposed the integration of such information into a dashboard interface.

    Traditionally, dashboards are GUIs designed to present critical information within extensive datasets. Recent developments in big data handling have transformed dashboards into interfaces composed of various charts and numerical representations. This design choice also extends to applications related to running, where the focus is on presenting extensive time series data through simplified graphs and visuals [5]. In the realm of sports data, these numerical insights often hold significant meaning, yet presenting this information in a manner that effectively conveys its significance remains an underexplored avenue, offering a unique opportunity for research. This serves as the foundation of the project—an exploration of an “unconventional” dashboard, departing from prevailing trends. The aim is to create a dashboard that effectively communicates the underlying significance of running data through videos, enriching the post-run reflection experience for runners with a focus on promoting motor skill learning.

    Although it does not utilize a dashboard interface, a compelling source of inspiration for students considering this project is 'Illuminate' [6]. This installation serves the purpose of aiding individuals in visualizing their movements through real-time visuals, rendering visible what remains 'invisible' to the observer. The overarching goal was to offer users a visceral experience, one that deeply resonates with their inner senses, to push the boundaries of their sensory perception. Similarly, the objective of this project aligns with this concept, aiming to craft an experience that evokes profound emotions, ultimately assisting runners in comprehending and enhancing their runs. In pursuit of these objectives, students will be expected to apply somaesthetic design methods and draw from various theories pertaining to motor learning.

    Somaesthetic design compels designers to consider the bodily experiences, sensations, and feelings of the relevant users while developing their concepts [7]. To gain context and inspiration for utilizing somaesthetic design, students can examine existing work in this field [8]. Additionally, students can tap into various pedagogical approaches, such as self-modelling, as well as theoretical perspectives on motor learning to inform their design decisions [9]. Two prevailing theories in motor learning are the representational and anti-representational theories. The representational theory posits that motor learning is internalized, akin to forming a mental model, where athletes break down their movements into smaller components and train them individually. Conversely, the anti-representational theory suggests that athletes contemplate all the possible actions they can undertake to support their motor learning. Running typically unfolds as a fast-paced experience, often reaching a point where everything around the runner seems to slow down. Incorporating somaesthetic principles into the design process can help translate the runners' intricate feelings and sensations into meaningful representations, thereby offering valuable insights for the design process. Runners have also noted that when they are in the midst of a run, they often enter a meditative state, wherein their mental focus turns inward. Designers can leverage this aspect of the running experience through somaesthetic methods, enhancing the overall design process.

    Goal:
    The aim of this project is to develop an interactive system that enables runners to gain insights into their movement through visual presentations on a dashboard. This system will be designed with the primary goal of enhancing the running experience and improving performance, supported by drone-captured videos. As the central focus here is on self-learning through visual aids, runners should have the ability to discern the visuals effectively. To drive this interactive experience, running data processed using drone videos will be used to help runners become more attuned to the actions they need to perform (education of intention) or to provide information that guides their running behaviour (education of attention).

    Student Tasks: (Suggestions in no particular order)

    1) Literature Review: Conduct a literature review on topics related to theories of motor learning, self-modelling, self-reflection through videos, soma design, key performance indicators of running, and design methods.

    2) User Studies: Utilize the gained knowledge to empathize with runners. Gain a comprehensive understanding of their learning process, the nuances of adapting their running technique, their sensory experiences during this process, and other relevant information.

    3) Lo-Fi Prototype: Employ appropriate design techniques to create a lo-fi prototype, drawing on insights obtained during the literature review and user studies.

    4) Develop Hi-Fi Prototype: Develop an interactive experience through a dashboard medium, building upon the foundations laid by the low-fidelity prototyping stage.

    5) Assess User Experiences with the Prototype: Identify relevant metrics and formulate questions related to self-reflection, self-modelling, and motor learning. Evaluate runners' experiences while utilizing the dashboard, gathering valuable insights.

    References/Reading Materials:

    1. Rhoads, Michael C., et al. "A meta-analysis of visual feedback for motor learning." Athletic insight 6.1 (2014): 17.

    2. Rymal, Amanda M., Rose Martini, and Diane M. Ste-Marie. "Self-regulatory processes employed during self-modeling: A qualitative analysis." The Sport Psychologist 24.1 (2010): 1-15.

    3. Groom, Ryan, and Lee Nelson. "The Application of Video-Based Performance Analysis in the Coaching Process 1: The coach supporting athlete learning." Routledge handbook of sports coaching. Routledge, 2013. 96-107.

    4. Trudel, P., and W. Gilbert. "Learning to coach through experience: Reflection in model youth sport coaches." Journal of teaching in physical education 21 (2001): 16-34.

    5. Bumblauskas, Daniel, et al. "Big data analytics: transforming data to action." Business Process Management Journal 23.3 (2017): 703-720.

    6. https://www.media.mit.edu/projects/illuminate/overview/

    7. https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/somaesthetics

    8. Hendriks, Sjoerd, et al. "Azalea: Co-experience in remote dialog through diminished reality and somaesthetic interaction design." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.

    9. Postma, Dees BW, et al. "A Design Space of Sports Interaction Technology." Foundations and Trends® in Human–Computer Interaction 15.2-3 (2022): 132-316.

    10. Soon-to-be-published manuscripts will be made available later.

    Supervisors:
    Dennis Reidsma: d.reidsma@utwente.nl
    Aswin Balasubramaniam: a.balasubramaniam@utwente.nl

  • Human-Robot Interaction: graduation topics related to human-robot interaction @ HMI - Enschede, NL

    Children’s Trust in Robot Speech
    Children struggle to communicate their information need when searching for information. The available systems (Google’s search engine, or voice assistants like Siri and Alexa) do not support the child in the process, but rather take one statement and provide search results in return. Chatters is creating a conversational agent that allows for multiple turns in the interaction to be able to find more relevant information. However, using a robot to search for information on the internet could also be hazardous, since children tend to trust robots and we do not know how this trust influences children’s perception of the provided information. Therefore, we aim to monitor the trust real-time during the interaction. When the robot notices that the trust is high, it should try to evoke a critical attitude in the child.

    Contact person:
    Khiet Truong
    k.p.truong@utwente.nl

    Spoken Interaction with Conversational Agents and Robots
    Speech technology for conversational agents and robots has taken flight (e.g., Siri, Alexa), but we are not quite there yet. While there are technical challenges to address (e.g., how can an agent display listening behavior such as backchannels "uh-uhm", how can we recognize a user's stance/attitude/intent, how can we express intent without using words, how can an agent build rapport with a user), there are also more human-centered questions, such as: how should such a spoken conversational interaction be designed, how do people actually talk to an agent or robot, and what effect does a certain agent/robot behavior (e.g., robot voice, appearance) have on a child's perception and behavior in a specific context?

    These are some examples of research questions we are interested in. Much more is possible.

    Contact person:
    Khiet Truong
    k.p.truong@utwente.nl

    Teleoperation of Robots
    Teleoperated robots enable humans to be remotely present somewhere else in the world to perform (maintenance) tasks or to be socially present. This has many applications and benefits, as operators can apply their expertise without the need to travel to a possibly remote or dangerous environment. If there are time delays in teleoperated systems, they become very difficult to use. In this project, you will work on making the time delays in these systems less noticeable through machine learning and/or through modelling the behaviour of the operator. This might, for example, involve computer vision/image segmentation, but also user studies to determine how humans manipulate objects and how we visually orient ourselves in VR/remote environments.
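
    One simple baseline for making a known delay less noticeable is to extrapolate the operator's motion over that delay, as in the constant-velocity predictor sketched below; the learned models of operator behaviour targeted by this project would replace it. All numbers are illustrative.

    ```python
    # Sketch: constant-velocity extrapolation of the operator's hand pose
    # to mask a known round-trip delay. A baseline only; the project would
    # study learned models of operator behaviour instead.
    import numpy as np

    DELAY_S = 0.20   # assumed round-trip delay in seconds
    DT = 0.02        # sampling interval of the pose tracker

    def predict_pose(history):
        """history: (n, 3) array of recent hand positions, oldest first.
        Extrapolates the most recent velocity over the delay."""
        velocity = (history[-1] - history[-2]) / DT
        return history[-1] + velocity * DELAY_S

    poses = np.array([[0.00, 0.0, 0.0],
                      [0.01, 0.0, 0.0]])      # moving along x at 0.5 m/s
    print(predict_pose(poses))                # -> [0.11 0. 0.], 0.1 m ahead
    ```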

    Contact person:
    Luc Schoot Uiterkamp
    l.schootuiterkamp@utwente.nl

    People Predicting Robot Behaviour
    Predictability in human-robot interactions is essential for humans to understand, for coordinating actions with robots, and improving task performance, safety, and trust in the robot. Moreover, it influences our social perception of the robot. When the interaction pattern is designed beforehand and is fixed, we can design it to be highly predictable. For instance, by having a few types of robot actions without variations. However, when the interaction pattern is not predetermined, unpredictable robot actions are more likely to happen. For instance, when robot motion trajectories are generated through kinematics. When people collaborating with the robot are unable to predict the robot's behaviour, they might no longer trust the robot or safely coordinate their actions with it. Example projects on this topic include developing novel strategies or behaviours to mitigate unpredicted robot behaviour, developing a model that a robot can use to predict what the person interacting with the robot is predicting the robot will do, investigating the effects of unpredictability on human-robot interaction, or how updating and changing robot behaviour (making it more unpredictable) influences human-robot collaboration.

    Contact person:
    Bob Schadenberg
    b.r.schadenberg@utwente.nl

    Human-robot communication for hospital robots
    Human-robot interaction is a dynamic and diverse interdisciplinary field that brings together the knowledge and expertise of various disciplines, including psychology, engineering, design, sociology, and philosophy, to improve the social interactions between robots and humans. If you are interested in researching novel approaches to enhance the interaction between robots and humans in a healthcare environment, investigating the societal impacts of robotic technologies, or delving into innovative methods for designing robotic communication (using semantic free utterances, motions, or the effect of appearance), you can reach out to:

    Contact person:
    Hideki Garcia Goo
    h.garciagoo@utwente.nl

  • Measuring conversation quality in customer-agent interactions @ HMI - Enschede, NL

    Together with Merlinq (https://www.merlinq.net/), we are investigating how to measure conversation quality in customer-agent interactions. What makes a conversation a good conversation?

    Previous research has found that interlocutors find a conversation enjoyable or good when they are "in sync", when there is rapport, and when interlocutors show empathy. Our goal is to make these indicators measurable in customer-agent interactions by looking at verbal and nonverbal aspects of speech (i.e., word usage, prosody), as well as by investigating the potential of physiological measures. Monitoring conversation quality can be useful for contact centers that aim to maximize customer satisfaction, as well as for training purposes. When conversation quality can be measured in real time, an intelligent agent can advise the customer service agent to take certain actions to "be more in sync" with the customer.

    We are looking for students who are interested in making conversation quality measurable. Possible assignments are diverse and can involve data collection and/or feature engineering of vocal and physiological parameters with the aim of measuring conversation quality.
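
    As one concrete starting point, a crude "in sync" proxy could correlate the pitch contours of the two speakers, as sketched below with librosa. The filenames are placeholders, and real conversation-quality features would be considerably richer than this.

    ```python
    # Sketch: a crude prosodic-synchrony proxy correlating the pitch
    # contours of two speakers. Filenames are placeholders.
    import numpy as np
    import librosa

    def pitch_contour(path):
        y, sr = librosa.load(path, sr=16000)
        f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
        return np.nan_to_num(f0)  # unvoiced frames become 0

    agent = pitch_contour("agent_channel.wav")
    customer = pitch_contour("customer_channel.wav")
    n = min(len(agent), len(customer))
    sync = np.corrcoef(agent[:n], customer[:n])[0, 1]
    print(f"Pitch-contour correlation (naive synchrony proxy): {sync:.2f}")
    ```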

    More information can also be found here (sorry, in Dutch):  

    https://www.conversatiekwaliteit.nl/

    Contact:
    Dr. Khiet Truong k.p.truong@utwente.nl,
    Dr. Arjan van Hessen a.j.vanhessen@utwente.nl

  • Whirrs and Words: Discovering Robot's Voices @ HMI - Enschede, NL

    What do robots currently sound like? Despite the existence of robot databases containing images and videos of commercial and research robots (https://www.abotdatabase.info/collection, https://robotsguide.com), none of these include the sounds that these robots use to communicate. Having this information can further the design of robot sounds. Your task will be to collect sounds and other characteristics of existing robots in research, media, and industry. Using this collection, you will look at possible relations between a robot's voice and some of its other characteristics to inform robot voice design.

    If you are interested in this topic feel free to contact Hideki Garcia Goo (h.garciagoo@utwente.nl) or Khiet Truong (k.p.truong@utwente.nl).

  • A rich research environment for multi-person rowing in VR @ HMI - Enschede, NL

    Interaction Technology has great potential for sports training and performance. For example, the Wavelight technology supports runners in races; VR can be used to train in rowing or soccer; amateur runners often make use of smartwatches and sports trackers; and systems such as the FitLight are used for reaction training in various sports. In this project we work with the combination of Virtual Reality, rowing on ergometers, and various sensors to measure the rower's actions and performance.

    Context

    Recent studies in sports HCI illustrate that athletes and coaches use, and are open to further use of, virtual reality (VR) in training. The advantages of using VR in sports training can be immense, especially in skill development and coaching, as it can simulate real-life environments while being perfectly adaptable and systematically configurable. The proposed assignment is part of the "Rowing Reimagined" project, jointly carried out by the UT and the VU, in which a research platform is being developed for multi-person rowing in VR using ergometers. On the one hand, this platform aims to offer a diversity of VR environments, tasks, and feedback for novel forms of training. On the other hand, the versatile setup can be used for systematic fundamental research into the conditions and determinants of performance in rowing.

    For example, by introducing an opponent boat into the virtual reality and systematically varying the parameters of its "overtaking behavior", we can fundamentally research the effects of stressors on the rowing performance of the team, or we can use the same opponent boat models to offer novel settings in which to train athletes to cope with such stressors. An unlimited number of other variables could similarly be explored; a more complete list can be found at the end of this assignment proposal.

    =This assignment=

    The proposed assignment focuses on realising the richest possible research environment for multi-person rowing in VR, by exploring, developing, and pilot testing multiple features of the platform (such as the above-mentioned opponent boat) that can contribute to novel training technology or to fundamental research. This requires an iterative approach. For example, it is not enough to simply state that "an opponent boat model must be added to enable research into the role of stressors in rowing performance". What exactly should be the speed of overtaking? What gives the most realistically stressing effect on the athlete? At what distance from the athlete's boat does the stressor effect take place? Is this individually determined? Do we need a calibration phase to personalise the overtaking behaviour of the opponent boat to the current athlete using the system?

    Such questions must be explored on the basis of literature and expert input, and a specific design of the platform feature should be developed iteratively to see how it works out in practice. In the assignment, multiple platform features will be addressed one by one, to make each one fit enough to contribute to future science. After a brief pilot study to show the potential of the new feature, the next feature will be taken up and worked out, based on a grounded idea of which feature will be useful for relevant research with the platform.

    =The current platform=

    The assignment starts with our existing platform which is still being extended. The current platform consists of a technological setup with two ergometers, a social VR setup in which two rowers can be virtually present in a single boat, a few initial measurement components to gather information about the power / effort of the rowers, and some initial virtual elements in the environment. Each of these components may be enhanced and extended as part of the assignment, or completely new components can be added.

    Inventory of some facets that may be chosen as part of this assignment

    - The "experience of realism" of the resulting rowing activity. Obviously, rowing on an ergometer in VR is not really the same as rowing in a boat on water. This "unreality" might have impact on the athletes’ transition of the improved skills from virtual to reality. What features would intuitively be considered important for the experience of realism? What do rowers feel is important for the experience of realism? When is a certain feature considered minimally adequate regarding realism? This may consider environmental factors, the movement of the boat, the representation of the other athlete in the setup, the sound, etcetera etcetera. 

    - Rich opponent behaviours that are realistic and meaningful, and that potentially impact the objective or subjective rowing activity. 

    - Measuring the sense of stress and tension during the rowing activity through a combination of objective (biophysiological?) sensors and in-action momentary self assessment. 

    - Measuring power, effort, and exertion (objectively or subjectively; posthoc or in-action).

    - Measuring and influencing sociality and perceived togetherness in rowing. In the future we would like to experiment with synchrony and the perception of togetherness in rowing. But how can we measure objective and subjective togetherness, e.g. in the form of social unity or flow, or objective measures of synchronisation? What features may contribute to, or detract from, these measures?

    - Multiple feedback mechanisms about the joint rowing and their possible parameters. Can be in many modalities: sound, haptic, etcetera. 

    - And many more.

    Contact:
    Dees Postma (d.b.w.postma@utwente.nl)
    Dennis Reidsma (d.reidsma@utwente.nl)
    Armağan Karahanoğlu (a.karahanoglu@utwente.nl)
    Robby van Delden (r.w.vandelden@utwente.nl)

  • Beep beep boop: Semantic-Free Utterances for Social Agents @ HMI - Enschede, NL

    In the field of human-robot interaction, the development of semantic-free utterances (SFU) has been gathering attention. R2D2 from Star Wars is a good example of an agent that uses SFU to communicate; another example is how Sims communicate in the video game of the same name. Some advantages of using SFU over natural language are: the expected intelligence of the system decreases to a more realistic level, they could be more widely understood (not bound to any language), and the user becomes the intelligent other in the interaction (placing less weight on the robot's ability to process information). There are several topics that we can address in this area, but some research directions that we are interested in are:

    - The creation of SFU together with performers (improv, opera, theatre groups, DJs at the UT). How do the results compare to existing SFU? How would the development of non-semantic speech change depending on who is designing it? How would an opera singer vocalize the emotions they need to convey? How would it differ from how an improv actor does it?

    - While the usability of non-semantic speech in multicultural spaces is mentioned as an advantage, few studies have tested this with participants of different cultural backgrounds. How does culture play a part in how well we understand SFU? Are there any differences in how people of different cultural backgrounds design SFU? Does exposure to Western culture influence how well we understand SFU (created in the Global North)?

    - The design of gender-neutral robots has also gained more traction, but gender-neutral voices that are perceived as such are difficult to design. Are SFU more easily perceived as gender neutral in comparison with natural language? Which type of SFU is more fit to convey gender neutrality? Will the addition of a robot embodiment change this perception?

    If you are interested in one of these (or similar) topics feel free to contact Hideki Garcia Goo (h.garciagoo@utwente.nl) or Khiet Truong (k.p.truong@utwente.nl).

  • Accessing large-scale knowledge graphs via conversations in virtual environments @ HMI - Enschede, NL

    In the past decades, more and more Cultural Heritage institutions, such as libraries, museums, galleries and archives, have launched large-scale digitisation processes that result in massive digital collections. This not only ensures the long-term preservation of cultural artefacts in their digital form, but also allows instant online access to resources that would otherwise require physical presence, and fosters the development of applications like virtual exhibitions and online museums. By embracing Linked Open Data (LOD) principles and Knowledge Graph (KG) technologies, rich legacy knowledge in these CH collections has been transformed into a form that is shareable, extensible and easily re-usable. Relevant entities (e.g. people, places, events), their attributes and their relationships are formally represented using international standards, resulting in knowledge graphs that both humans and machines are able to understand and reason about.

    However, exploring large-scale knowledge graphs is not trivial. Traditional keyword-based search is not ideal for exploring such graph-structured data. A museum visitor may start their exploration from a certain creative work and move on to the different types of entities it is associated with, such as its creator, the place where it was created, or relevant events that happened at the time it was created. The visitor can follow the links in the knowledge graph to discover what is interesting to them. Thanks to the LOD principles, external knowledge graphs may also be accessed along the way. What is challenging is how to make this huge amount of information accessible to visitors in an appealing and intuitive manner, so that the interaction between visitors and the knowledge graphs becomes meaningful and enjoyable. Such interaction needs to take into account the visitors' cognitive and information-processing capabilities as well as their personal interests and cultural backgrounds.
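
    To give a feel for what following links in a knowledge graph means in practice, the sketch below asks the public Wikidata SPARQL endpoint for works by a given painter. In the envisioned system, a museum's own LOD endpoint would take Wikidata's place, and the conversational layer would generate such queries from user utterances.

    ```python
    # Sketch: one hop through a public knowledge graph (Wikidata) with
    # SPARQL. A museum's own LOD endpoint would take its place.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    sparql.setQuery("""
    SELECT ?work ?workLabel WHERE {
      ?work wdt:P170 wd:Q5598 .          # creator = Rembrandt
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 5
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["workLabel"]["value"])
    ```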

    At HMI, we are investigating how to access large-scale KGs via natural language conversations in virtual environments and welcome students to work on different aspects of this research:

    • Developing a KG-based conversational museum guide that models users' interests, introduces art objects, answers questions, provides recommendations, etc.
    • User interest modelling
    • Conversational KG-based question answering and recommendation
    • Natural language generation with dynamic subgraph extraction
    • Mixed-initiative dialogue management with KGs
    • Integrating a conversational agent in a virtual reality environment
    • Multimodal input for user interest detection (including the user's utterances, eye gaze, speech emotion, gestures, etc.) and multimodal responses in virtual reality (including text, voice, highlights, etc.)
    • Effective and ethical design in collaboration with Humanities researchers and/or Cultural Heritage institutions
    • Effective or inappropriate: using visitors' (cultural) background/profile to generate personalised narratives
    • Methods for visitor-agent interaction that allow for the collaborative creation of narratives about cultural history; this could include research on community-driven artifact labelling/label correction, object selection, etc.
    • Research on increasing the affective impact of interactive virtual environments (for cultural heritage), with the goal of
      • increasing visitors' a) knowledge, b) sense of responsibility, c) sustained interest, or d) other positive outcomes on the topic of inclusivity
      • increasing the interest and participation of groups of people that may not typically feel attracted to or included in museum exhibitions, e.g. children

    Contact: Shenghui Wang (shenghui.wang@utwente.nl)

  • Can I touch you online? - Investigating the interactive touch experience of the “Touch my Touch” art installation @ HMI - Enschede, NL

    We touch the screen of our cell phone more often than we touch our friends. We stroke and 'swipe' our screen in search of a loved one. Meanwhile, in public spaces, we touch each other, and watch each other being touched, less and less. Pandemic regulations have only further increased this physical isolation. "Touch my Touch" or TouchMyTouch.net, designed by artist duo Lancel and Maat, is a critical new composition of face recognition, merging and streaming technologies for a poetic encounter through touching and being touched. TouchMyTouch.net is a streaming platform for online touch, combined with an interactive installation built on that platform. The interactive installation will be at the UT for a number of weeks during the second semester of 2022. You can try the online platform with a partner here: https://upprojects.com/projects/touch-my-touch

    This master's assignment relates to the physical "Touch my Touch" installation. You will define a research question in relation to the touch experience evoked by the art installation. For more information, send an email to Judith Weda (j.weda@utwente.nl).

  • Coaching for breathing with haptic stimulation @ HMI - Enschede, NL

    Breathing makes a fundamental contribution to well-being, both physiologically and psychologically. Accordingly, a number of well-being practices build on breathing techniques, such as yoga, Tai Chi, meditation, and the Wim Hof method. There are also a number of technological products available that offer support for breathing, such as the Spire or smartwatch apps.

    With this master's project we want to explore the possibilities of supporting breathing by haptic stimulation and feedback. Stimulation can be used to teach breathing patterns; feedback can signal whether breathing is or is not in the intended range. For this work we will focus on vibration motors for haptic stimulation. Relevant questions to answer here are: where should vibration motors be positioned, and what stimulation and feedback patterns are comfortable and effective?
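
    As a starting point for thinking about such patterns, the sketch below generates a simple breathing-paced vibration envelope in Python. It is purely illustrative: the 4-6 second inhale/exhale timing, the 10 Hz update rate, and the set_vibration stub are assumptions to be replaced by the actual pacing protocol and motor driver.

        import time

        def breathing_envelope(t, inhale=4.0, exhale=6.0):
            """Vibration intensity in [0, 1] for a paced breathing cycle:
            ramps up during the inhale phase, down during the exhale."""
            phase = t % (inhale + exhale)
            if phase < inhale:
                return phase / inhale                    # ramp up: inhale
            return 1.0 - (phase - inhale) / exhale       # ramp down: exhale

        def set_vibration(intensity):
            """Hypothetical driver stub: on real hardware, map the intensity
            to a PWM duty cycle on the vibration motor."""
            print(f"intensity: {intensity:.2f}")

        start = time.time()
        while time.time() - start < 30:                  # one 30 s session
            set_vibration(breathing_envelope(time.time() - start))
            time.sleep(0.1)                              # 10 Hz update rate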

    Contact: Angelika Mader, a.h.mader@utwente.nl

    Supervisors: Angelika Mader, Geke Ludden

  • Mediated social touch through a physical avatar and a wearable @ HMI - Enschede, NL

    The aim of this master's project is to develop a fully functioning social haptic device divided into two main components: a haptic coating for a humanoid robot serving as an avatar, and a wearable haptic vest/suit. The project's workload is divided into four parts: (1) the design and development of a social touch coat for the robot body, (2) the selection or development of haptic sensors to be distributed over the avatar's upper body, (3) designing and developing a pneumatic haptic vest that can be used for both perception experiments and mediated touch, and (4) testing and experimenting with a TacticSuit haptic vest (from BHaptics [vibration]). You will find more information on each part below. We intend to have one student working on each part. We support and encourage collaboration between students within the project.

    Xprize
    The ANA Avatar Xprize is an international competition in which different teams design and develop their own fully functioning avatar and test it in different scenarios, from maintenance tasks to human-human remote interactions. HMI participates in team i-Botics, which qualified for the semifinals in September 2021. During this competition, we develop our avatar's ability to allow advanced social remote interactions between the remote controller of the avatar and the recipient at the avatar location. To that end, this master's project has been designed to focus on one important aspect of social interaction: touch.

    You can find more information on https://avatar.xprize.org/prizes/avatar

    WEAFING
    The WEAFING project is an EU Horizon 2020 project that aims to develop a wearable for social touch made out of electroactive textile. Electroactive textile is a textile woven or knitted with electroactive yarn. This yarn contracts or expands when an electrical current is applied. Depending on the morphology of the textile, we can imagine different types of haptic sensations on the skin. The current interest is the pressure sensation that the garment could generate.

    At the UT we do perception research, which is key to defining the specifications of the electroactive textile. Since the textile is still in development, we use substitute materials to explore and find the perception parameters of pressure applied to different parts of the body in psychophysical studies.

    You can find more information on weafing.eu

    Part 1 : Designing a social touch skin for a humanoid robot avatar

    This part of the project concerns the design, production and testing of a humanoid robot avatar's "skin" to be used during social touch interaction between the avatar (piloted by a controller) and a recipient who is actually touching the robot. For this part, we expect the student to carry out a study of different materials and sensors that could be used for the coating, to design the required product, and to test it with a physical robot.

    There are no strict limitations on the selection of the material. However, some measurement criteria will be clearly emphasized during the project. We will offer help and collaboration in the search for an adequate material. For the sensors, the selected method should cover the whole upper body of the robot and should be as unintrusive as possible. There may be multiple ways to approach this task. We expect the student to find a viable and efficient solution given the constraints that will be provided during the project, such as the weight, shape or size of the sensors. Some resources are also available at the HMI department, such as the CapSense vest, as a starting point for the investigation.

    We are looking for master's students in Interaction Technology and/or Embedded Systems. Help will be provided with sewing and designing the "skin". We will consider experience with sewing, haptic interaction and sensor data analysis a plus.

    For more information, please contact Camille Sallaberry, c.sallaberry@utwente.nl

    Part 2 : Developing a pneumatic haptic vest for human-human remote touch interaction and psychophysical experiments

    For this graduation assignment we are looking for a student to create a haptic (touch) vest with pneumatic actuators. The goal is to use the vest for both psychophysical experiments and mediated touch. The assignment is a collaboration between two projects, namely the WEAFING project (weafing.eu) and the UT entry for the X-prize.

    In the WEAFING project we are developing a textile wearable that can give haptic feedback. In order to do this, we need to carry out perception experiments for pressure on the skin, using psychophysical methods to find the parameters of touch perception. A pneumatic vest can help us with these experiments.

    Following these experiments, we can use the vest for mediated touch applications, including mediated social touch and other forms of mediated touch. Touch can for example be mediated through an avatar representing you at an alternative location, which is the use case of the X-prize project.

    There are multiple ways to approach the assignment and multiple actuator options, such as McKibben muscles or silicone pockets. For psychophysical experiments it is key to measure the pressure in the actuator and to control that pressure precisely. The vest should fit both men and women and a range of body types.
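
    To illustrate what precise pressure control can look like in software, below is a minimal sketch of a PID control loop in Python. The gains, the loop rate, and the read_pressure/set_valve stubs are assumptions standing in for the real pressure sensor and proportional valve drivers, and would have to be tuned on the actual hardware.

        import time

        class PID:
            """Minimal PID controller; the gains are placeholders to tune."""
            def __init__(self, kp=1.0, ki=0.1, kd=0.05):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_error = None

            def update(self, setpoint, measured, dt):
                error = setpoint - measured
                self.integral += error * dt
                derivative = (0.0 if self.prev_error is None
                              else (error - self.prev_error) / dt)
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        def read_pressure():
            """Hypothetical stub for the actuator's pressure sensor (kPa)."""
            return 0.0

        def set_valve(command):
            """Hypothetical stub for the proportional valve driver."""
            pass

        pid = PID()
        target_kpa = 20.0                   # desired actuator pressure
        last = time.time()
        for _ in range(1000):               # ~100 s control run at 10 Hz
            time.sleep(0.1)
            now = time.time()
            set_valve(pid.update(target_kpa, read_pressure(), now - last))
            last = now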

    We are looking for master students in interaction technology or embedded systems. We will offer help with sewing and the vest design, but it is key that you have affinity with making. Experience with sewing is a plus.

    There is a body of work to build on, namely a project on pneumatic actuator control, a sleeve with McKibben muscles, and silicone actuators.

    For more information please contact Judith Weda, j.weda@utwente.nl

    Part 3 : Vibratory Suit for human-human remote touch interaction

    For this part of the project, we are looking for a student who will investigate the use and flexibility of vibration motors to reproduce different kinds of social contact/touch during social communication, in the context of an interaction between a robot avatar and a human. The student will also be expected to evaluate existing vibratory haptic suits or to develop one for remote social touch experiences.

    The experience requires that the whole upper body of the avatar can be touched, with the exception of the hands. We therefore expect the haptic suit to consist of a top with long sleeves. Some measurement criteria will also be clearly emphasized during the project.

    As a starting point, the student may evaluate the TacticSuit from BHaptics. The aim is to test the haptic vest/suit for social touch and determine its usability compared to other possible suits.

    During the project, we also encourage the student to collaborate closely with the student on the "pneumatic vest" project, as both students may have to evaluate the social touch experience of both products.

    We are looking for master's students in Interaction Technology or Embedded Systems. We will consider experience with vibration actuators and experience in social haptics a plus.

    For more information, please contact Camille Sallaberry, c.sallaberry@utwente.nl

  • Spoken Interaction with Conversational Agents and Robots @ HMI - Enschede, NL

    Speech technology for conversational agents and robots has taken flight (e.g., Siri, Alexa), but we are not quite there yet. There are technical challenges to address (e.g., how can an agent display listening behaviour such as the backchannel "uh-uhm", how can we recognize a user's stance/attitude/intent, how can we express intent without using words, how can an agent build rapport with a user), but there are also more human-centered questions: how should such a spoken conversational interaction be designed, how do people actually talk to an agent or robot, and what effect does a certain agent/robot behaviour (e.g., robot voice, appearance) have on a child's perception and behaviour in a specific context?

    These are some examples of research questions we are interested in. Much more is possible. Are you also interested? Khiet Truong and Ella Velner can tell you more.

     Contact: Khiet Truong, k.p.truong@utwente.nl

  • Automatic Laughter analysis in Human-Computer Interaction @ HMI - Enschede, NL

    Laughter analysis is currently a hot topic in Human-Computer Interaction. Computer scientists generally study how humans communicate through laughter and how this can be implemented in automatic laughter detection and automatic laughter synthesis. Such tools would be very helpful in fields like Human-Computer/Robot Interaction, where voice assistants like Alexa and Google Assistant could understand more complex natural communication by interpreting social signals such as laughter, and could generate well-timed, appropriate and realistic laughter responses. Another application is multimedia laughter retrieval: automatically extracting laughter occurrences from large amounts of video and audio data, opening the way for building large laughter datasets. As a final example, laughter detection could also be used to study group behaviour or for automatic person identification.

    However, there are several challenges in laughter research that need to be considered when aiming for automatic laughter analysis. For one, annotating laughter is a much-discussed challenge in the field; there are debates on how laughter should be segmented and labeled. Are there different kinds of laughs for different situations, and how do we label these? Do people have specific laughter profiles? What role does context play in laughter detection? What could a real-time implementation in a conversational agent look like, and for what purpose? Students can choose a more human-centered or a more technology-oriented direction.

    This makes automatic laughter analysis an interesting goal. Students are invited to explore the topic and come up with an interesting question or challenge they want to address. You will be supervised by assistant professor Khiet P. Truong, who is an expert in laughter research and social signal processing (SSP), and PhD student Michel-Pierre Jansen, whose PhD work revolves around human laughter recognition and SSP.

     Contact: Khiet Truong, k.p.truong@utwente.nl

  • Analysis of depression in speech @ HMI - Enschede, NL in cooperation with GGNet Apeldoorn, NL

    The NESDO research (Nederlandse Studie naar Depressie bij Ouderen) was a large longitudinal Dutch study into depression in older adults (>60 years old). Older adults with and without depression were followed over a period of six years. Measurements included questionnaires, a medical examination and cognitive tests, and information was gathered about mental health outcomes and demographic, psychosocial and cognitive determinants. Some of these measurements were taken in face-to-face assessments. After the baseline measurement, face-to-face assessments were held after 2 and 6 years.

    Currently, we have a few audio recordings available from the 6-year measurement, from both depressed and non-depressed older persons. We are looking for a student (who preferably knows Dutch) who is interested in performing speech analyses on these recordings, with the eventual goal of automatically detecting depression in speech.
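
    A typical first step in such work is to turn each recording into acoustic features and train a baseline classifier. The sketch below illustrates that pipeline, assuming the librosa and scikit-learn libraries; the file names and labels are hypothetical placeholders, and the actual feature set should be informed by the literature listed below.

        import numpy as np
        import librosa
        from sklearn.ensemble import RandomForestClassifier

        def acoustic_features(path):
            """Summarise a recording as mean MFCCs, a common baseline
            feature set for speech classification tasks."""
            audio, sr = librosa.load(path, sr=16000)
            mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
            return mfcc.mean(axis=1)        # one 13-dim vector per recording

        # Hypothetical corpus: (file, label), 1 = depressed, 0 = non-depressed
        corpus = [("rec_001.wav", 1), ("rec_002.wav", 0)]
        X = np.array([acoustic_features(f) for f, _ in corpus])
        y = np.array([label for _, label in corpus])
        clf = RandomForestClassifier().fit(X, y)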

    This work will be carried out in collaboration with Dr. Paul Naarding, GGNet Apeldoorn.

    Contact: Khiet Truong, k.p.truong@utwente.nl

    Reading material:
    - https://nesdo.onderzoek.io/

    - https://nesdo.onderzoek.io/wp-content/uploads/2016/08/Comijs-et-al-2011_design-NESDO_incl-erratum.pdf

    - Cummins, N., Scherer, S., Krajewski, J., Schnieder, S., Epps, J., & Quatieri, T. F. (2015). A review of depression and suicide risk assessment using speech analysis. Speech Communication, 71, 10-49.

    - Low, D. M., Bentley, K. H., & Ghosh, S. S. (2020). Automated assessment of psychiatric disorders using speech: A systematic review. Laryngoscope Investigative Otolaryngology, 5(1), 96-116.

    - Cummins, N., Matcham, F., Klapper, J., & Schuller, B. (2020). Artificial intelligence to aid the detection of mood disorders. In Artificial Intelligence in Precision Health (pp. 231-255). Academic Press.

  • Master’s Assignments: Design of Smart Objects for Hand Rehabilitation after Stroke @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL

    Stroke impacts many people and is one of the leading causes of death and disability worldwide [1]–[3] and in the Netherlands [4], [5]. The predicted acceleration of the ageing population is expected to raise the absolute number of stroke survivors that need care [7]. 80% of all stroke patients suffer from function loss and need professional caregivers [8], [9], and experience a lower quality of life due to their limited ability to participate in social activities and work, and to engage in daily activities [10], [11].

    The hand is the highly functional endpoint of the human arm, as it enables a vast variety of daily activities related to a high quality of life [12]. Only 12% of stroke patients recover arm and hand function in the first 6 months [13]. For the rest, the limited ability to use their hand has a financial and psychological impact on them and their families, as it limits the execution of daily activities [14]. A treatment with substantial evidence for its effectiveness is CIMT (Constraint-Induced Movement Therapy) [15]. CIMT usually employs intensive sessions focused on task-specific exercises, combined with constraining the unaffected hand and forcing patients to use their affected hand. CIMT relies on the principle of 'use it or lose it' [16] and requires patients to use their affected hand.

    So far, attempts at creating effective home training methods have focused on the direct translation of clinical exercises to home training, by designing them to be executed regardless of the patient's location [17]. Monitoring with the use of smart objects [18]–[21] accounts for the lack of direct supervision, and gaming and virtual reality elements have been added to make training more challenging [22]. Such methods assume that patients are motivated, able and willing to clear time in their schedule to engage in training, and/or to sit down at a specific location in their house to execute it. We need a new method that applies this principle in a more flexible way, by engaging people in clinically meaningful activities in their daily routine. This way, patients will seamlessly perform functional training activities at a much higher dose than can be achieved in clinics.

    Our key objective is to develop a new method using smart objects in which training exercises will be seamlessly integrated into the daily routine of a patient at home.

    This method will aim to use the performance on these activities as a functional training set over the day, leading to improved hand function and therefore motivation to perform the activities again in the future [23]–[25]. Patients will not have to schedule their training; the exercise will be part of their regular daily activities. We will do this by investigating a way of transferring clinical exercises to a home setting using smart objects. Smart objects can be integrated into the daily activities of patients and trigger (by design) a certain user behaviour. The focus in our proposal is for these objects to go beyond simple monitoring [18]–[21] and create a stimulating environment where people feel invited to train and are intrinsically motivated to perform the task again in the future. Think of a smart toothbrush that is designed to promote the use of the affected hand and that only works when used by this hand! Fundamental research into the transferability of clinical hand rehabilitation to a smart-object home-based setting is needed to theoretically underpin our method. Using smart objects and artificial intelligence, personalized health will become more accessible, and the wealth of data will allow future clinicians more flexibility and overall control of the rehabilitation process.

    In this assignment, the master's student is expected to:

    1. Review literature on existing technologies (sensors, actuators, AI, etc.) of smart objects for rehabilitation to identify gaps/opportunities

    2. Specify the requirements for the design of smart daily objects that can drive seamless rehabilitation with the use of technology

    3. Design and validate a product concept in a co-design manner including clinicians, users and developers

    What do we offer?

    We offer an interdisciplinary network of researchers who are experienced in, among other things, hand rehabilitation and rehabilitation technology (dr. ir. Kostas Nizamis - DPM), artificial intelligence, smart technology and stroke rehabilitation (dr. ir. Juliet A.M. Haarman - HMI), and behaviour change and design research (dr. Armağan Karahanoğlu - IxD). Additionally, the student will collaborate closely with clinicians from Roessingh Research & Development (RRD), who aspire to be the end users of the product.

    Bibliography

    [1] S. S. Virani et al., “Heart Disease and Stroke Statistics—2020 Update,” Circulation, vol. 141, no. 9, Mar. 2020.

    [2] S. Sennfält, B. Norrving, J. Petersson, and T. Ullberg, “Long-Term Survival and Function After Stroke,” Stroke, 2019.

    [3] E. R. Coleman et al., “Early Rehabilitation After Stroke: a Narrative Review,” Current Atherosclerosis Reports. 2017.

    [4] “StatLine.” [Online]. Available: https://opendata.cbs.nl/statline/#/CBS/en/. [Accessed: 21-Apr-2020].

    [5] C. M. Koolhaas et al., “Physical activity and cause-specific mortality: The Rotterdam study,” Int. J. Epidemiol., 2018.

    [6] R. Waziry et al., “Time Trends in Survival Following First Hemorrhagic or Ischemic Stroke Between 1991 and 2015 in the Rotterdam Study,” Stroke, 2020.

    [7] A. G. Thrift et al., “Global stroke statistics,” International Journal of Stroke. 2017.

    [8] W. Pont et al., “Caregiver burden after stroke: changes over time?,” Disabil. Rehabil., 2020.

    [9] P. Langhorne, F. Coupar, and A. Pollock, “Motor recovery after stroke: a systematic review,” The Lancet Neurology. 2009.

    [10] M. J. M. Ramos-Lima, I. de C. Brasileiro, T. L. de Lima, and P. Braga-Neto, “Quality of life after stroke: Impact of clinical and sociodemographic factors,” Clinics, 2018.

    [11] Q. Chen, C. Cao, L. Gong, and Y. Zhang, “Health related quality of life in stroke patients and risk factors associated with patients for return to work,” Medicine (Baltimore)., vol. 98, no. 16, p. e15130, Apr. 2019.

    [12] R. Morris and I. Q. Whishaw, “Arm and hand movement: Current knowledge and future perspective,” Frontiers in Neurology, vol. 6, no. FEB, 2015.

    [13] G. Kwakkel, B. J. Kollen, J. V. Van der Grond, and A. J. H. Prevo, “Probability of regaining dexterity in the flaccid upper limb: Impact of severity of paresis and time since onset in acute stroke,” Stroke, 2003.

    [14] J. E. Harris and J. J. Eng, “Paretic Upper-Limb Strength Best Explains Arm Activity in People With Stroke,” Phys. Ther., 2007.

    [15] G. Kwakkel, J. M. Veerbeek, E. E. H. van Wegen, and S. L. Wolf, “Constraint-induced movement therapy after stroke,” The Lancet Neurology. 2015.

    [16] Y. Hidaka, C. E. Han, S. L. Wolf, C. J. Winstein, and N. Schweighofer, “Use it and improve it or lose it: Interactions between arm function and use in humans post-stroke,” PLoS Comput. Biol., vol. 8, no. 2, 2012.

    [17] Y. Levanon, “The advantages and disadvantages of using high technology in hand rehabilitation,” Journal of Hand Therapy. 2013.

    [18] M. Bobin, M. Boukallel, M. Anastassova, and M. Ammi, “Smart objects for upper limb monitoring of stroke patients during rehabilitation sessions,” 2017.

    [19] M. Bobin, F. Bimbard, and M. Boukallel, “SpECTRUM: Smart ECosystem for sTRoke patient’s Upper limbs Monitoring,” Smart Health, vol. 13, p. 100066, 2019.

    [20] M. Bobin, H. Amroun, M. Boukalle, and M. Anastassova, “Smart Cup to Monitor Stroke Patients Activities during Everyday Life,” in 2018 IEEE Int. Conf. on Internet of Things / Green Computing and Communications / Cyber, Physical and Social Computing / Smart Data, pp. 189–195, 2018.

    [21] G. Yang, J. Deng, G. Pang, H. Zhang, and J. Li, “An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning,” IEEE J. Transl. Eng. Health Med., vol. 6, pp. 1–10, 2018.

    [22] L. Pesonen, L. Otieno, L. Ezema, and D. Benewaa, “Virtual Reality in rehabilitation: a user perspective,” pp. 1–8, 2017.

    [23] A. L. Van Ommeren et al., “The Effect of Prolonged Use of a Wearable Soft-Robotic Glove Post Stroke - A Proof-of-Principle,” in Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, 2018.

    [24] G. B. Prange-Lasonder, B. Radder, A. I. R. Kottink, A. Melendez-Calderon, J. H. Buurke, and J. S. Rietman, “Applying a soft-robotic glove as assistive device and training tool with games to support hand function after stroke: Preliminary results on feasibility and potential clinical impact,” in IEEE International Conference on Rehabilitation Robotics, 2017.

    [25] B. Radder, “The Wearable Hand Robot: Supporting Impaired Hand Function in Activities of Daily Living and Rehabilitation,” University of Twente, Enschede, 2018.

  • CHATBOTS FOR HEALTHCARE – THE eCG FAMILY CLINIC @ HMI - Enschede, NL in cooperation with UMCU - Utrecht, NL

    In collaboration with Universitair Medisch Centrum Utrecht, we will design and develop the eCG Family Clinic: the electronic Cardiovascular Genetic family clinic, meant to facilitate genetic screening in family members. In inherited cardiovascular diseases, first-degree relatives are at 50% risk of inheriting the disease-causing mutation. For these diseases, preventive measures and treatment options are readily available and effective. Relatives may undergo predictive DNA testing to find out whether they carry the mutation. However, more than half of at-risk relatives do not attend genetic counselling and/or cardiac evaluation.

    The eCG Family Clinic is being developed to increase the number of people who attend genetic counselling and/or cardiac evaluation. It is an online platform where family members are provided with general information (e.g. on the specific family disease, the mode of inheritance, the pros and cons of genetic testing, and the testing procedure). Users of the platform will be able to interact with a chatbot.

    Within this research project we have student assignments available such as:
    ·       Designing and developing a chatbot and its functions and roles within the platform
    ·       Translating current treatment protocols into prototypes of the chatbot (see the sketch after this list)
    ·       Evaluating user experience and user satisfaction
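
    To give a taste of what translating a protocol into a chatbot can involve, the sketch below encodes one fictitious counselling step as a tiny state machine in Python. It is purely illustrative: the questions, answers and flow are invented and not taken from any UMCU protocol.

        # Each state maps to (utterance, {user answer -> next state}).
        STEPS = {
            "start": ("Would you like information about predictive DNA "
                      "testing? (yes/no)", {"yes": "explain", "no": "end"}),
            "explain": ("A DNA test shows whether you carry the family "
                        "mutation. Shall I outline the pros and cons? "
                        "(yes/no)", {"yes": "pros_cons", "no": "end"}),
            "pros_cons": ("Pros: timely preventive care. Cons: possible "
                          "emotional burden.", {}),
            "end": ("Thank you. You can contact the clinic at any time.", {}),
        }

        state = "start"
        while True:
            utterance, transitions = STEPS[state]
            print("BOT:", utterance)
            if not transitions:             # terminal state reached
                break
            # unrecognized answers fall through to the closing state
            state = transitions.get(input("YOU: ").strip().lower(), "end")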

    We are open to alternative assignments or perspectives on the example assignments above.

    Contact person: Randy Klaassen, r.klaassen@utwente.nl

  • Touch Interactions and Haptics @ HMI - Enschede, NL

    In daily life, we use our sense of touch to interact with the world and everything in it. Yet, in Human-Computer Interaction, the sense of touch is somewhat underexposed; in particular when compared with the visual and auditory modalities. To advance the use of our sense of touch in HCI, we have defined three broad themes in which several assignments (Capita Selecta, Research Topics, Graduation Projects) can be defined. 

    Designing haptic interfaces

    Many devices use basic vibration motors to provide feedback. While such motors are easy to work with and sufficient for certain applications, advances in current manufacturing technologies (e.g. 3D printing) and in electronics provide opportunities for creating new forms of haptic feedback. Innovative forms of haptic feedback may even open up completely new application domains. The challenge for the students is twofold: 1) exploring the opportunities and limitations of (combinations of) materials, textures, and (self-made) actuators, and 2) coming up with potential use cases.

    Multimodal perception of touch

    The experience of haptic feedback may not only be governed by what is sensed through the skin, but may also be influenced by other modalities; in particular by the visual modality. VR and AR technologies are prime candidates for studying touch perception, and haptic feedback is even considered ‘the holy grail’ for VR. Questions surrounding for instance body ownership in VR, or visuo-haptic illusions in VR (e.g. elongated arms, a third arm) can be interesting starting points for developing valuable multimodal experiences, and for studying the multimodal perception of touch.

    Touch as a social cue

    Research in psychology has shown that social touch (i.e. being touched by another person) can profoundly influence both the toucher and the recipient of a touch (e.g. decreasing stress, motivating, or showing affect). Current technologies for remote communication could potentially be enriched by adding haptic technology that allows for social touch interactions to take place over a distance. In addition, with social robots becoming more commonplace in both research and everyday life, the question arises how we should engage in social touch with such social robots in a beneficial, appropriate and safe manner. Applications of social touch technology can range from applications related to training and coaching, to entertainment, and to providing care and intimacy. Potential projects in this domain could focus on the development of new forms of social touch technology (interactions), and/or on the empirical investigation of the effects such artificial social touch interactions can have on people.

    Contact: Dirk Heylen, d.k.j.heylen@utwente.nl

  • Wearables and tangibles assisting young adults with autism in independent living @ IDE - Enschede, NL

    In this project we seek socially capable and technically smart students with an interest in technology and health care, to investigate how physical-digital technology may support young adults with autism (age 17-22) in developing independence in daily living. In this project we build further on insights from earlier projects such as Dynamic Balance and MyDayLight.

    (see more about both projects here: http://www.jellevandijk.org/embodiedempowerment/ )

    Your assignment is to engage in participatory design in order to conceptualize, prototype and evaluate a new assistive product concept, together with young adults with autism, their parents, and health professionals. You can focus more on the design of concepts, the prototyping of concepts, technological work on building an adaptive, flexible platform that can be personalized by each individual user, or on developing the 'co-design' methods we use with young adults with autism, their parents, and the care professionals.

    As a starting point we consider opportunities of wearables with bio-sensing in combination with ambient intelligent objects (internet-of-things e.g. interactive light, ambient audio) in the home.

    The project forms part of a research collaboration with Karakter, a large youth psychiatric health organization, and various related organizations, which will provide participating families. One goal is to present a proof-of-concept of a promising assistive device; another goal is to explore the most suitable participatory design methods in this use context. Depending on your interests you can focus more on the product or on the method. The ultimate goal of the overall research project is to realize a flexible, adaptive interactive platform that can be tailored to the needs of each individual user; this master's project is a first step in that direction.

    Contact: jelle.vandijk@utwente.nl

  • Interactive Surfaces and Tangibles for Creative Storytelling @ HMI - Enschede, NL

    In the research project coBOTnity, a collection of affordable robots (called surfacebots) was developed for use in collaborative creative storytelling. Surfacebots are moving tablets embodying a virtual character. Using a moving tablet allows us to show a digital representation of the character's facial expressions and intentions on screen, while also allowing the character to move around in a physical play area.

    The surfacebots offer diverse student assignment opportunities in the form of Capita Selecta, HMI project, BSc Research or Design project, or MSc thesis research. These assignments can deal with technology development aspects, empirical studies evaluating the effectiveness of some existing component, or a balance of both types of work (technology development + evaluation).

    As a sample of what could be done in these assignments (but not limited to this), students could develop new AI for the surfacebots to become more intelligent and responsive in the interactive space, study interactive storytelling with surfacebots, develop mechanisms to orchestrate multiple surfacebots as a means of expression (e.g. to tell a story), evaluate strategies to make the use of surfacebots more effective, or develop and evaluate an application to support users' creativity/learning.

    You can find more information about the coBOTnity project at: https://www.utwente.nl/ewi/hmi/cobotnity/

    Contact: Mariët Theune (m.theune@utwente.nl)

  • Sports, Data, and Interaction: Interaction Technology for Digital-Physical Sports Training @ HMI - Enschede, NL

    The proposed project focuses on new forms of (volleyball and other) sports training. Athletes perform training exercises in a "smart sports hall" that provides high-quality video display across the surface of the playing field and has unobtrusive pressure sensors embedded in the floor, or using smart sports setups such as immersive VR with a rowing machine. A digital-physical training system offers tailored, interactive exercise activities. Exercises incorporate visual feedback from the trainer as well as feedback given by the system. They can be tailored through a combination of selecting the most fitting exercises and setting the right parameters. This allows the exercises to be adapted in real time in response to the team's behaviour and performance, and to be selected and parameterized to fit the athletes' levels of competition and the demands of, e.g., youth sport. To this end, expertise from the domains of embodied gaming and of instruction and pedagogy in sports training are combined. Computational models are developed for the automatic management of personalization and adaptation; initial validation of such models is done by repeatedly evaluating versions of the system with athletes of various levels. We collect, and automatically analyse, data from the sensors to build continuous models of the behaviour of individual athletes as well as the team. Based on this data, the trainer or system can instantly decide to change the ongoing exercises, or provide visual feedback to the team via the displays and other modalities. In extrapolation, we foresee future development towards higher competition performance for teams, building upon the basic principles and systems developed in this project.

    Assignments in this project can be done on user studies, automatic behaviour detection from sensors, novel interactive exercise design, and other related topics.

    Contact person: Dees Postma, d.b.w.postma@utwente.nl, Dennis Reidsma, d.reidsma@utwente.nl

  • Dialogue and Natural Language Understanding & Generation for Social and Creative Applications @ HMI - Enschede, NL

    Applications involving the processing and generation of human language have become increasingly better and more popular in recent years; think for example of automatic translation and summarization, or of the virtual assistants that are becoming a part of everyday life. However, dealing with the social and creative aspects of human language is still challenging. We can ask our virtual assistant to check the weather, set an alarm or play some music, but we cannot have a meaningful conversation with it about what we want to do with our life. We can feed systems big data to automatically generate texts such as business reports, but generating an interesting and suspenseful novel is another story.

    At HMI we are generally open to supervising different kinds of assignments in the area of dialogue and natural language understanding & generation, but we are specifically interested in research aimed at social and creative applications. Some possible assignment topics are given below.

    Conversational agents and social chatbots. The interaction with most current virtual assistants and chatbots (or 'conversational agents') is limited to giving them commands and asking questions. What we want is to develop agents you can have an actual conversation with, and that are interesting to engage with. An important question here is: how can we keep the interaction interesting over a longer period of time? Assignments in this area can include question generation for dialogue (so the agent can show some interest in what you are telling it), story generation for dialogue (so the agent can make a relevant contribution to the current conversation topic) and user modeling via dialogue (so the agent can get to know you). The overall goal is to create (virtual) agents that show verbal social behaviours. In the case of embodied agents, such as robots or virtual characters, we are also interested in the accompanying non-verbal social behaviours.

    Affective language processing and generation. Emotions are part of everyday language, but detecting emotions in a text, or having the computer produce emotional language, are still challenging tasks. Assignments in this area include sentiment analysis in texts, for Dutch in particular, and generating emotional language, for example in the context of games (emotional character dialogue or 'flavor text', as explained below) or in the context of automatically generated soccer reports.

    Creative language generation. Here we can think of generating creative language such as puns, jokes, and metaphors but also stories. It is already possible to generate reports from data (for example sports or game-play data) but such reports tend to be boring and factual. How can we give them a more narrative quality with a nice flow, atmosphere, emotions and maybe even some suspense? Instead of generating non-fiction based on real-world data, another area is generating fiction. An example is generating so-called 'flavor text' for use in games. This is text that is not essential to the main game narrative, but creates a feeling of immersion for the player, such as fictional newspaper articles and headlines or fake social media messages related to the game. Another example of fiction generation is the generation of novel-length stories. Here an important challenge is how to keep the story coherent, which is a lot more difficult for long texts than for short ones.

    Contact: Mariët Theune (m.theune@utwente.nl)

  • Group activity detection @ HMI - Enschede, NL

    Social interaction is necessary for both mental and physical health, and participating in group activities encourages social interaction. While there are opportunities to attend a variety of group activities, some people prefer solitary activities. This project aims to design an algorithm that extracts patterns of group and solitary activities from GPS (Global Positioning System) and motion sensors, including accelerometer, gyroscope, and magnetometer. The extracted patterns would enable us to detect whether an individual is involved in a group or a solitary activity.
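
    One conceivable building block (an illustration, not a prescribed solution) is to cluster simultaneous GPS positions to find co-located pupils, as in the Python sketch below; it assumes scikit-learn, and the coordinates, distance threshold and group size are invented. A real method would track such clusters over time and fuse them with motion-sensor features to separate, say, parallel play from a shared ball game.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Hypothetical data: local x/y positions (metres) of six pupils
        # at a single time step.
        positions = np.array([
            [1.0, 1.2], [1.5, 0.8], [0.9, 1.1],   # three pupils together
            [20.0, 18.5], [21.2, 19.0],           # a pair elsewhere
            [40.0, 2.0],                          # a solitary pupil
        ])

        # Pupils within 3 m of another pupil form a cluster; DBSCAN labels
        # unclustered points -1, which we read as "solitary".
        labels = DBSCAN(eps=3.0, min_samples=2).fit_predict(positions)
        for pupil, label in enumerate(labels):
            status = "solitary" if label == -1 else f"group {label}"
            print(f"pupil {pupil}: {status}")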

    This project is defined within a larger project, namely the Schoolyard project. In the Schoolyard project, we captured data from pupils in the school playground during the break via GPS and motion sensors. The collected data will be used to validate the designed algorithm. You need to be creative in designing the method so that it covers different types of group activities in the playground, including parallel games (e.g., swings), ball games (e.g., football), tag games (e.g., catch and run), etc.

    The research involves steps such as:

    ·        Literature review 

    ·        Data preparation and identifying benchmark datasets

    ·        Designing an algorithm to identify the group activity patterns

    ·        Validating the results via ground truth, simulated data, or benchmark datasets

    We are looking for candidates that match the following profile:

    ·        Having a creative mindset 

    ·        Strong programming skills in Python

     

    Many recent studies have focused on detecting group activities from videos. However, using video to detect activities is computationally expensive and raises serious privacy concerns. Below is a paper related to this topic, which used motion sensors together with beacons to identify group activity.

    https://www.sciencedirect.com/science/article/pii/S0360132319303348

    For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: m.baratchi@liacs.leidenuniv.nl.

    You will be jointly supervised by Dr. Gwenn Englebienne, Assistant Professor, University of Twente, with external supervision by Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, PhD candidate at Leiden University.

  • A Framework for Longitudinal Influence Measurement between Spatial Features and Social Networks @ HMI - Enschede, NL

    Features of the environment may encourage or discourage social interactions among people. The question is how environmental features influence social participation, and how this influence may vary over time. To answer this question, you need to design a framework that combines features of the spatial network with the parameters of the social network, while addressing the longitudinal characteristics of such a combination.

    To the best of our knowledge, no study has been conducted on analyzing the longitudinal influence between social networks and spatial features of the environment.

    This project is defined within a larger project, namely the Schoolyard project. In the Schoolyard project, we observed the behaviour of children in the playground using RFID tags and GPS loggers. The RFIDs are used to build a social network. The longitudinal influence between the social network and spatial features may be analyzed in three stages: 1) before the renovation, 2) after the renovation, and 3) after adaptation of the playground. The collected data can be used to validate the designed framework.
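
    To give an idea of the social-network side, the Python sketch below builds a weighted co-presence network per stage from RFID contact records and compares a simple network metric across stages. It assumes the networkx library, and the contact records are invented placeholders.

        import networkx as nx

        # Hypothetical records: (child_a, child_b, seconds of contact)
        contacts = {
            "before renovation": [("A", "B", 120), ("B", "C", 45), ("A", "C", 300)],
            "after renovation": [("A", "B", 30), ("C", "D", 200)],
        }

        def build_network(records):
            """Accumulate contact time as edge weights in an undirected graph."""
            G = nx.Graph()
            for a, b, seconds in records:
                w = G[a][b]["weight"] + seconds if G.has_edge(a, b) else seconds
                G.add_edge(a, b, weight=w)
            return G

        for stage, records in contacts.items():
            G = build_network(records)
            # compare, per stage, how central each child is in the network
            print(stage, nx.degree_centrality(G))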

    We are looking for candidates that match the following profile:

    ·        Knowledge about network analysis

    ·        Knowledge about multilevel time series analysis

    ·        Strong programming skills in Python

     

    This paper presents a general framework for measuring the dynamic bidirectional influence between communication content and social networks. The authors used a publication database to build the social network, and the relationship between the network and the content is studied longitudinally.

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.208.4144&rep=rep1&type=pdf

     

    For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: m.baratchi@liacs.leidenuniv.nl

    You will be jointly supervised by Dr. Shenghui Wang, Assistant Professor, University of Twente, with external supervision by Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, PhD candidate at Leiden University.

  • Human behaviour modelling for Teleoperated Robots @ HMI - Enschede, NL

    Description

    Teleoperated robots enable humans to be remotely present in the world, to perform (maintenance) tasks or to be socially present. This has many applications and benefits, as operators can apply their expertise without the need to travel to a possibly remote or dangerous environment. However, if there are time delays in teleoperated systems, for example because of networking issues or physical distance, the systems become very difficult to use. There are multiple strategies that can be employed to deal with these difficulties. These approaches require interpreting how humans interact with the environment, and this is the field in which you can do your study.

    Some examples of specific assignments you can do are:

    1. Compare image segmentation (neural network) models to identify how to interact with the environment (e.g. how do I grasp a mug differently from a pen) and what surroundings we are dealing with. This involves a theoretical comparison with (optionally) a practical component in which several models can be tested and compared (see the sketch after this list).

    2. Set up user studies to examine how people orient themselves and manipulate objects in VR, with the goal of transferring this knowledge to the teleoperated robotics space. This can relate to visual orientation or to object manipulation and for example take the shape of an eye tracking study.

    3. Set up a study with a teleoperated robot to investigate the effects of time delays on several aspects of the interaction. This line requires both research and technical skills.
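
    For assignment 1, a practical comparison could start from pretrained models. The sketch below is a minimal illustration assuming PyTorch/torchvision and a hypothetical camera frame scene.jpg: it loads one candidate model and produces per-pixel class predictions, against which other segmentation models could then be compared.

        import torch
        from torchvision import transforms
        from torchvision.models.segmentation import deeplabv3_resnet50
        from PIL import Image

        # One candidate model, pretrained on a generic segmentation dataset
        model = deeplabv3_resnet50(weights="DEFAULT").eval()

        preprocess = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        img = Image.open("scene.jpg").convert("RGB")   # hypothetical frame
        with torch.no_grad():
            out = model(preprocess(img).unsqueeze(0))["out"]  # [1, C, H, W]
        classes = out.argmax(dim=1)       # per-pixel class predictions
        print(classes.shape, classes.unique())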

    Other assignments in this field can be discussed based on your skillset.

    Contact: Luc Schoot Uiterkamp l.schootuiterkamp@utwente.nl
    Second supervisor: Gwenn Englebienne g.englebienne@utwente.nl

  • Developing a Touch-Sensitive Ball and Analysing its Data @ HMI - Enschede, NL

    Description

    • In this project, we want to extend the touch-sensitive "skin" technology to measure touch on a ball or spherical object such as a football. The skin was developed to make robot shells aware of human touch, and is also used in the Touch-Sensitive Patch used in Module 7 of CreaTe, but it lacks some key characteristics that would be required to measure how a ball is handled:
    • We can create touch-sensitive surfaces of any 2D shape, and we can make these surfaces stretchable to an extent, but it is not obvious how to create a reliable, touch-sensitive, closed spherical surface. The batteries, sensors, wireless communication devices and electronics for the sensing would need to be inside the sphere, preferably centrally positioned to keep the weight distribution of the ball balanced. Ideally, it should also be easy to charge the ball without opening it up. The electronics need to be robust enough, and sufficiently protected from external forces, to survive playing with the ball, including being kicked hard. Ideally, the ball should be inflatable, have handling characteristics similar to a "dumb" ball, and be sensitive to impact with a shoe (being kicked) as well as to touch with a hand or arm.
    • Initial prototypes need not fulfil all these characteristics, but they serve as guidelines for the development. An additional challenge for students interested in data analysis lies in tracking both the orientation of the ball (pitch, roll, yaw) and the location of the touch, as well as in recognising how the ball is being handled; a minimal orientation-tracking sketch follows below.
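
    The sketch below shows a standard (not project-specific) starting point for orientation tracking: a complementary filter in Python that fuses gyroscope integration with accelerometer gravity angles to estimate pitch and roll. Yaw additionally requires the magnetometer, and the sample data and filter constant here are invented.

        import math

        def complementary_filter(samples, dt=0.01, alpha=0.98):
            """samples: iterable of (ax, ay, az, gx, gy) with accelerometer
            readings in g and gyroscope rates in rad/s; yields (pitch, roll)
            in radians. Gyro integration is smooth but drifts; accelerometer
            angles are noisy but absolute; alpha blends the two."""
            pitch = roll = 0.0
            for ax, ay, az, gx, gy in samples:
                # absolute angles from gravity (valid when not accelerating)
                acc_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
                acc_roll = math.atan2(ay, az)
                pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
                roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
                yield pitch, roll

        # Hypothetical reading: ball lying still, gravity along its z axis
        for pitch, roll in complementary_filter([(0.0, 0.0, 1.0, 0.0, 0.0)] * 3):
            print(round(pitch, 3), round(roll, 3))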

    Contact person:
    Gwenn Englebienne
    g.englebienne@utwente.nl


COMPANIES, EXTERNAL RESEARCH INSTITUTES, AND END USER ORGANISATIONS

Here you find some of the organisations that are willing to host master's students from HMI. Keep in mind that you are not allowed to do both an external (non-research institute) internship and an external final assignment. If you work for a company that is interested in providing internships or final assignments, please contact D.K.J.Heylen[at]utwente.nl

  • UX/UI Student for Frontend Innovation at Sport Data Valley

    Wanted: UX/UI Student for Frontend Innovation at Sport Data Valley

    Are you a passionate UX/UI student with an eye for detail and a love for sports? Do you want to use your design skills to take the digital experience of coaches/trainers and researchers to the next level? Then we have the opportunity for you!

    What will you do?

    At Sport Data Valley, you will work on the frontend of our innovative platform, where sports and data come together. You will be responsible for improving the user experience and making the interface even more user-friendly and attractive. Your creative ideas and design skills will directly contribute to the success of athletes, coaches and researchers.

    Your tasks:

    • Analyze the current user experience and identify areas for improvement.
    • Design wireframes, prototypes and visual elements that make the interaction more intuitive.
    • Collaborate with our development team to bring your designs to life.
    • Test and optimize the frontend based on user feedback.

    What do they offer?

    • An inspiring work environment where sports and technology come together.
    • Guidance and feedback from experienced professionals.
    • The opportunity to expand your portfolio with a challenging project.
    • Flexible working hours and the possibility to work partly remotely.

    What do you bring?

    • You are an HBO or WO student in UX/UI Design, Interaction Design or a similar programme.
    • You have basic knowledge of frontend technologies such as HTML, CSS and JavaScript.
    • You have experience with design tools such as Figma or Adobe XD.
    • You are creative, solution-oriented and have a good sense of aesthetics.
    • A passion for sports and an interest in data analysis is a big plus!

    Interested?

    Send your portfolio, CV and a short motivation to dennis@sportdatavalley.nl

    We look forward to your fresh perspective and innovative ideas to take Sport Data Valley to the next level!


  • Unleash the power of LLM for sports – Intern and thesis opportunities @ wearM.AI - Enschede, and HMI - Enschede, NL

    Background:
    wearM.AI, a cutting-edge spin-off from UTwente, is developing sensors for monitoring performance and providing individual coaching for runners. These sensors collect large amounts of data, but communicating the data and its implications to users in a clear and understandable way can be a challenge.

    A Large Language Model (LLM) could potentially solve this problem by summarizing this data in English or Dutch, and explaining to users how to improve their running technique or performance.

    Task:
    wearM.AI is looking for students who want to conduct a (paid) internship and/or a thesis in collaboration with HMI, exploring how to convert sensor data into human language. Your task could include researching how time-series data can be embedded in an LLM and used to generate text, defining what kind of instructions or reports users would need, and testing the feasibility and usefulness of different technical solutions.
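
    As a simple illustration of the direction (the actual data format and model choice are open), one baseline is to reduce the time series to a few interpretable statistics and verbalise them through a prompt, as in the Python sketch below; the feature names and sample values are invented.

        import statistics

        def summarise(cadence_spm, pace_min_km):
            """Reduce raw running time series to interpretable statistics."""
            return {
                "mean_cadence": round(statistics.mean(cadence_spm), 1),
                "cadence_sd": round(statistics.stdev(cadence_spm), 1),
                "mean_pace": round(statistics.mean(pace_min_km), 2),
            }

        def build_prompt(stats):
            return ("You are a running coach. Based on these session "
                    f"statistics, give the runner two sentences of feedback "
                    f"in plain English: {stats}")

        # Hypothetical sensor data: cadence (steps/min) and pace (min/km)
        prompt = build_prompt(summarise([168, 172, 165, 170],
                                        [5.4, 5.6, 5.5, 5.3]))
        print(prompt)   # send to any chat-style LLM API of choice

    A more ambitious alternative, mentioned in the task above, would be to embed the time-series representation directly in the model rather than going through hand-picked statistics.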

    Contact person:
    Huawei Wang (h.wang@wearm.ai; please include a short CV) or Lorenzo Gatti (l.gatti@utwente.nl)

  • Exchange projects Master Graduation @ Melbourne, Australia

    The Australian Catholic University in Melbourne offers a number of exciting graduation projects that revolve around the use of Virtual Reality to study matters related to sports and movement. These topics are closely related to the scientific work on Sports Interaction Technology that is being carried out at the Human Media Interaction group. Below, you can find a brief description of the projects that you can work on. For your graduation project, you will receive joint supervision, both from prof. dr. Gert-Jan Pepping (ACU) and from Dees Postma (UT). These graduation projects require technical skills related to Virtual Reality, programming, sensor systems (IMUs), and data processing. If you are interested, reach out to Dees Postma: d.b.w.postma@utwente.nl

    Keeping pace: Investigating the boundary between a sustainable and an unsustainable pace in endurance sports

    Pacing in sports is key to performance. This holds true not only in endurance sports but also in many team sports. To perform well, athletes need to control their energy expenditure in relation to their internal physiological state, their opponents, and the task they are performing. Pacing a 5k race is likely much different from pacing a marathon; pacing whilst running alone is likely much different from running a race; and pacing strategies might differ depending on the task or objectives, such as a training run versus a competitive run. In any case, for athletes to perform well, they should be sensitive to their action boundaries: that is, they should be able to distinguish between a sustainable rate of energy expenditure and an unsustainable one. Currently, little is known about the perceptual processes that inform an athlete about the sustainability of their current actions, and how this informs their decision making on the field. To illustrate: an outfielder in baseball will not run to attempt a catch for every ball that is batted; they need to be selective in order not to get worn out energetically. In this project, you will investigate the perceptual information that informs athletes of their action boundaries, and shed light on how the perception of this action boundary influences decision making in sport.

    Using Virtual Reality to improve decision-making in baseball (in collaboration with Baseball Queensland)

    A critical aspect of decision-making and a baseball player's on-field behaviour is their situation awareness (SA): the level of awareness that an individual has of a situation; a player's dynamic understanding of 'what is going on around them' during the game. Research has shown that SA is importantly linked to players' decision-making development, performance, and rehabilitation. That is, SA: i. can be (and needs to be) developed from a young age, and needs to be promoted and maintained during training; ii. is related to players' and referees' performance and expertise, in that better, more skilled/expert players and referees possess a higher degree of SA; iii. is related to injury proneness as well as rehabilitation, in that lowered SA is a precursor to injury, and increased/recovered SA can be used as an indicator of game readiness following rehabilitation. In this honours project, which takes place in ACU's Perception-Action Rehabilitation Clinic and Learning Environment (PARCLE), we use Virtual Reality to assist player development, player monitoring, and rehabilitation in baseball.

    Improving decision-making in high performance team-sport

    A critical aspect of decision-making in team sport and a player's on-pitch behaviour is their situation awareness (SA): the level of awareness that an individual has of a situation; a player's dynamic understanding of 'what is going on around them' during the game. Research has shown that SA is importantly linked to players' decision-making, development, performance, and rehabilitation. That is, SA: i. can be (and needs to be) developed from a young age and needs to be promoted and maintained during training; ii. is related to players' and referees' performance and expertise, in that better, more skilled/expert players and referees possess a higher degree of SA; iii. is related to injury proneness as well as rehabilitation, in that lowered SA is a precursor to injury, and increased/recovered SA can be used as an indicator of game readiness following rehabilitation. We have a number of honours projects in which we use a wireless wearable technology system (SATS) to assist player development, player monitoring, and rehabilitation in team sport (soccer, field hockey, AFL), addressing important research questions in skill acquisition and SA.

    Using Virtual Reality to prevent falls

    Gait-related falls are a large public health burden, and both the sheer number of gait-related falls and the associated societal costs continue to increase. Recent research has shown that an individual's ability to adapt their gait is an important factor in gait-related falls and mobility as people age. In this honours project, which takes place in ACU's Perception-Action Rehabilitation Clinic and Learning Environment (PARCLE), we will use Virtual Reality and the task of bushwalking as an activity that can improve the gait adaptability of community-dwelling older adults. Suitable for exercise science, high performance sport, science, and psychology students.

    Contact person: Dees Postma, d.b.w.postma@utwente.nl

  • Biobank catalogue using automated informational retrieval and AI solutions @ Amsterdam UMC, NL

    Background
    Cancer Center Amsterdam connects cancer researchers and care professionals within Amsterdam UMC. To facilitate translational cancer research (i.e. the validation of laboratory findings within a clinical context, for example using blood or urine samples collected from patients with cancer), a central biobanking organization has been established. This Liquid Biopsy Center (LBC) collects blood and urine samples through centralized logistics and harmonized protocols and makes these samples available to cancer researchers. For good research, it is of the greatest importance that samples are clinically annotated with relevant information on diagnosis, disease stage, treatment interventions, and outcomes. This information is recorded in the hospital electronic health record (EHR), mostly in the form of free text. Consequently, clinical data management is still mostly done through manual data entry into a clinical database. Automated solutions are needed to improve this labor-intensive and inefficient way of data management.

    Methods
    LBC supports 16 biobank projects organised by related tumor type (for example lung cancer, colon cancer, hematological tumors). Across all projects, more than 4500 patients have donated almost 9000 samples that are stored in freezers for future research projects. Using extractions of structured data from the EHR, a first version of a comprehensive sample dashboard has been created for the lung cancer biobank using Power BI. Retrieval of specific samples through this dashboard can be improved considerably if information stored in the EHR as free text – for example from radiology or pathology reports – can be added. This project aims to improve the existing dashboard as well as to expand and tailor it to the other biobanks.
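
    To give a flavour of the free-text annotation step, here is a deliberately minimal, rule-based sketch that extracts a TNM tumour stage from a pathology report. The report text and the pattern are made up for illustration; a real solution would be far more robust and would run entirely within the secured hospital environment.

      # Minimal sketch (hypothetical report text and pattern): rule-based
      # extraction of a TNM tumour stage from free-text pathology reports,
      # the kind of annotation that could feed the sample dashboard.
      import re

      TNM_PATTERN = re.compile(r"\b(p?T[0-4][a-c]?)\s*(N[0-3][a-c]?)\s*(M[01])\b")

      def extract_tnm(report_text):
          """Return the first TNM stage found in a report, or None."""
          match = TNM_PATTERN.search(report_text)
          return " ".join(match.groups()) if match else None

      example_report = "Resection specimen, adenocarcinoma of the lung, pT2a N1 M0."
      print(extract_tnm(example_report))  # -> "pT2a N1 M0"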

    End product
    The envisioned end product is a user dashboard that couples all available information from different databases, including free text from the EHR. The dashboard can be used to retrieve specific samples requested by cancer researchers for use in a dedicated research project. The first setup in Power BI can be used as a basis for this project. Use of other platforms can be discussed, but restrictions on working with sensitive health data apply. This means that data cannot leave the secured hospital environment and only platforms that work within the (remote) hospital server can be used. If successful, the end product will be widely shared with researchers as an example of improved data management solutions.

    Learning experience
    This project offers the student the opportunity to get acquainted with the academic health care and research sector, along with its promises and pitfalls regarding the use of health care data for research – in this instance, specifically cancer biobank research. Innovations in the use of data for improved health care planning and research are urgently needed, yet at the same time hampered by strict laws and regulations. This project offers the opportunity to learn about all aspects of medical research while working on technical solutions that will actually improve research.

    Contact person Human Media Interaction (UT):
    Shenghui Wang, shenghui.wang@utwente.nl

  • Digital solutions for livestock management @ Nedap - Groenlo, NL

    Nedap Livestock Management develops digital solutions for, among others, dairy farming and pig farming. They are open to internships and thesis projects; some examples of possible project topics can be found below. If you are interested, feel free to contact Robin Aly for more information.

    Nedap contact: Robin Aly, robin.aly@nedap.com
    HMI contact: Dennis Reidsma, d.reidsma@utwente.nl

    Virtual Fencing
    The goal of this project is to define a product concept for virtual fencing. Cows on pasture need enough feed to graze, and farmers face the challenge of managing the available land so that the herd constantly has sufficient feed available. Traditional approaches to this problem move physical fences to direct the herd to fresh pastures. However, this process is labor intensive and slow to react to changing circumstances. Virtual fencing [1-4] has recently been proposed as a means of interacting with cows based on their location, using a reward and punishment system to give them incentives to move to more suitable pastures. This project will investigate solutions for a virtual fencing product.

    The project will start with an ideation process with farmers to identify potentially feasible cow-locating solutions, ways to define a virtual fence, and ways to interact with cows to steer them. In a second step, at least one of these ideas will be extended into a high-fidelity prototype and evaluated for its performance.

    [1] https://www.wur.nl/en/article/Virtual-fencing-grazing-without-visible-borders.htm
    [2] https://www.smartcompany.com.au/startupsmart/news/halter-29-million-agtech-dairy-cows/
    [3] Anderson, D. M. (2007). Virtual fencing – past, present and future. The Rangeland Journal, 29(1), 65-78.
    [4] Campbell, D. L., Lea, J. M., Haynes, S. J., Farrer, W. J., Leigh-Lancaster, C. J., & Lee, C. (2018). Virtual fencing of cattle using an automated collar in a feed attractant trial. Applied Animal Behaviour Science, 200, 71-77.
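
    To make the core mechanism concrete, the sketch below (all coordinates and names hypothetical) shows the basic collar logic behind virtual fencing as trialled in studies such as [4]: test whether the cow's position lies inside the virtual paddock, and escalate from an audio warning to a mild pulse only if the warning is ignored.

      # Minimal sketch (all coordinates hypothetical): a ray-casting
      # point-in-polygon test for the virtual fence, plus the escalating
      # audio-then-pulse cue scheme used in collar trials such as [4].
      def inside_fence(point, polygon):
          """Return True if point (x, y) lies inside the polygon."""
          x, y = point
          inside = False
          n = len(polygon)
          for i in range(n):
              x1, y1 = polygon[i]
              x2, y2 = polygon[(i + 1) % n]
              if (y1 > y) != (y2 > y):
                  x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                  if x < x_cross:
                      inside = not inside
          return inside

      def collar_cue(position, fence, warned_before):
          """No cue inside the fence; audio warning first, pulse if ignored."""
          if inside_fence(position, fence):
              return "none"
          return "pulse" if warned_before else "audio_warning"

      fence = [(0, 0), (100, 0), (100, 50), (0, 50)]  # virtual paddock (m)
      print(collar_cue((120, 25), fence, warned_before=False))  # -> audio_warning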

    Locating Pigs
    The goal of this project is to provide farmers with means to locate their pigs. Nowadays, professional pig farms can house thousands of pigs. In group housing concepts, farmers are often faced with the task of locating individual pigs within these groups, for example to diagnose or treat illnesses. Locating pigs, however, is currently a cumbersome and time-consuming task, as groups can be large and identifying an individual pig can currently only be done up close. This project requires the end-to-end development of a product concept following the design thinking process, including ideation with stakeholders and the creation of a prototype.

    Potential solutions to pig locating include centralized positioning systems, collaborative positioning systems, systems that detect crossings between demarcated areas, and systems that sense the coarse area where a pig resides. These solutions are constrained by the investment they ask of the farmer and differ in how well they satisfy the farmer's need for pig locating. Therefore, the project will start with an ideation session with farmers and other stakeholders to define a suitable solution space. The ideation will be supported by low-fidelity prototypes that facilitate the discussion about the concepts. An important output of the ideation is also the identification of key performance measures that can be used to judge the quality of a system.

    Based on the output of the ideation step, at least one high-fidelity prototype will have to be developed and evaluated.

    [1] Zhuang, S., Maselyne, J., Van Nuffel, A., Vangeyte, J., & Sonck, B. (2020). Tracking group housed sows with an ultra-wideband indoor positioning system: A feasibility study. Biosystems Engineering, 200, 176-187.
    [2] Koyuncu, H., & Yang, S. H. (2010). A survey of indoor positioning and object locating systems. IJCSNS International Journal of Computer Science and Network Security, 10(5), 121-128.
    [3] Fukuju, Y., Minami, M., Morikawa, H., & Aoyama, T. (2003, May). DOLPHIN: An autonomous indoor positioning system in ubiquitous computing environment. In Proceedings IEEE Workshop on Software Technologies for Future Embedded Systems (WSTFES 2003) (pp. 53-56). IEEE.
    [4] Mainetti, L., Patrono, L., & Sergi, I. (2014, September). A survey on indoor positioning systems. In 2014 22nd International Conference on Software, Telecommunications and Computer Networks (SoftCOM) (pp. 111-120). IEEE.
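
    As a flavour of the centralized-positioning option, the sketch below (anchor layout and numbers made up for illustration) estimates a tag position from ultra-wideband ranging distances to fixed anchors with linear least squares – the geometric core of systems like the one studied in [1].

      # Minimal sketch (hypothetical anchor layout): estimating a tag
      # position from UWB ranging distances to fixed anchors with linear
      # least squares, the geometric core of systems such as in [1].
      import numpy as np

      def trilaterate(anchors, distances):
          """anchors: (n, 2) positions; distances: n measured ranges."""
          anchors = np.asarray(anchors, dtype=float)
          d = np.asarray(distances, dtype=float)
          # Subtracting the first anchor's equation linearises the system.
          A = 2 * (anchors[1:] - anchors[0])
          b = (d[0] ** 2 - d[1:] ** 2
               + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
          pos, *_ = np.linalg.lstsq(A, b, rcond=None)
          return pos

      anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]  # barn corners (m)
      true_pos = np.array([3.0, 4.0])
      dists = [np.linalg.norm(true_pos - a) for a in anchors]
      print(trilaterate(anchors, dists))  # -> approximately [3. 4.]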

  • Adding Speech to Multi Agent Dialogues with a Council of Coaches @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL

    Context

    In the EU project Council of Coaches (COUCH) we are developing a team of virtual coaches that can help older adults achieve their health goals. Each coach offers insight and advice based on their expertise. For example, the activity coach may talk about the importance of physical exercise, while the social coach may ask the user about their friends and family. Our system enables fluent multi-party interaction between multiple coaches and our users; in addition to talking directly with the user, the coaches may also have dialogues amongst themselves. Integrating full spoken interaction with the platform developed in COUCH will make a major leap possible for our embodied agent projects.

    More information: https://cordis.europa.eu/project/id/769553 

    Challenge

    Currently in COUCH the user interacts with the coaches by selecting one of several predefined multiple-choice responses on a tablet or computer interface. Although this is a reliable way to capture input from the user, it may not be ideal for our target user group of older adults. Perhaps spoken dialogue can offer a better user experience?

    In the past, researchers found it quite difficult to sustain dialogues that relied on automatic speech recognition (ASR) (see e.g. Bickmore & Picard, 2005 [1]). However, recent commercial systems like Apple’s Siri and Amazon’s Alexa offer considerable improvements in recognising users’ speech. Such state-of-the-art systems might now be sufficiently reliable to support high-quality spoken dialogues between our coaches and the user.

    Assignment

    In your project you will adapt the COUCH system to support spoken interactions. In addition to incorporating ASR, you will investigate smart ways to organise the dialogue so that recognition remains adequate in noisy and uncertain settings while the conversation keeps going. Finally, you will evaluate the user experience and the quality of dialogue progress in various settings, and thereby the suitability of state-of-the-art speech recognition for live, open-setting spoken conversation.
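
    One robustness strategy you could explore is sketched below (hypothetical function names, not the actual COUCH API): gate each recognition result on its confidence score, ask for clarification on low confidence, and fall back to the existing multiple-choice interface after repeated failures, so the conversation never stalls.

      # Minimal sketch (hypothetical function names, not the actual COUCH
      # API): confidence-gated ASR with a clarification turn on low
      # confidence and a fallback to the multiple-choice interface.
      CONFIDENCE_THRESHOLD = 0.75
      MAX_RETRIES = 2

      def prompt_coach(text):
          print(f"[coach] {text}")  # stand-in for a coach speaking

      def handle_user_turn(recognise, options):
          """recognise() returns (transcript, confidence) from an ASR engine."""
          for attempt in range(MAX_RETRIES + 1):
              transcript, confidence = recognise()
              if confidence >= CONFIDENCE_THRESHOLD:
                  return {"modality": "speech", "utterance": transcript}
              if attempt < MAX_RETRIES:
                  prompt_coach("Sorry, could you say that again?")
          # Graceful degradation: show the predefined answers on the tablet.
          return {"modality": "multiple_choice", "options": options}

      # Example with a stubbed recogniser:
      result = handle_user_turn(lambda: ("I walked for an hour", 0.9),
                                options=["Yes", "No", "Tell me more"])
      print(result)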

    You will carry out the work in collaboration between Roessingh Research and Development (http://www.rrd.nl/) and researchers at the Human Media Interaction group of the University of Twente.

    Contact

    Dennis Reidsma (d.reidsma@utwente.nl)

    [1] Timothy W. Bickmore and Rosalind W. Picard. 2005. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact. 12, 2 (June 2005), 293–327. DOI: https://doi.org/10.1145/1067860.1067867

  • Large-scale data mining & NLP @ OCLC - Leiden, NL

    OCLC is a global library cooperative that provides shared technology services, original research and community programs for its membership and the library community at large. Collectively with member libraries, OCLC maintains WorldCat, the world’s most comprehensive database of information about library collections. WorldCat now hosts more than 460 million bibliographic records in 483 languages, aggregated from 18,000 libraries in 123 countries.

    As WorldCat continues to grow, OCLC is actively exploring data science, advanced machine learning, linked data and visualisation technologies to improve data quality, transform bibliographic descriptions into actionable knowledge, provide more functionality for professional cataloguers, and develop more services for end users of the libraries.

    OCLC is constantly looking for students who are enthusiastic to advance AI technologies for library and other cultural heritage data. Examples of student assignments are:

    • Fast and scalable semantic embedding for Information Retrieval (a small illustrative sketch of this topic follows the list)
    • eXtreme Multi-label Text Classification (XMTC) for automatic subject prediction
    • Automatic image captioning for Cultural Heritage collections
    • Entity extraction and disambiguation
    • Entity matching across different media (e.g. books, articles, cultural heritage objects, etc.) or across languages
    • Hierarchical clustering of bibliographic records
    • Constructing knowledge graphs around books, authors, subjects, publishers, etc.
    • Interactive visualisation of library data on geographic maps and/or along a time dimension
    • Concept drift (i.e., how meaning changes over time) and its effects on Information Retrieval
    • Scientometrics-related topics based on co-authoring networks and/or citation networks
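
    For the first topic, the sketch below gives a minimal flavour of embedding-based retrieval (made-up titles; the model is one of the standard sentence-transformers models):

      # Minimal sketch (made-up titles; a standard sentence-transformers
      # model): ranking bibliographic titles against a query by cosine
      # similarity of their embeddings.
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")

      titles = [
          "A grammar of the Dutch language",
          "Deep learning for natural language processing",
          "Medieval manuscripts of the Low Countries",
      ]
      query = "neural networks for NLP"

      title_emb = model.encode(titles, convert_to_tensor=True)
      query_emb = model.encode(query, convert_to_tensor=True)

      scores = util.cos_sim(query_emb, title_emb)[0]
      ranked = sorted(zip(titles, scores), key=lambda pair: -float(pair[1]))
      for title, score in ranked:
          print(f"{float(score):.3f}  {title}")

    At WorldCat scale, the exhaustive comparison above would be replaced by an approximate nearest-neighbour index, which is exactly where the "fast and scalable" part of the assignment comes in.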

    More details are available on request. 

    Contact: Shenghui Wang
    Email: shenghui.wang@utwente.nl

  • Robotics and mechatronics @ Heemskerk Innovative Technology (Delft)

    Company Information:

    Heemskerk Innovative Technology provides advice and support to innovative high-tech projects in the field of robotics and mechatronics. Our mission: to convert basic research into innovative business concepts and real-world applications, creating solutions for performing actions in places people themselves cannot reach – making the world smaller and better integrated, in an intuitive way.

    Focus areas:
    • Haptics
    • Dexterous manipulation
    • Master-slave control
    • Dynamic contact
    • Augmented Reality

    https://heemskerk-innovative.nl

    Example assignments (to be carried out in the first half of 2021):

    Current assignments focus on user-robot interaction, object detection and autonomous object manipulation in real-life settings, and human detection and tracking for navigation in human-populated environments, as part of developing the ROSE healthcare robot. A background in C++/Python and ROS is an advantage for students working on these assignments.

    Contact:
    Mariët Theune (EEMCS) <m.theune@utwente.nl>

  • Addiction, Coaching and Games @ Tactus - Enschede, NL

    Tactus is specialized in the care and treatment of addiction. They offer help to people who suffer from problems as a result of their addiction to alcohol, drugs, medication, gambling or eating. They help by identifying addiction problems as well as preventing and breaking patterns of addiction. They also provide information and advice to parents, teachers and other groups on how to deal with addiction.

    Assignment possibilities include developing game-like support and coaching apps.

    Website: https://www.tactus.nl/enschede

    Contact: Randy Klaassen

  • Stories and Language @ Meertens Institute - Amsterdam, NL

    The Meertens Institute, established in 1926, has been a research institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 1952. They study the diversity in language and culture in the Netherlands, with a focus on contemporary research into factors that play a role in determining social identities in Dutch society. Their main fields are:

    • ethnological study of the function, meaning and coherence of cultural expressions
    • structural, dialectological and sociolinguistic study of language variation within Dutch in the Netherlands, with the emphasis on grammatical and onomastic variation.

    Apart from research, the institute also concerns itself with documentation and providing information to third parties in the field of Dutch language and culture. The institute has a large library with numerous collections and an extensive documentation system, of which databases form a substantial part.

    Assignments include text mining, classification, and language technology, but also usability and interaction design.

    Website of the institute: http://www.meertens.knaw.nl/cms/

    Contact: Mariët Theune

  • Language and Retrieval @ Elsevier - Amsterdam, NL

    Elsevier, established in 1880, is the world's biggest scientific publisher. Elsevier publishes over 2,500 impactful journals, including Tetrahedron, Cell and The Lancet. Flagship products include ScienceDirect, Scopus and Reaxys. Increasingly, Elsevier is becoming a major scientific information provider. For specific domains, structured scientific knowledge is extracted from millions of Elsevier and third-party scientific publications (journals, patents and books) for querying and searching. In this way, Elsevier is positioning itself as the leading information provider for the scientific and corporate research community.

    Assignment possibilities include text mining, information retrieval, language technology, and other topics.

    Contact: Mariët Theune

  • Using (neuro)physiological signals @ TNO -- Soesterberg, NL

    At TNO Soesterberg (department of Perceptual and Cognitive Systems) we investigate how we can exploit physiological signals such as EEG brain signals, heart rate, skin conductance, pupil size and eye gaze in order to improve (human-machine) performance and evaluation. An example of a currently running project is predicting individual head rotations from EEG in order to reduce delays in streaming images to head-mounted displays. Other running projects deal with whether and how different physiological measures reflect food experience. Part of the research is done for international customers.
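
    As a minimal illustration of the kind of analysis involved (entirely synthetic data, not TNO's actual pipeline), the sketch below fits a ridge regression that predicts a continuous head-rotation signal from EEG band-power features and evaluates it with cross-validation:

      # Minimal sketch (entirely synthetic data, not TNO's pipeline):
      # ridge regression predicting a continuous head-rotation signal
      # from EEG band-power features, evaluated with cross-validation.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_epochs, n_features = 500, 64  # e.g. band power per EEG channel
      X = rng.standard_normal((n_epochs, n_features))
      true_w = rng.standard_normal(n_features)
      y = X @ true_w + 0.5 * rng.standard_normal(n_epochs)  # rotation (deg/s)

      model = Ridge(alpha=1.0)
      scores = cross_val_score(model, X, y, cv=5, scoring="r2")
      print(f"mean R^2 across folds: {scores.mean():.2f}")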

    More examples of projects, as reflected in our papers, can be found on Google Scholar.

    We welcome students with skills in machine learning and signal processing, and/or students who would like to set up experiments and work with human participants and advanced measurement technology.

    Contact: Jan van Erp <j.b.f.vanerp@utwente.nl>

  • AR for Movement and Health @ Holomoves - Utrecht, NL

    Holomoves is a company in Utrecht that combines HoloLens Augmented Reality with expertise in health and physiotherapy to offer new interventions for rehabilitation and healthy movement in a medical setting. Students can work with them on a variety of assignments including design, user studies, and/or technology development.

    More information on the company: https://holomoves.nl/

    Contact person: Robby van Delden, Dennis Reidsma 

  • Artificial Intelligence & NLP @ Info Support - Veenendaal, NL

    Info Support is a software company that makes high-end custom technology solutions for companies in the financial technology, health, energy, public transport, and agricultural technology sectors. Info Support is located in Veenendaal/Utrecht, NL, with research locations in Amsterdam, Den Bosch, and Mechelen (Belgium).

    Info Support has extensive experience in supervising graduating students, with assignments that not only have scientific value but also impact Info Support's clients and their clients' clients. As a university-level graduating student, you will become part of the Research Center within Info Support: a group of colleagues who, on top of their job as a consultant, have a strong affinity with scientific research. The Research Center facilitates and stimulates scientific research, with the objective of staying ahead in Artificial Intelligence, Software Architecture, and the Software Methodologies that are most likely to affect our future.

    Various research assignments in Artificial Intelligence, Machine Learning and Natural Language Processing can be carried out at Info Support.

    Examples of assignments include:

    • anonymizing streaming data in such a way that it does not affect the utility of AI and Machine Learning models (a small illustrative sketch follows this list)
    • improving the usability of Machine Learning model explanations to make them accessible for people without statistical knowledge
    • generating new scenarios for software testing, based on requirements written in a natural language and definitions of logical steps within the application
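
    To illustrate the first example (hypothetical record format; not Info Support's actual approach), the sketch below pseudonymises identifiers in a record stream with a keyed hash, so records about the same person can still be joined for model training while the raw identifier is never stored downstream:

      # Minimal sketch (hypothetical record format, not Info Support's
      # actual approach): pseudonymising identifiers in a record stream
      # with a keyed hash (HMAC). Records about the same person keep the
      # same pseudonym, so they remain joinable for model training.
      import hashlib
      import hmac

      SECRET_KEY = b"rotate-me-regularly"  # in practice: from a secrets manager

      def pseudonymise(record, id_field="customer_id"):
          token = hmac.new(SECRET_KEY, record[id_field].encode(), hashlib.sha256)
          return {**record, id_field: token.hexdigest()[:16]}

      stream = [
          {"customer_id": "C-1042", "amount": 12.5},
          {"customer_id": "C-1042", "amount": 3.0},
      ]
      for record in stream:
          print(pseudonymise(record))  # same pseudonym for the same customer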

    More details are available on request.

    Contact: Mariët Theune