HMI Student Assignments

If you, as a bachelor's or master's student, are looking for a final thesis assignment, capita selecta, research topic, or internship, you can choose between a large number of internal (at HMI) or external (at a company, research institute, or user organisation) assignments. You can also choose to create one yourself. The search for a suitable location or topic often starts here: below you will find a number of topics you can work on.

Note that in preparation for a final MSc assignment you must first carry out a Research Topics project, resulting in a final project proposal.

ASSIGNMENTS AND TOPICS THAT CAN BE CARRIED OUT INTERNALLY AT HMI

  • Exchange project Master Graduation – Melbourne, Australia

    The Australian Catholic University in Melbourne offers a number of exciting graduation projects that revolve around the use of Virtual Reality to study matters related to sports and movement. Such topics are closely related to the scientific work on Sports Interaction Technology that is being carried out at the Human Media Interaction group. Below, you can find a brief description of the projects that you can work on. For your graduation project, you will receive joint supervision: both from prof. dr. Gert-Jan Pepping (ACU) and Dees Postma (UT). These graduation projects require technical skills related to Virtual Reality, programming, sensor systems (IMUs), and data processing. If you are interested, reach out to Dees Postma: d.b.w.postma@utwente.nl

    Keeping pace: Investigating the boundary between a sustainable and an unsustainable pace in endurance sports

    Pacing in sports is key to performance. This holds true not only in endurance sports but also in many team sports. To perform well, athletes need to control their energy expenditure in relation to their internal physiological state; their opponents; and the task they are performing. Pacing in a 5k race is likely much different from pacing a marathon; pacing whilst running alone is likely much different from running a race; and pacing strategies might differ depending on the task or objectives – a training run versus a competitive run. In any case, for an athlete to perform well, they should be sensitive to their action boundaries: that is, they should be able to distinguish between a sustainable rate of energy expenditure and an unsustainable rate of energy expenditure. Currently, little is known about the perceptual processes that inform an athlete about the sustainability of their current actions and how this informs their decision making on the field. To illustrate, an outfielder in baseball will likely not run to make an attempted catch for every ball that is batted – they need to be selective in order not to get worn out energetically. In this project, you will investigate the perceptual information that informs athletes of their action boundaries and shed light on how the perception of this action boundary influences decision making in sport.

    Using Virtual Reality to improve decision-making in baseball (in collaboration with Baseball Queensland)

    A critical aspect of decision-making and a baseball player's on-the-pitch behaviour is their situation awareness (SA), that is, the level of awareness that an individual has of a situation; a player's dynamic understanding of 'what is going on around them' during the game. Research has shown that SA is importantly linked to players' decision-making, development, performance, and rehabilitation. That is, SA: i. can be (and needs to be) developed from a young age, and needs to be promoted and maintained during training; ii. is related to players' and referees' performance and expertise; that is, better, more skilled/expert players/referees possess a higher degree of SA; iii. is related to injury proneness, as well as rehabilitation; that is, lowered SA is a precursor to injury, and increased/recovered SA can be used as an identifier for game readiness following rehabilitation. In this honours project, which takes place in ACU's Perception-Action Rehabilitation Clinic and Learning Environment (PARCLE), we use Virtual Reality to assist player development, player monitoring, and rehabilitation in baseball.

    Improving decision-making in high performance team-sport

    A critical aspect of decision-making in team-sport and a player's on-the-pitch behaviour is their situation awareness (SA), that is, the level of awareness that an individual has of a situation; a player's dynamic understanding of 'what is going on around them' during the game. Research has shown that SA is importantly linked to players' decision-making, development, performance, and rehabilitation. That is, SA: i. can be (and needs to be) developed from a young age and needs to be promoted and maintained during training; ii. is related to players' and referees' performance and expertise; that is, better, more skilled/expert players/referees possess a higher degree of SA; iii. is related to injury proneness, as well as rehabilitation; that is, lowered SA is a precursor to injury, and increased/recovered SA can be used as an identifier for game readiness following rehabilitation. We have a number of honours projects, in which we use a wireless wearable technology system (SATS) to assist player development, player monitoring, and rehabilitation in team-sport (soccer, field-hockey, AFL) to address important research questions in skill acquisition and SA.

    Using Virtual Reality to prevent falls

    Gait-related falls are a large public health burden, and both the sheer number of gait-related falls and the associated societal costs continue to increase. Recent research has shown that an individual's ability to adapt their gait is an important factor related to gait-related falls and mobility as people age. In the current honours project, which takes place in ACU's Perception-Action Rehabilitation Clinic and Learning Environment (PARCLE), we will use Virtual Reality and the task of bushwalking as an activity that can improve gait adaptability of community-dwelling older adults. Suitable for exercise science, high performance sport, science, and psychology students.

  • Large Language Model-Based Sport Coaching System Using Retrieval-Augmented Generation and User Models @ HMI - Enschede, NL

    Background:
    Large language models are advanced artificial intelligence systems trained on vast amounts of text data, capable of understanding and generating human-like language. These models, such as OpenAI's GPT series, use deep learning techniques to capture linguistic patterns and generate relevant text across a wide range of tasks and domains. Using prompt engineering, these models can be instructed to generate natural language coaching strategies that can be used by artificial agents (chatbots, social robots, virtual agents). However, the generated responses need to be personalized to the user. User models in the context of AI systems refer to representations of individual users' characteristics, preferences, and behaviors, which are used to personalize interactions and tailor recommendations. These models enable the system to adapt its responses or actions based on factors such as user demographics, past interactions, and stated preferences. We can use these user models to tailor the model's responses using retrieval-augmented generation. This technique combines generative models with retrieval-based methods to improve the quality and relevance of generated content. In this approach, the system retrieves relevant information or context from a pre-existing database or corpus and incorporates it into the generation process, ensuring that the generated output is grounded in real-world knowledge or context.
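
    To make the pipeline concrete, the sketch below shows the basic RAG loop combined with a user model: retrieve the guideline snippets most relevant to the question, prepend the user profile, and hand the assembled prompt to a language model. Everything here is illustrative; llm_complete, the toy word-overlap retriever, and the profile fields are placeholders for a real LLM API, a proper vector store, and an actual user model.

        # Illustrative RAG loop with a user model; `llm_complete` stands in
        # for any LLM completion API, and retrieval is a toy word-overlap
        # ranking rather than a real vector store.
        USER_MODEL = {
            "fitness_level": "beginner",
            "goal": "run 5k without stopping",
            "past_sessions": "2x walk/run intervals; 20 min easy jog",
        }

        KNOWLEDGE_BASE = [
            "Beginners should increase weekly running volume by at most ~10%.",
            "Walk/run intervals are an effective progression for novice runners.",
            "Schedule at least one rest day between intensive sessions.",
        ]

        def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
            """Toy retrieval: rank documents by word overlap with the query."""
            q = set(query.lower().split())
            return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

        def build_prompt(question: str) -> str:
            """Ground the generation in retrieved guidelines and the user model."""
            context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
            profile = ", ".join(f"{k}: {v}" for k, v in USER_MODEL.items())
            return (f"You are a running coach.\nUser profile: {profile}\n"
                    f"Relevant guidelines:\n{context}\n\n"
                    f"Question: {question}\nAdvice:")

        # answer = llm_complete(build_prompt("What should my next session look like?"))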

    Task:
    Design and implement a sports coaching (recommendation) system that utilizes large language models, retrieval-augmented generation (RAG) techniques, and user models to provide personalized coaching and guidance to trainees. The project aims to evaluate the practical application of pre-trained models in sports coaching, using RAG techniques rather than retraining.

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • A Comparative Study of Touch Feedback in Coaching Scenarios: Haptic Vest vs. Embodied Social Robot in Squatting Exercises @ HMI - Enschede, NL

    This project aims to compare the effectiveness of touch feedback provided by a haptic vest versus an embodied social robot in coaching squatting exercises. You will investigate how tactile feedback influences user performance, technique improvement, and overall user experience in physical training scenarios.

    Tasks:
    • Design an experimental setup for comparing touch feedback provided by a haptic vest and an embodied social robot during squatting exercises.
    • Research the relevant parameters to be measured, such as squatting technique, user comfort, perceived exertion, and motivation.
    • Develop a haptic vest prototype capable of providing tactile feedback corresponding to squatting movements, such as vibrations or pressure sensations.
    • Implement an embodied social robot system with sensors and actuators to provide physical guidance and feedback during squatting exercises.
    • Evaluate the effectiveness of these technologies in a user experiment.

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • Non-stationary preference learning in online human-robot interaction @ HMI - Enschede, NL

    The quest for autonomous and adaptive behavior in robotics has led to a growing interest in Preference-Based Reinforcement Learning (PBRL), which leverages human preferences to guide learning, particularly in dynamic and unpredictable environments. PBRL stands as a bridge between the advanced learning capabilities of robots and the nuanced preferences humans possess, although it faces challenges due to the non-stationary nature of human preferences. Non-stationary dueling bandits, however, offer a compelling solution by incorporating mechanisms to adapt to changing preferences over time. These adaptive algorithms enable robots to maintain high adaptability, providing a more personalized and satisfying user experience, especially in long-term interactions. Challenges in this field include efficiently sampling users' limited feedback, balancing exploration and exploitation, incorporating context-dependent preferences, handling noisy feedback, and ensuring learned preferences lead to meaningful and safe robot behavior. This innovative approach holds promise for creating robotic systems that evolve alongside dynamic human preferences, with applications spanning various domains such as assistive robots, rehabilitation robotics, social companionship, interactive learning environments, manufacturing, collaborative work, and entertainment.
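
    As a starting point, the sketch below shows one simple way to make pairwise preference learning non-stationary: discount old duel outcomes so that recent feedback dominates, and query the pair whose outcome is currently most uncertain. This is a minimal illustration, not one of the published dueling-bandit algorithms; the behaviour names and the discount factor are made up for the example.

        from collections import defaultdict

        GAMMA = 0.95  # discount factor: older feedback counts less over time

        wins = defaultdict(float)  # wins[(a, b)] = discounted count of "user preferred a over b"

        def update(winner: str, loser: str) -> None:
            """Discount all past duel outcomes, then record the new one."""
            for key in wins:
                wins[key] *= GAMMA
            wins[(winner, loser)] += 1.0

        def pref_prob(a: str, b: str) -> float:
            """Laplace-smoothed estimate that the user currently prefers a over b."""
            w, l = wins[(a, b)], wins[(b, a)]
            return (w + 1.0) / (w + l + 2.0)

        def choose_duel(arms: list[str]) -> tuple[str, str]:
            """Query the pair whose preference is currently most uncertain."""
            pairs = [(a, b) for a in arms for b in arms if a < b]
            return min(pairs, key=lambda p: abs(pref_prob(*p) - 0.5))

        behaviours = ["calm_gesture", "energetic_gesture", "verbal_only"]
        a, b = choose_duel(behaviours)
        update(winner=a, loser=b)  # pretend the user picked `a` in this duel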

    The tasks involve:
    (1) familiarizing oneself with literature on preference-based reinforcement learning or adaptation algorithms in non-stationary situations,
    (2) reviewing literature on adaptive robot behavior in human-robot interaction scenarios and selecting a preferred use case,
    (3) developing a human-robot interaction use case where adapting to user preferences is crucial and can vary over time,
    (4) selecting a suitable online preference learning algorithm for the chosen use case and implementing it,
    (5) conducting a user study to evaluate the effectiveness of adapting to changing preferences and user experience, and finally,
    (6) analyzing the data from the study and presenting the results in a report.

    Contact person: Sebastian Schneider, s.schneider@utwente.nl

  • Lifestyle Data Futures @ HMI - Enschede, NL

    Non-communicable diseases (NCDs), also known as chronic diseases, are non-infectious or non-transmissible diseases, such as cardiovascular diseases, diabetes, cancers, or chronic respiratory diseases [1]. These are usually associated with a person's lifestyle or behavior (for example, poor diet, physical inactivity, smoking). According to the World Health Organization, non-communicable diseases (NCDs) are the leading cause of death and disability in the world, causing 41 million deaths every year [2,3]. This is equivalent to 74% of all deaths globally [3]. Various risk factors contribute to NCDs. These include modifiable behavioral risk factors such as physical inactivity, unhealthy diet, and alcohol or tobacco use; metabolic risk factors such as high blood pressure, obesity, high blood glucose levels, and high levels of fat in the blood; and environmental risk factors such as air pollution [2-6]. NCDs can be prevented, or the risk can be reduced, by managing these risk factors. Communication of the risk factors and stimulating a healthy lifestyle and change in behavior are vital for achieving this. Current means of communicating these risk factors and lifestyle change recommendations are mainly manual or dominantly app-based (either mobile or desktop screen-based apps). While app-based systems are easy to create and deploy, existing research has shown limitations such as display blindness, attention overload through notifications, low recall of content, and a lack of social and contextual situated information [7].

    Tangible user interfaces (TUIs), data physicalisations (physical representations of data) [8], and embedded data representations [9] offer a unique opportunity to address this issue in a different way. This project aims to explore the potential of tangible user interfaces and data physicalisations/sensifications for:

    • communication of data associated with modifiable lifestyle risk factors (e.g. physical activity data, blood sugar level, etc.)
    • designing interventions for stimulating lifestyle/ behavior change toward managing risk factors associated with NCDs
    • data experiences embedded in everyday spaces; here we are particularly interested in making data and interventions ambient

    You will explore various physical, tangible means for communicating risk factors and potential interaction strategies and design future lifestyle data experiences.

    Lifestyle Data Futures offers a diverse set of student assignments (MSc thesis, Capita Selecta) and can take various forms (e.g., developing interactive systems, exploring how edge devices, sensing and actuation can be used to realize interventions, empirical studies to evaluate interventions, participatory design projects to design potential interventions, etc.).

    These assignments call for students from diverse backgrounds, such as Smart Technology, Electrical Engineering, Human-Computer Interaction, Interaction Technology, Embedded Systems, Ubiquitous Computing, or Biosignals & Systems.

    Would you like to know more? Contact: Champika Ranasinghe, c.m.eparanasinghe@utwente.nl

    References:

    [1].  United Nations Children's Fund (UNICEF), (April 2021). Non-communicable diseases. [Online]. Available at: https://data.unicef.org/topic/child-health/noncommunicable-diseases/, last accessed on 25-11-2023.

    [2].  Pan American Health Organization, Regional Office for the Americas of the World Health Organization (n.d.). Noncommunicable Diseases. [Online]. Available at: https://www.paho.org/en/topics/noncommunicable-diseases, last accessed on 25-11-2023.

    [3].  World Health Organization, (Sep, 2023). Noncommunicable Diseases. [Online]. Available at: https://www.who.int/news-room/fact-sheets/detail/noncommunicable-diseases, last accessed on 25-11-2023.

    [4].  Al-Maskari, F., United Nations (n.d.). Lifestyle Diseases: An Economic Burden on the Health Services. [Online]. Available at: https://www.un.org/en/chronicle/article/lifestyle-diseases-economic-burden-health-services, last accessed on 25-11-2023.

    [5].  Centers for Disease Control and Prevention, U.S. Department of Health & Human Services (Oct 2020). Lifestyle Risk Factors. [Online]. Available at: https://www.cdc.gov/nceh/tracking/topics/LifestyleRiskFactors.htm#anchor_1606671360465, last accessed on 25-11-2023.

    [6].  Budreviciute, A., Damiati, S., Sabir, D. K., Onder, K., Schuller-Goetzburg, P., Plakys, G., Katileviciute, A., Khoja, S., & Kodzius, R. (2020). Management and prevention strategies for non-communicable diseases (NCDs) and their risk factors. Frontiers in Public Health, 8, 788.

    [7].  Brombacher, H., Houben, S., & Vos, S. (2023). Tangible interventions for office work well-being: approaches, classification, and design considerations. Behaviour & Information Technology, 1-25.

    [8].  Jansen, Y., Dragicevic, P., Isenberg, P., Alexander, J., Karnik, A., Kildal, J., ... & Hornbæk, K. (2015, April). Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3227-3236).

    [9].  Willett, W., Jansen, Y., & Dragicevic, P. (2016). Embedded data representations. IEEE transactions on visualization and computer graphics, 23(1), 461-470.

  • Virtual Reality for Motor Learning in Sports @ HMI - Enschede, NL

    Virtual Reality holds great promise for skill acquisition and motor learning in sports. Virtual Reality is particularly well suited for creating systematically varied, controlled environments in which the athlete can safely practice complex motor movements. Moreover, VR allows for the provision of rich (visual) augmented feedback that goes beyond what is possible in real life – flexibly blending qualities from both the digital and the physical world. Finally, modern-day Virtual Reality setups are powered by accurate trackers that allow for the automatic measurement of motor behavior – recording objective measurements of performance.
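
    As an illustration of such objective measurements, the sketch below computes a simple smoothness proxy (RMS jerk, the third derivative of position) from a stream of tracker positions; lower values indicate smoother movement. The sampling rate and the synthetic data are assumptions for the example.

        import numpy as np

        def jerk_rms(positions: np.ndarray, dt: float) -> float:
            """RMS jerk of a tracked marker; lower means smoother movement.
            positions: (N, 3) array of x, y, z samples at a fixed rate 1/dt."""
            velocity = np.gradient(positions, dt, axis=0)
            acceleration = np.gradient(velocity, dt, axis=0)
            jerk = np.gradient(acceleration, dt, axis=0)
            return float(np.sqrt((jerk ** 2).sum(axis=1).mean()))

        # Example on synthetic 90 Hz tracker data (3 s of drifting noise)
        samples = np.cumsum(np.random.randn(270, 3) * 0.001, axis=0)
        print(jerk_rms(samples, dt=1 / 90))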

    Despite these positive qualities, little is known about the factors that promote or thwart the transfer of motor learning from the Virtual Reality training context to the real-world performance context. This is especially true for situations wherein the nature of the VR feedback is different from the feedback that is typically given in sports practice.

    In sports, the demands on transfer are high as motor behavior in sports needs to be swift and precise to be successful. In this graduation project, you will design and work with Virtual Environments that enable you to investigate the elements that impact motor learning, either positively or negatively.

    If you are interested, reach out to Dees Postma: d.b.w.postma@utwente.nl

  • The Design and Development of Interactive Technology to Benefit the Diagnostic Process for Autism @ HMI - Enschede, NL

    Developmentally diverse individuals, including people with autism, exhibit movement coordination patterns (both individual and during interactions with others) that are different from their typically developing peers. Autistic individuals often show lower levels of coordination, atypical gait patterns, and are less physically active.

    As autism is increasingly diagnosed at adult ages, and females are more likely to be un-, mis- or late diagnosed, objective motor assessments will aid autism diagnosis. As such, we will design and develop interactive technologies that systematically measure diagnostically relevant motor behaviours. We will design the technology to be easy to use and scalable. We will use state-of-the-art motion capture systems (e.g., XSens, OpenPose, MediaPipe) to sense motor behaviour and use persuasive technologies, such as interactive projections, tangibles, and wearables, to elicit diagnostically relevant motor behaviours.
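
    As a small illustration of the sensing side, the sketch below uses MediaPipe's Python pose solution to extract one landmark (the left hip) from a webcam stream; in the project, this kind of per-frame landmark stream would feed the actual movement analysis. The confidence threshold and choice of landmark are arbitrary for the example.

        import cv2
        import mediapipe as mp

        mp_pose = mp.solutions.pose

        # Stream webcam frames through MediaPipe Pose and print one landmark.
        cap = cv2.VideoCapture(0)
        with mp_pose.Pose(min_detection_confidence=0.5) as pose:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if results.pose_landmarks:
                    hip = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_HIP]
                    print(f"left hip (normalised): x={hip.x:.3f}, y={hip.y:.3f}")
        cap.release()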

    We will start with a systematic literature review on the movement-based markers of autism. Then, we will use these markers to design meaningful and ecologically valid activities that embody the movement markers of autism.

    If you are interested, reach out to Dees Postma: d.b.w.postma@utwente.nl

  • Design of an Interactive Shared-Health Decision Making System @ UT - Enschede, NL

    Background: Currently, we can use various tools such as mobile phone apps and smartwatches to self-manage our physical and mental health. We can track various data, such as heart rate, sleep quality, daily physical activity, and even the estimated nutritional value of the food we eat. Based on this data, technology can suggest ways of improving the quality of our health. The suggestions can be going for a walk, eating different types of food, sleeping more hours, or consulting a doctor when the data signals something is wrong. These suggestions are regarded as the early and simple steps of shared decision-making through human-technology partnerships.


    This project aims to understand the extent and limits of human trust in human-technology partnerships during shared health decision-making. The goal of the project is to identify ways of empowering individuals and society to make informed health decisions to enhance human autonomy in such human-technology partnerships.


    Assignment description:

    1. Understand the different approaches to shared decision-making that exist on the market, and decide on and develop an optimal solution grounded in behavioural and physiological science.

    2. Design and develop a mobile or wearable sensor UI for this purpose. It would be an asset if the student is comfortable exploring both front-end and back-end development.

    3. The design can also involve physical prototyping; we do not limit the project to UI design. Any tangible physical interaction design solution is welcome.

    4. Test the solution with a number of users, potentially using a wearable device.


    Research and Design Questions:

    • What are the benefits and challenges of designing for shared health decision-making?

    • How do human-technology partnerships shape shared health decision-making?

    • How can we better design for shared health decision-making by informing users?


    Master's student project

    We are looking for a Master's student with an interest in the background sketched above who is eager to perform a user-centred design and evaluation of a shared health decision-making prototype based on health data needs and user experience. During the project, the student is expected to build on the supervisor's findings on experiences with digital health technologies. The project will comply with the GDPR guidelines; hence, knowledge of health data privacy is appreciated.


    About the Project

    This project is part of an NWO-funded project entitled "My Life, My Decision": Trust Issues in Human-Technology Shared Health Decision-Making (MINDED).


    For this project, the Master's student will be supervised by Dr. Armağan Karahanoğlu, with support from Dennis Reidsma from HMI, and will work closely with the post-master researcher Sterre van Arum. For additional information about the project, please contact Dr. Karahanoğlu (a.karahanoglu@utwente.nl) or Dennis Reidsma (d.reidsma@utwente.nl).

  • Designing Expressive Drone Movements to Support Running Movement @ HMI - Enschede, NL

    Introduction
    In the field of human communication, kinesics (generation and interpretation of body movements) plays a pivotal role in conveying information, emotions, and attitudes [1]. This dynamic aspect of human interaction has been extended to the domain of robotics, where researchers have explored the use of robotic movements to express a wide range of information, from simple intent signalling to intricate human movements [2]. Previous studies in Human-Robot Interaction (HRI) have demonstrated that users perceive and respond to robots' movements as expressive, even when unintentionally anthropomorphic [2]. These robotic motions have been interpreted by users as imbued with emotions, spanning from calming to agitated, consequently transforming into expressive movements due to their associations with emotions [2].  Earlier works have also suggested that users favour interacting with robots equipped with expressive movement capabilities and are inclined to adjust their behaviour based on these expressive robotic motions [2].

    Among the myriad of robots explored in this context, drones stand out as a particularly promising platform. Drones possess the capacity to navigate along six different axes (three rotational and three translational), enabling the generation of intricate expressive movements. Earlier works have explored the potential for expressive drone movements in improvisational theatre (using Laban's effort theory) [3], and to convey the intent or status of the drone [4-6]. Building on this foundation, this research project would explore the untapped potential of using drone movements to support running activities. Previous studies conducted by our group have indicated a positive reception among runners towards the incorporation of drones in their running routines [manuscript will be provided]. While initial findings have indicated runners' preferences for simple drone motions along the six axes in conveying running-related information, our goal is to explore how more expressive drone movements can convey specific details about various running parameters.

    Distinguishing drones from humanoid robots, drones cannot replicate human motions in detail.  Instead, we must rely on abstract representations. Research has demonstrated that humans are not only adept at extracting information from minimal and abstract visual cues but also possess the capability to attribute emotions and intentions to abstract movements [1]. These abstract expressive movements have the potential to effect changes in human behaviour, a potential that remains largely unexplored in the context of drones and, more specifically, in the domain of running.

    In this project you can draw upon a diverse range of concepts and principles of expressive motion, spanning from animation techniques to insights from the field of robotic expressive movements. This project will be an interdisciplinary endeavour, with the specific approach to be determined collaboratively between the students and the supervisor, ensuring a well-informed and innovative exploration of this exciting research avenue.

    Goal:
    The goal of this project is to develop an interactive experience driven by expressive drone movements. In pursuit of this goal, your task involves designing a set of expressive drone movements to provide runners with feedback about their running movement. Achieving this objective necessitates conducting a series of user studies and a literature review. Throughout this process, you will explore which aspects of the drone's trajectory and movement contribute to expressiveness and how these elements correlate with the running feedback to be conveyed.

    To facilitate the design and testing of this interactive experience, the group will have the option to implement it using the Crazyflie drone system (currently in the process of procurement) or to create a virtual experience where the drone's actions are animated and displayed on a screen. Further details regarding the implementation approach will be discussed at a later stage. If you choose to work with the Crazyflie system, you will also contribute to setting up the system for potential use in future research projects.
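
    For a first impression of how such movements could be scripted, the sketch below uses the Crazyflie Python library (cflib) and its MotionCommander to fly two toy "expressive" primitives. The radio URI, heights, and the gesture-to-meaning mapping are placeholders; designing that mapping well is precisely the research question.

        import time
        import cflib.crtp
        from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
        from cflib.positioning.motion_commander import MotionCommander

        URI = "radio://0/80/2M/E7E7E7E7E7"  # adjust to your Crazyflie's address

        cflib.crtp.init_drivers()
        with SyncCrazyflie(URI) as scf:
            with MotionCommander(scf, default_height=0.8) as mc:
                # Hypothetical "encouraging" gesture: two quick upward bounces
                for _ in range(2):
                    mc.up(0.2, velocity=0.6)
                    mc.down(0.2, velocity=0.6)
                    time.sleep(0.2)
                # Hypothetical "slow down" gesture: a wide, slow sideways sway
                mc.left(0.3, velocity=0.2)
                mc.right(0.3, velocity=0.2)
            # leaving the MotionCommander context lands the drone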

    Additionally, students are expected to evaluate the effectiveness of the designed movements by identifying and measuring relevant metrics uncovered during their research in this domain. It's important to note that these studies will be conducted indoors.

    Student Tasks: (Suggestions in no particular order)

    1) Literature review and user studies (where applicable): Understand the concept of expressive movements in the context of robotics. Identify specific running parameters that warrant attention when delivering feedback through expressive movements.

    2) Design Workshops: Engage relevant users in creating movement designs prior to technology implementation.

    3) Programming Drones/Generating Animations: To visualize the movements gathered using workshop insights.

    4) Interactive Experience Development: Create an interactive experience that responds to specific running movements.

    5) User Study for Evaluation: Evaluate designed movements using predetermined metrics.

    The students have creative freedom in designing the study methodology, provided it's logical and sound. The use of a drone, if available, is recommended for an immersive user experience, although its availability may be subject to change.

    References/Reading Materials:

    1. Hoffman, Guy, and Wendy Ju. "Designing robots with movement in mind." Journal of Human-Robot Interaction 3.1 (2014): 91-122.

    2. Venture, Gentiane, and Dana Kulić. "Robot expressive motions: a survey of generation and evaluation methods." ACM Transactions on Human-Robot Interaction (THRI) 8.4 (2019): 1-17.

    3. Sharma, Megha, et al. "Communicating affect via flight path exploring use of the laban effort system for designing affective locomotion paths." 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2013.

    4. Bevins, Alisha, and Brittany A. Duncan. "Aerial flight paths for communication: How participants perceive and intend to respond to drone movements." Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. 2021.

    5.  Duncan, Brittany A., et al. "Investigation of communicative flight paths for small unmanned aerial systems." 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.

    6. Firestone, Justin W., Rubi Quiñones, and Brittany A. Duncan. "Learning from users: an elicitation study and taxonomy for communicating small unmanned aerial system states through gestures." 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2019.

    7. Desai, Ruta, et al. "Geppetto: Enabling semantic design of expressive robot behaviors." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.

    8. Soon-to-be-published manuscripts will be made available later.

    Supervisors:
    Dennis Reidsma: d.reidsma@utwente.nl
    Aswin Balasubramaniam: a.balasubramaniam@utwente.nl

  • Designing Post-Run Reflective Experiences To Support Learning of Movement Through a Dashboard Interface @ HMI - Enschede, NL

    Introduction:
    In our previous studies, we discovered that runners appreciated the availability of drone-captured run videos as a means to facilitate self-reflection. Reflecting on sports activities through the use of video has proven to be a highly effective method for athletes to enhance their performance across various sports [1-4]. With the recent technological advancements in drone technology and video processing, we now have the capability to capture videos of runners and extract essential running parameters from these videos without encumbering the athletes. However, runners also expressed a desire for these videos to offer insights into their performance, including the identification of moments of error during their runs and guidance on how to improve. To address this, they proposed the integration of such information into a dashboard interface.

    Traditionally, dashboards are GUIs designed to present critical information within extensive datasets. Recent developments in big data handling have transformed dashboards into interfaces composed of various charts and numerical representations. This design choice also extends to applications related to running, where the focus is on presenting extensive time series data through simplified graphs and visuals [5]. In the realm of sports data, these numerical insights often hold significant meaning, yet presenting this information in a manner that effectively conveys its significance remains an underexplored avenue, offering a unique opportunity for research. This serves as the foundation of the project: an exploration of an "unconventional" dashboard, departing from prevailing trends. The aim is to create a dashboard that effectively communicates the underlying significance of running data through videos, enriching the post-run reflection experience for runners with a focus on promoting motor skill learning.

    While not utilizing a dashboard interface, a compelling source of inspiration for students considering this project is 'Illuminate' [6]. This installation serves the purpose of aiding individuals in visualizing their movements through real-time visuals, rendering visible what remains 'invisible' to the observer. The overarching goal was to offer users a visceral experience, one that deeply resonates with their inner senses, to push the boundaries of their sensory perception. Similarly, the objective of this project aligns with this concept, aiming to craft an experience that evokes profound emotions, ultimately assisting runners in comprehending and enhancing their runs. In pursuit of these objectives, students will be expected to apply somaesthetic design methods and draw from various theories pertaining to motor learning.

    Somaesthetic design compels designers to consider the bodily experiences, sensations, and feelings of the relevant users while developing their concepts [7]. To gain context and inspiration for utilizing somaesthetic design, students can examine existing work in this field [8]. Additionally, students can tap into various pedagogical approaches, such as self-modelling, as well as theoretical perspectives on motor learning to inform their design decisions [9]. Two prevailing theories in motor learning are the representational and anti-representational theories. The representational theory posits that motor learning is internalized, akin to forming a mental model, where athletes break down their movements into smaller components and train them individually. Conversely, the anti-representational theory suggests that athletes contemplate all the possible actions they can undertake to support their motor learning. Running typically unfolds as a fast-paced experience, often reaching a point where everything around the runner seems to slow down. Incorporating somaesthetic principles into the design process can help translate the runners' intricate feelings and sensations into meaningful representations, thereby offering valuable insights for the design process. Runners have also noted that when they are in the midst of a run, they often enter a meditative state, wherein their mental focus turns inward. Designers can leverage this aspect of the running experience through somaesthetic methods, enhancing the overall design process.

    Goal:
    The aim of this project is to develop an interactive system that enables runners to gain insights into their movement through visual presentations on a dashboard. This system will be designed with the primary goal of enhancing the running experience and improving performance, supported by drone-captured videos. As the central focus here is on self-learning through visual aids, runners should have the ability to discern the visuals effectively. To drive this interactive experience, running data processed using drone videos will be used to help runners become more attuned to the actions they need to perform (education of intention) or to provide information that guides their running behaviour (education of attention).
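
    As an example of turning drone-video data into a dashboard-ready running parameter, the sketch below estimates cadence from the vertical hip trajectory of a tracked runner by counting oscillation peaks. The frame rate, the peak-distance heuristic, and the assumption of one step per peak are simplifications for illustration.

        import numpy as np
        from scipy.signal import find_peaks

        def cadence_spm(hip_y: np.ndarray, fps: float) -> float:
            """Estimate cadence (steps per minute) from the vertical hip
            trajectory extracted from drone video, assuming one oscillation
            peak per step."""
            signal = hip_y - hip_y.mean()
            # require peaks at least 0.25 s apart (caps cadence at ~240 spm)
            peaks, _ = find_peaks(signal, distance=int(0.25 * fps))
            duration_min = len(hip_y) / fps / 60.0
            return len(peaks) / duration_min

        # cadence = cadence_spm(hip_trajectory, fps=30)  # trajectory from pose tracking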

    Student Tasks: (Suggestions in no particular order)

    1) Literature Review: Conduct a literature review on topics related to theories of motor learning, self-modelling, self-reflection through videos, soma design, key performance indicators of running, and design methods.

    2) User Studies: Utilize the gained knowledge to empathize with runners. Gain a comprehensive understanding of their learning process, the nuances of adapting their running technique, their sensory experiences during this process, and other relevant information.

    3) Lo-Fi Prototype: Employ appropriate design techniques to create a lo-fi prototype, drawing from insights obtained during the literature review and user studies.

    4) Develop Hi-Fi Prototype: Develop an interactive experience through a dashboard medium, building upon the foundations laid by the low-fidelity prototyping stage.

    5) Assess User Experiences Using the Prototype: Identify relevant metrics and formulate questions related to self-reflection, self-modelling, and motor learning. Evaluate runners' experiences while utilizing the dashboard, gathering valuable insights.

    References/Reading Materials:

    1. Rhoads, Michael C., et al. "A meta-analysis of visual feedback for motor learning." Athletic insight 6.1 (2014): 17.

    2. Rymal, Amanda M., Rose Martini, and Diane M. Ste-Marie. "Self-regulatory processes employed during self-modeling: A qualitative analysis." The Sport Psychologist 24.1 (2010): 1-15.

    3. Groom, Ryan, and Lee Nelson. "The Application of Video-Based Performance Analysis in the Coaching Process 1: The coach supporting athlete learning." Routledge handbook of sports coaching. Routledge, 2013. 96-107.

    4. Trudel, P., and W. Gilbert. "Learning to coach through experience: Reflection in model youth sport coaches." Journal of teaching in physical education 21 (2001): 16-34.

    5. Bumblauskas, Daniel, et al. "Big data analytics: transforming data to action." Business Process Management Journal 23.3 (2017): 703-720.

    6. https://www.media.mit.edu/projects/illuminate/overview/

    7. https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/somaesthetics

    8. Hendriks, Sjoerd, et al. "Azalea: Co-experience in remote dialog through diminished reality and somaesthetic interaction design." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.

    9. Postma, Dees BW, et al. "A Design Space of Sports Interaction Technology." Foundations and Trends® in Human–Computer Interaction 15.2-3 (2022): 132-316.

    10. Soon-to-be-published manuscripts will be made available later.

    Supervisors:
    Dennis Reidsma: d.reidsma@utwente.nl
    Aswin Balasubramaniam: a.balasubramaniam@utwente.nl

  • Improving indoor navigation via multi-modal smartphone signals @ HMI - Enschede, NL

    Background: In our increasingly digitized and interconnected world, the utilization of various modalities for guiding individuals and alerting them to potential dangers is of paramount importance. This project aims to explore the effect of signal modality on navigation and warning signal responses in a Virtual Reality (VR) environment. As part of this project, you will investigate the comparative efficacy of tactile, auditory, and visual signals in guiding user behavior within a VR setting. Tactile signals, presented directly to individuals, are theorized to be a potentially more efficient means of guiding people, particularly in indoor environments, compared to traditional static visual signage, dynamic lighting, or auditory cues. The limitations of visual signage arise from its inability to adapt to specific individuals or dynamic situations. Auditory signals, while effective in signaling danger or providing directional information, may lead to confusion and sensory overload when multiple signals are present.

    Project Needs: To efficiently investigate the possible viability of (tactile) guidance via smartphones, we require the development of a cost-effective, handheld "phone" mockup that can be tracked within the VR environment (or on-screen, as long as compatibility with Unity is ensured). These mockups should incorporate vibration motors, LEDs, and possibly speakers emitting sinusoidal noise to enable the presentation of visual and auditory signals for comparison.

    Research Questions and Problem Statement:  In addition to addressing the technical challenges of the development, the study will encompass a user-focused investigation. Key research questions include:

    • How can we design tactile signals that are distinguishable from each other?
    • Does tactile feedback outperform other modalities in terms of reaction speed and user comprehension?
    • What is the optimal number of tactile vibrations that can be effectively distinguished by users (e.g., signaling actions like going up/down stairs, stopping, turning left/right, akin to Morse code)?
    • Can an intuitive tactile vibration pattern be established for common directional cues (e.g., right)? (A minimal pattern-encoding sketch follows this list.)
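
    To make the last two questions concrete, the sketch below shows one hypothetical way to encode directional cues as Morse-like vibration pulse patterns; the mapping, the timings, and the vibrate callback are invented for illustration, and evaluating such a mapping is exactly what the study would do.

        import time

        # Hypothetical direction-to-pulse mapping; each tuple is
        # (vibrate_seconds, pause_seconds), loosely Morse-like.
        PATTERNS = {
            "left":  [(0.1, 0.1)],                    # one short pulse
            "right": [(0.1, 0.1), (0.1, 0.1)],        # two short pulses
            "stop":  [(0.6, 0.2)],                    # one long pulse
            "stairs_up": [(0.1, 0.1), (0.4, 0.2)],    # short then long
        }

        def play(direction: str, vibrate) -> None:
            """Drive a vibration motor through a `vibrate(on: bool)` callback,
            e.g. a GPIO pin or a serial command to the handheld mockup."""
            for on_time, off_time in PATTERNS[direction]:
                vibrate(True)
                time.sleep(on_time)
                vibrate(False)
                time.sleep(off_time)

        play("right", vibrate=lambda on: print("motor", "on" if on else "off"))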

    If you are interested in one of these (or similar) topics, please contact:
    Marcello A. Gómez-Maureira, m.a.gomezmaureira@utwente.nl
    This project will be carried out in collaboration with Maximilian A. Friehs.

  • Human-Robot Interaction: graduation topics related to human-robot interaction @ HMI - Enschede, NL

    Children’s Trust in Robot Speech
    Children struggle to communicate their information need when searching for information. The available systems (Google's search engine, or voice assistants like Siri and Alexa) do not support the child in the process, but rather take one statement and provide search results in return. Chatters is creating a conversational agent that allows for multiple turns in the interaction to be able to find more relevant information. However, using a robot to search for information on the internet could also be hazardous, since children tend to trust robots and we do not know how this trust influences children's perception of the provided information. Therefore, we aim to monitor trust in real time during the interaction. When the robot notices that trust is high, it should try to evoke a critical attitude in the child.

    Contact person:
    Khiet Truong
    k.p.truong@utwente.nl

    Spoken Interaction with Conversational Agents and Robots
    Speech technology for conversational agents and robots has taken flight (e.g., Siri, Alexa), but we are not quite there yet. While there are technical challenges to address (e.g., how can an agent display listening behavior such as backchannels "uh-uhm", how can we recognize a user's stance/attitude/intent, how can we express intent without using words, how can an agent build rapport with a user), there are also more human-centered questions, such as how to design such a spoken conversational interaction, how do people actually talk to an agent or robot, and what effect does a certain agent/robot behavior (e.g., robot voice, appearance) have on a child's perception and behavior in a specific context?

    These are some examples of research questions we are interested in. Much more is possible.

    Contact person:
    Khiet Truong
    k.p.truong@utwente.nl

    Teleoperation of Robots
    Teleoperated robots enable humans to be remotely present somewhere else in the world to perform (maintenance) tasks or be socially present. This has many applications and benefits, as operators can apply their expertise without the need to transport themselves over to a possibly remote or dangerous environment. If there are time delays in teleoperated systems, they become very difficult to use. In this project, you will work on making the time delays in these systems less noticeable through machine learning and/or through modelling the behaviour of the operator. This might for example involve computer vision/image segmentation but also doing user studies to determine how humans manipulate objects and how we visually orient ourselves in VR/remote environments.

    Contact person:
    Luc Schoot Uiterkamp
    l.schootuiterkamp@utwente.nl

    People Predicting Robot Behaviour
    Predictability in human-robot interactions is essential for humans to understand the robot, coordinate their actions with it, and improve task performance, safety, and trust in the robot. Moreover, it influences our social perception of the robot. When the interaction pattern is designed beforehand and is fixed, we can design it to be highly predictable, for instance by having a few types of robot actions without variations. However, when the interaction pattern is not predetermined, unpredictable robot actions are more likely to happen, for instance when robot motion trajectories are generated through kinematics. When people collaborating with the robot are unable to predict the robot's behaviour, they might no longer trust the robot or safely coordinate their actions with it. Example projects on this topic include developing novel strategies or behaviours to mitigate unpredicted robot behaviour, developing a model that a robot can use to predict what the person interacting with it predicts the robot will do, investigating the effects of unpredictability on human-robot interaction, or investigating how updating and changing robot behaviour (making it more unpredictable) influences human-robot collaboration.

    Contact person:
    Bob Schadenberg
    b.r.schadenberg@utwente.nl

    Human-robot communication for hospital robots
    Human-robot interaction is a dynamic and diverse interdisciplinary field that brings together the knowledge and expertise of various disciplines, including psychology, engineering, design, sociology, and philosophy, to improve the social interactions between robots and humans. If you are interested in researching novel approaches to enhance the interaction between robots and humans in a healthcare environment, investigating the societal impacts of robotic technologies, or delving into innovative methods for designing robotic communication (using semantic free utterances, motions, or the effect of appearance), you can reach out to:

    Contact person:
    Hideki Garcia Goo
    h.garciagoo@utwente.nl

  • Measuring conversation quality in customer-agent interactions @ HMI - Enschede, NL

    Together with Merlinq (https://www.merlinq.net/), we are investigating how to measure conversation quality in customer-agent interactions. What makes a conversation a good conversation?

    Previous research has found that interlocutors find a conversation enjoyable or good when they are "in sync", when there is rapport, and when interlocutors show empathy. Our goal is to make these indicators measurable in customer-agent interactions by looking at verbal and nonverbal aspects of speech (i.e., word usage, prosody), as well as by investigating the potential of physiological measures. Monitoring conversation quality can be useful for contact centers that aim to maximize customer satisfaction, as well as for training purposes. When conversation quality can be measured in real time, an intelligent agent can advise the customer service agent to take certain actions to "be more in sync" with the customer.

    We are looking for students who are interested in making conversation quality measurable. Possible assignments are diverse and can involve data collection and/or feature engineering of vocal and physiological parameters with the aim of measuring conversation quality.
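
    As one possible starting point for such feature engineering, the sketch below computes a crude "in sync" proxy: the correlation between the two interlocutors' pitch contours, extracted with librosa's YIN implementation. A real analysis would need turn alignment, voicing detection, and validated synchrony measures; the sampling rate, pitch range, and file names here are assumptions.

        import numpy as np
        import librosa

        def pitch_sync(file_a: str, file_b: str, sr: int = 16000) -> float:
            """Crude synchrony proxy: correlation between the two speakers'
            frame-level pitch contours over the conversation."""
            def contour(path: str) -> np.ndarray:
                y, _ = librosa.load(path, sr=sr)
                return librosa.yin(y, fmin=75, fmax=400, sr=sr)
            a, b = contour(file_a), contour(file_b)
            n = min(len(a), len(b))  # contours may differ slightly in length
            return float(np.corrcoef(a[:n], b[:n])[0, 1])

        # score = pitch_sync("agent_channel.wav", "customer_channel.wav")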

    More information can also be found here (sorry, in Dutch):  

    https://www.conversatiekwaliteit.nl/

    Contact:
    Dr. Khiet Truong k.p.truong@utwente.nl,
    Dr. Arjan van Hessen a.j.vanhessen@utwente.nl

  • Whirrs and Words: Discovering Robots' Voices @ HMI - Enschede, NL

    What do robots currently sound like? Despite the existence of robot databases containing images and videos of commercial and research robots (https://www.abotdatabase.info/collection, https://robotsguide.com), none of these include the sounds that these robots use to communicate. Having this information can further the design of robot sounds. Your task will be to collect sounds and other robot characteristics of existing robots in research, media, and industry. Using this collection, you will be looking at possible relations between the robot's voice and some of the robot's characteristics to inform robot voice design.

    If you are interested in this topic feel free to contact Hideki Garcia Goo (h.garciagoo@utwente.nl) or Khiet Truong (k.p.truong@utwente.nl).

  • Voice, Face, and Theater performance @ HMI - Enschede, NL

    Voice, Face, and Theater performance
    in connection to Jonathan Reus, Artist in Residence 2023

    Contact: info@JonathanReus.com, d.reidsma@utwente.nl

    How can a vocal performer, such as an opera singer or a spoken word artist, move fluidly from their biological voice to an artificial one? How can performers inhabit multiple vocal identities simultaneously, or become completely without a stable persona? How can wearable robotics, such as light-weight robotic masks, be used as part of theatrical costumes to further distort the identity of the performer? And how can all of these approaches lead to a sense of uncanny perception and sensory delight in the audience? I will be artist in residence at the UT throughout 2023 exploring these questions, with the goal of creating new live performances. We will be building expressive and performative artificial voice models, real-time voice manipulation systems, and wearable performance technologies such as sensor-skins for controlling voice, or robotic masks. In addition, we will consider how to create voice datasets for artistic research into expressive, performative voice AI, and hopefully release an open vocal dataset ourselves as part of the residency.

    All of the project topics below are within my expertise and related to the work I will be doing throughout 2023, and I would be very happy to help with students who are interested in researching the same topics as part of a MSc or BSc project. All the topics are very broad and can be approached with a focus on whatever the student's background is. For example, a student with expertise in computer science might focus on developing an expressive and controllable artificial voice model, a student with knowledge of design methods might focus on how to design a wearable voice control interface for a singer, while another student might focus on studying the performer's perception of embodiment when using an artificial voice.

    You can find a bit more information about Jonathan online at jchai.me

    Detailed Project Topics
    Below are some potential projects for MSc and BSc students which could be guided with the help of Jonathan and contribute to his artist residency at UT through 2023.

    Expressive, Controllable AI Voice for the Arts
    Most work on voice synthesis focuses on intelligible speech, with "expressivity" mostly meaning the ability to speak with different emotions; however, "emotion" is not enough for artistic use: the human voice can do so much more. This project would explore how to develop, or modify existing, machine learning text-to-speech models to be able to reproduce a wider range of human vocal expression beyond simple emotional categories, for example by focusing on control over paralinguistic parameters. This topic would be ideal for students with a strong background in machine learning and computer science who might want to develop models, who want to explore new ways of interacting with existing voice models, or who want to do research into the nature of the non-verbal voice.

    Open Datasets for Artistic Voice Research
    This project is "the other side of the coin" to the one above: addressing how to create open voice datasets useful for training and testing artistic voice models. While there are a few publicly available speech datasets out there, they are oriented to tasks such as speech synthesis and recognition, while in artistic practice the human voice can do much more than simply speak. In fact, we don't yet have a good idea what would need to be in a voice dataset for artistic use! A student taking on this topic could try to answer this question - by creating a new dataset** that fits their hypothesis of what notations and vocalisations an expressive voice dataset should include. A student with a computer science background might also want to explore ways of repurposing existing voice datasets, for example by extracting artistically useful features such as pitch, articulation and energy.

    Designing Wearable Interfaces for Voice Performers
    Vocal performers are usually in motion, gesturing their hands while telling a story, moving fluidly across a stage, or standing in an ensemble. Standard electronic music controllers are usually devices that tie you to a table, and simply do not fit the mobility and nuance needed by a vocalist. This project will investigate the creation of wearable, lightweight and natural-feeling interfaces for electronic voice performance that explore ways in which sensory technologies can become an intimate part of the vocalist's performative expression. Techniques that could be explored include: wearable "sensor skins" that respond to nuances of finger movement and touch, or wearable "sensor masks" that allow the vocalist to control synthetic voice by touching or manipulating the face.

    Embodied Ownership of AI Voices
    From vocal deepfakes to real-time voice skins, artificial voice synthesis is at an uncanny state of believability, opening up a unique frontier for artistic works exploring vocal avatars, personas, and embodied ownership of other voices. This research project would explore the perception of such artificial voice phenomena from the perspective of the audience and/or performer, asking questions such as: What makes an artificial voice feel like it "belongs" to "me"? Or, when watching a performer working with artificial voice, for example through lip syncing or computer voice transformation, what makes it seem like that voice "belongs" to that performer? These are psychological, cognitive, and perceptual questions that can be addressed in many ways. The student may also wish to investigate more emotional or poetic aspects of artificial voice ownership, such as imagining how the change in identity of embodying a new voice persona can empower individuals.

    Watermarks for Accountable Voice Data
    It has become the norm that spectacular, high-profile AI projects depend on data obtained under questionable circumstances from individuals: for example, through large-scale internet scraping, or repurposing the personal data of users. The attitude of these initiatives seems to echo a certain Silicon Valley ethic of "move fast and break things", or "ask forgiveness rather than ask permission". This research project involves addressing the lack of accountability and ethical data use in voice data from the ground up: by re-inventing the file formats used to encode voice data. The student will investigate ways of encoding the myriad wishes of voice data owners as an irremovable watermark in the data itself, using techniques like, for example, direct data-to-audio encoding and embedding schemes. The goal would be to create ways of embedding metadata describing fair use and user wishes into voice recordings: wishes that are extremely difficult, or impossible, to remove from the audio itself.
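
    To illustrate the basic idea of carrying usage metadata inside the samples themselves, the sketch below embeds a rights string into the least significant bits of 16-bit PCM audio. Note that LSB marks are trivially removable; making the watermark robust or irremovable (e.g., via spread-spectrum embedding) is exactly the open research problem, and the license string is invented for the example.

        import numpy as np

        def embed_lsb(samples: np.ndarray, message: bytes) -> np.ndarray:
            """Embed message bits in the least significant bits of 16-bit PCM.
            NOTE: LSB marks are trivially removable; robust embedding
            (e.g. spread-spectrum) is the actual research problem here."""
            bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
            out = samples.copy()
            out[: len(bits)] = (out[: len(bits)] & ~1) | bits
            return out

        def extract_lsb(samples: np.ndarray, n_bytes: int) -> bytes:
            bits = (samples[: n_bytes * 8] & 1).astype(np.uint8)
            return np.packbits(bits).tobytes()

        audio = np.random.randint(-2**15, 2**15, 48000, dtype=np.int16)
        msg = b"license=CC-BY-NC; no-model-training"  # invented rights string
        assert extract_lsb(embed_lsb(audio, msg), len(msg)) == msg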

    **Note: For all voice-related topics the student has the opportunity to collaborate with the conductor of the student choir Musilon, voice actors within the student theatrical community, and vocalists who are part of the artist residency in order to do their research or create their datasets. Part of creating datasets will also involve research into the steps that must be taken in order to ethically open source a dataset containing the voice recordings of individuals.

    Designing Wearable Robotics that Challenge Fixed Identities
    Masks have been used across human cultures for thousands of years as a way to inhabit (or be inhabited by) the identity of another. However, what if a mask was not made to assume a single identity, but multiple ones? This research project investigates the design of wearable masks that embody such an "unstable" identity. The masks themselves could be created in a number of ways: for example, through light-weight mechatronics, tensegrity-inspired mechanical mechanisms, soft robotics or as a form of costume design. The student may also want to focus on more perceptual research into the psychology of how humans perceive faces, and what rearrangements of facial structures lead to strange and uncanny experiences in an audience.

    Designing Real-time Facial Avatars / Facial Puppets
    This research project investigates development of real-time facial avatar technologies for use in the performing arts. The main question here is how to remotely control, or "puppeteer" mask-like creations such as those described in the previous topic. The performing arts have specific unique requirements for facial control. Tracking interfaces should be stable, lightweight, portable and even wearable. It may also be that the student is more interested in controlling virtual facial avatars (e.g. which could be projected during performances or appear in VR). Even if displayed on a screen, the projected avatars must be responsive and expressive, and be interoperable with different softwares used by creative practitioners in the digital arts.

  • A rich research environment for multi-person rowing in VR @ HMI - Enschede, NL

    Interaction Technology has great potential for sports training and performance. For example, Wavelight technology paces runners during track races; VR can be used to train in rowing or in soccer; amateur runners often make use of smart watches and sports trackers; and systems such as FitLight are used for reaction training in various sports. In this project we work with the combination of Virtual Reality, rowing on ergometers, and various sensors to measure the rower's actions and performance.

    Context

    Recent studies in sports HCI have illustrated that athletes and coaches use, and are open to further use of, virtual reality (VR) in training. The advantages of using VR in sports training can be immense, especially in skill development and coaching, as it can simulate real-life environments while being perfectly adaptable and systematically configurable. The proposed assignment is part of the "Rowing Reimagined" project, jointly carried out by the UT and the VU, in which a research platform is being developed for multi-person rowing in VR using ergometers. On the one hand, this platform aims to offer a diversity of VR environments, tasks, and feedback for novel forms of training. On the other hand, the versatile setup can be used for systematic fundamental research into the conditions and determinants of performance in rowing.

    For example, by introducing an opponent boat into the virtual environment and systematically varying the parameters of its "overtaking behavior", we can do fundamental research into the effects of stressors on the rowing team's performance, or we can use the same opponent boat models to offer a novel setting in which to train athletes to cope with such stressors. An unlimited number of other variables could similarly be explored; a more complete list can be found at the end of this assignment proposal.
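
    As a toy illustration (not part of the existing platform), the sketch below shows how such overtaking behaviour could be parameterised. The parameter names and values are invented; choosing and calibrating them well is exactly the kind of question this assignment addresses.

    ```python
    class OpponentBoat:
        """Opponent position relative to the athlete's boat, in metres."""

        def __init__(self, start_gap_m=30.0, relative_speed=0.5, pass_margin_m=15.0):
            self.relative_speed = relative_speed   # m/s faster while overtaking
            self.pass_margin_m = pass_margin_m     # stop pulling away this far ahead
            self.position_m = -start_gap_m         # starts this far behind

        def update(self, dt):
            if self.position_m < self.pass_margin_m:
                self.position_m += self.relative_speed * dt
            return self.position_m

    boat = OpponentBoat()
    for _ in range(1200):                          # two simulated minutes at 10 Hz
        boat.update(dt=0.1)
    print(f"opponent now {boat.position_m:+.1f} m relative to the athlete")
    ```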

    =This assignment=

    The proposed assignment focuses on realising the richest possible research environment for multi-person rowing in VR, by exploring, developing, and pilot testing multiple features of the platform (such as the above-mentioned opponent boat) that can contribute to novel training technology or to fundamental research. This requires an iterative approach. For example, it is not enough to simply state that "an opponent boat model must be added to enable research into the role of stressors in rowing performance". What exactly should be the speed of overtaking? What gives the most realistically stressing effect on the athlete? At what distance from the athlete's boat does the stressor effect take place? Is this individually determined? Do we need a calibration phase to personalise the overtaking behaviour of the opponent boat to the athlete currently using the system?

    Such questions must be explored on the basis of literature and expert input, and a specific design of the platform feature should be developed iteratively to see how it works out in practice. In the assignment, multiple platform features will be addressed one by one, making each one fit to contribute to future research. After a brief pilot study to show the potential of a new feature, the next feature will be taken up and worked out, based on a grounded idea of which feature will be most useful for relevant research with the platform.

    =The current platform=

    The assignment starts with our existing platform which is still being extended. The current platform consists of a technological setup with two ergometers, a social VR setup in which two rowers can be virtually present in a single boat, a few initial measurement components to gather information about the power / effort of the rowers, and some initial virtual elements in the environment. Each of these components may be enhanced and extended as part of the assignment, or completely new components can be added.

    Inventory of some facets that may be chosen as part of this assignment

    - The "experience of realism" of the resulting rowing activity. Obviously, rowing on an ergometer in VR is not the same as rowing in a boat on water. This "unreality" might affect the transfer of improved skills from the virtual setting to reality. What features would intuitively be considered important for the experience of realism? What do rowers feel is important for the experience of realism? When is a certain feature considered minimally adequate regarding realism? This may concern environmental factors, the movement of the boat, the representation of the other athlete in the setup, the sound, et cetera.

    - Rich opponent behaviours that are realistic and meaningful, and that potentially impact the objective or subjective rowing activity. 

    - Measuring the sense of stress and tension during the rowing activity through a combination of objective (biophysiological?) sensors and in-action momentary self-assessment.

    - Measuring power, effort, and exertion (objectively or subjectively; posthoc or in-action).

    - Measuring and influencing sociality and perceived togetherness in rowing. In the future we would like to experiment with synchrony and the perception of togetherness in rowing. But how can we measure objective and subjective togetherness, e.g. in the form of social unity or flow, or through objective measures of synchronisation? What features may contribute to, or detract from, these measures? (A small sketch of one such measure follows this list.)

    - Multiple feedback mechanisms about the joint rowing, and their possible parameters. These can use many modalities: sound, haptics, et cetera.

    - And many more.
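
    Following up on the togetherness item above, here is a small sketch of one possible objective synchrony measure: the lag-zero correlation and best-lag offset between two rowers' stroke signals. The signals here are synthetic sine waves; on the real platform they would come from the ergometers or motion sensors.

    ```python
    import numpy as np

    fs = 50                                          # Hz, assumed sensor rate
    t = np.arange(0, 30, 1 / fs)
    rower_a = np.sin(2 * np.pi * 0.5 * t)            # 30 strokes per minute
    rower_b = np.sin(2 * np.pi * 0.5 * (t - 0.2))    # same rate, 200 ms behind

    xcorr = np.correlate(rower_a - rower_a.mean(),
                         rower_b - rower_b.mean(), mode="full")
    lags = np.arange(-len(t) + 1, len(t))
    best_lag_s = lags[np.argmax(xcorr)] / fs         # negative: rower B trails
    lag_zero_r = np.corrcoef(rower_a, rower_b)[0, 1]

    print(f"best alignment at lag {best_lag_s:+.2f} s, lag-zero r = {lag_zero_r:.2f}")
    ```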

    Contact:
    Dees Postma (d.b.w.postma@utwente.nl)
    Dennis Reidsma (d.reidsma@utwente.nl)
    Armağan Karahanoğlu (a.karahanoglu@utwente.nl)
    Robby van Delden (r.w.vandelden@utwente.nl)

  • Beep beep boop: Semantic-Free Utterances for Social Agents @ HMI - Enschede, NL

    In the field of human-robot interaction, the development of semantic-free utterances (SFU) has been gathering attention. R2-D2 from Star Wars is a good example of an agent that uses SFU to communicate; another example is how the Sims communicate in the video game of the same name. Some advantages of using SFU over natural language are: the expected intelligence of the system decreases to a more realistic level, SFU could be more widely understood (not being bound to any language), and the user becomes the intelligent other in the interaction (less weight on the robot's ability to process information). There are several topics that we can address in this area; some research directions that we are interested in are:

    - The creation of SFU together with performers (improv, opera, and theatre groups, or DJs at the UT). How do the results compare to existing SFU? How does the development of non-semantic speech change depending on who is designing it? How would an opera singer vocalize the emotions they need to convey? How would that differ from how an improv actor does it?

    - While the usefulness of non-semantic speech in multicultural spaces is often mentioned as an advantage, few studies have tested this with participants of different cultural backgrounds. How does culture play a part in how well we understand SFU? Are there any differences in how people of different cultural backgrounds design SFU? Does exposure to Western culture influence how well we understand SFU (created in the Global North)?

    - The design of gender-neutral robots has also gained traction, but voices that are actually perceived as gender-neutral are difficult to design. Are SFU more easily perceived as gender-neutral than natural language? Which types of SFU are best suited to convey gender neutrality? Does the addition of a robot embodiment change this perception?
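
    For any of these directions, a simple way to start prototyping SFU stimuli is to synthesise short pitch-glide "syllables". The sketch below, assuming numpy and scipy, writes one random beep-sequence utterance to a WAV file; all parameter ranges are illustrative.

    ```python
    import numpy as np
    from scipy.io import wavfile

    SR = 22050  # sample rate in Hz

    def beep(f0_start, f0_end, dur):
        """One chirped sine 'syllable' with a smooth amplitude envelope."""
        t = np.linspace(0, dur, int(SR * dur), endpoint=False)
        freq = np.linspace(f0_start, f0_end, t.size)   # linear pitch glide
        phase = 2 * np.pi * np.cumsum(freq) / SR       # integrate frequency
        return np.hanning(t.size) * np.sin(phase)      # envelope avoids clicks

    rng = np.random.default_rng(0)
    syllables = [beep(rng.uniform(400, 1500), rng.uniform(400, 1500),
                      rng.uniform(0.08, 0.25)) for _ in range(8)]
    gap = np.zeros(int(SR * 0.04))
    utterance = np.concatenate([x for s in syllables for x in (s, gap)])
    wavfile.write("sfu_demo.wav", SR, (utterance * 32767).astype(np.int16))
    ```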

    If you are interested in one of these (or similar) topics feel free to contact Hideki Garcia Goo (h.garciagoo@utwente.nl) or Khiet Truong (k.p.truong@utwente.nl).

  • Accessing large-scale knowledge graphs via conversations in virtual environments @ HMI - Enschede, NL

    In the past decades, more and more Cultural Heritage institutions, such as libraries, museums, galleries and archives, have launched large-scale digitisation processes that result in massive digital collections. This not only ensures the long-term preservation of cultural artefacts in digital form, but also allows instant online access to resources that would otherwise require physical presence, and fosters the development of applications like virtual exhibitions and online museums. By embracing Linked Open Data (LOD) principles and Knowledge Graph (KG) technologies, the rich legacy knowledge in these CH collections has been transformed into a form that is shareable, extensible and easily reusable. Relevant entities (e.g. people, places, events), their attributes and their relationships are formally represented using international standards, resulting in knowledge graphs that both humans and machines are able to understand and reason about.

    However, exploring large-scale knowledge graphs is not trivial. Traditional keyword-based search is not ideal for exploring such graph-structured data. A museum visitor may start their exploration from a certain creative work and move on to the different types of entities it is associated with, such as its creator, the place where it was created, or relevant events that happened around the time of its creation. The visitor can follow the links in the knowledge graph to discover what is interesting to them. Thanks to the LOD principles, external knowledge graphs can also be accessed along the way. What is challenging is how to make this huge amount of information accessible to visitors in an appealing and intuitive manner, so that the interaction between visitors and the knowledge graphs becomes meaningful and enjoyable. Such interaction needs to take into account the visitors' cognitive and information-processing capabilities as well as their personal interests and cultural backgrounds.

    At HMI, we are investigating how to access large-scale KGs via natural language conversations in virtual environments, and we welcome students to work on different aspects of this research:

    • Developing a KG-based conversational museum guide that models the user's interests, introduces art objects, answers questions, provides recommendations, etc.
    • User interest modelling
    • Conversational KG-based question answering and recommendation
    • Natural language generation with dynamic subgraph extraction
    • Mixed-initiative dialogue management with KGs
    • Integrating a conversational agent in a virtual reality environment
    • Multi-modal input for user interest detection (including the user's utterances, eye gaze, speech emotion, gestures, etc.) and multi-modal responses in virtual reality (including text, voice, highlights, etc.)
    • Effective and ethical design in collaboration with Humanities researchers and/or Cultural Heritage institutions
    • Effective or inappropriate: using visitors' (cultural?) background / profile to generate personalised narratives
    • Methods for visitor-agent interaction that allow for the collaborative creation of narratives about cultural history → this could include research on community-driven artifact labelling / label correction, object selection, etc.
    • Research on increasing the affective impact of interactive virtual environments (for cultural heritage), with the goal of
      • increasing visitors' a) knowledge, b) sense of responsibility, c) sustained interest, d) other positive outcomes on the topic of inclusivity
      • increasing the interest and participation of groups of people that may not typically feel attracted to or included in museum exhibitions, e.g. children
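
    As a concrete flavour of what "following links in a knowledge graph" looks like in code, the sketch below queries Wikidata, a public LOD endpoint, for paintings by Vermeer using the SPARQLWrapper package. The endpoint, entities and properties are real Wikidata identifiers; the choice of example is ours, not part of the project.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
    SELECT ?painting ?paintingLabel WHERE {
      ?painting wdt:P170 wd:Q41264 ;    # creator: Johannes Vermeer
                wdt:P31  wd:Q3305213 .  # instance of: painting
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 10
    """)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["paintingLabel"]["value"])
    ```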

    Contact: Shenghui Wang (shenghui.wang@utwente.nl)

  • Can I touch you online? - Investigating the interactive touch experience of the “Touch my Touch” art installation @ HMI - Enschede, NL

    We touch the screen of our cell phone more often than we touch our friends. We stroke and 'swipe' our screens in search of a loved one. Meanwhile, in public spaces, we touch each other, and watch each other being touched, less and less. Pandemic regulations have only further increased this physical isolation. "Touch My Touch" (TouchMyTouch.net), designed by artist duo Lancel and Maat, is a critical new composition of face recognition, merging and streaming technologies for a poetic encounter through touching and being touched. TouchMyTouch.net comprises a streaming platform for online touch and an interactive installation built around that platform. The interactive installation will be at the UT for a number of weeks during the second semester of 2022. You can try the online platform with a partner here: https://upprojects.com/projects/touch-my-touch

    This master assignment relates to the physical "Touch My Touch" installation. You will define a research question concerning the touch experience evoked by the art installation. For more information send an email to Judith Weda (j.weda@utwente.nl).

  • Coaching for breathing with haptic stimulation @ HMI - Enschede, NL

    Breathing contributes fundamentally to well-being, both physiologically and psychologically. Accordingly, a number of well-being practices build on breathing techniques, such as yoga, Tai Chi, meditation, the Wim Hof method, and many more. There are also a number of technological products that offer support for breathing, such as the Spire or apps on smart watches.

    In this master project we want to explore the possibilities of supporting breathing through haptic stimulation and feedback. Stimulation can be used to teach breathing patterns; feedback can signal whether breathing is within the intended range. For this work we will focus on vibration motors for haptic stimulation. Relevant questions include: where should vibration motors be positioned, and which stimulation and feedback patterns are comfortable and effective?
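
    A minimal MicroPython-style sketch (e.g. for a Raspberry Pi Pico) of one possible stimulation pattern: vibration intensity ramps up as an inhale cue and down as an exhale cue. The pin, the 4-6 s timing and the ramp shape are assumptions to be tested in the project, not recommendations.

    ```python
    import math
    import time
    from machine import Pin, PWM

    motor = PWM(Pin(15))            # vibration motor driver on GPIO 15 (assumed)
    motor.freq(1000)                # PWM carrier frequency

    INHALE_S, EXHALE_S = 4.0, 6.0   # example breathing pattern

    def ramp(duration_s, rising, steps=50):
        """Ramp the motor intensity up (inhale cue) or down (exhale cue)."""
        for i in range(steps):
            level = i / (steps - 1) if rising else 1.0 - i / (steps - 1)
            smooth = 0.5 - 0.5 * math.cos(math.pi * level)  # gentler than linear
            motor.duty_u16(int(smooth * 65535))
            time.sleep(duration_s / steps)

    while True:
        ramp(INHALE_S, rising=True)
        ramp(EXHALE_S, rising=False)
    ```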

    Contact: Angelika Mader, a.h.mader@utwente.nl

    Supervisors: Angelika Mader, Geke Ludden

  • Mediated social touch through a physical avatar and a wearable @ HMI - Enschede, NL

    The aim of this master project is to develop a fully functioning social haptic device consisting of two main components: a haptic coating for a humanoid robot serving as an avatar, and a wearable haptic vest/suit. The project's workload is divided into four parts: (1) the design and development of a social touch coating for the robot body, (2) the selection or development of haptic sensors to be distributed over the avatar's upper body, (3) the design and development of a pneumatic haptic vest that can be used for both perception experiments and mediated touch, and (4) testing and experimenting with a TactSuit haptic vest (from bHaptics [vibration]). You will find more information on each part below. We intend to have one student working on each part, and we support and encourage collaboration between students within the project.

    Xprize
    The ANA Avatar XPRIZE is an international competition in which teams design and develop their own fully functioning avatar and test it in different scenarios, from maintenance tasks to remote human-human interactions. HMI participates in team i-Botics, which qualified for the semifinals in September 2021. For this competition, we are developing our avatar's ability to support advanced social remote interactions between the remote controller of the avatar and the recipient at the avatar's location. To that end, this master project focuses on one important aspect of social interaction: touch.

    You can find more information on https://avatar.xprize.org/prizes/avatar

    WEAFING
    The WEAFING project is an EU Horizon 2020 project that aims to develop a wearable for social touch made out of electroactive textile. Electroactive textile is woven or knitted with electroactive yarn, which contracts or expands when an electrical current is applied. Depending on the morphology of the textile, different types of haptic sensations can be created on the skin. The current interest is the pressure sensation that the garment could generate.

    At the UT we carry out perception research, which is key to defining the specifications of the electroactive textile. Since the textile is still in development, we use substitute materials to explore and determine the perception parameters of pressure applied to different parts of the body, through psychophysical studies.

    You can find more information on weafing.eu

    Part 1 : Designing a social touch skin for a humanoid robot avatar

    This part of the project concerns the design, production and testing of a humanoid robot avatar's "skin", to be used during social touch interaction between the avatar (piloted by a controller) and a recipient who is actually touching the robot. For this part, we expect the student to carry out a study of different materials and sensors that could be used for the coating, to design the required product, and to test it with a physical robot.

    There are no hard restrictions on the selection of the material; however, some measurement criteria will be clearly specified during the project. We will offer help and collaboration in the search for an adequate material. The selected sensing method should be distributable over the whole upper body of the robot and should be as unobtrusive as possible. There are multiple ways to approach this task; we expect the student to find a viable and efficient solution given the constraints that will be provided during the project, such as the weight, shape or size of the sensors. Some resources are also available at the HMI department, such as the CapSense vest, as a starting point for the investigation.

    We are looking for master students in Interaction Technology and/or Embedded Systems. Help will be provided with sewing and designing the "skin". Experience with sewing, haptic interaction and sensor data analysis is considered a plus.

    For more information, please contact Camille Sallaberry, c.sallaberry@utwente.nl

    Part 2 : Developing a pneumatic haptic vest for human-human remote touch interaction and psychophysical experiments

    For this graduation assignment we are looking for a student to create a haptic (touch) vest with pneumatic actuators. The goal is to use the vest for both psychophysical experiments and mediated touch. The assignment is a collaboration between two projects: the WEAFING project (weafing.eu) and the UT entry for the XPRIZE.

    In the WEAFING project we are developing a textile wearable that can give haptic feedback. To support this, we need to run perception experiments for pressure on the skin, using psychophysical methods to find the parameters of touch perception. A pneumatic vest can help us with these experiments.

    Following these experiments, we can use the vest for mediated touch applications, including mediated social touch. Touch can, for example, be mediated through an avatar representing you at a different location: the use case of the XPRIZE project.

    There are multiple ways to approach the assignment and multiple actuator options, such as McKibben muscles or silicone pockets. For psychophysical experiments it is key to measure the pressure in the actuator and to control that pressure precisely. The vest should fit both men and women, and a range of body types.
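
    A self-contained sketch of the pressure-control idea: a PI controller driving a simulated leaky actuator toward a setpoint. On real hardware the simulated plant would be replaced by a pressure sensor and a proportional valve; all gains and constants are illustrative.

    ```python
    class SimulatedActuator:
        """Stand-in for a real pneumatic pocket: first-order filling with leakage."""

        def __init__(self):
            self.pressure_kpa = 0.0

        def step(self, valve_duty, dt):
            inflow = 80.0 * valve_duty        # kPa/s at a fully open valve
            leak = 0.5 * self.pressure_kpa    # passive leakage
            self.pressure_kpa += (inflow - leak) * dt

    def run_pi(target_kpa=20.0, kp=0.05, ki=0.5, dt=0.01, seconds=3.0):
        act, integral = SimulatedActuator(), 0.0
        for _ in range(int(seconds / dt)):
            error = target_kpa - act.pressure_kpa
            u = kp * error + ki * integral
            duty = min(1.0, max(0.0, u))
            if u == duty:                     # simple anti-windup when saturated
                integral += error * dt
            act.step(duty, dt)
        return act.pressure_kpa

    print(f"pressure after 3 s: {run_pi():.1f} kPa")  # should settle near 20 kPa
    ```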

    We are looking for master students in Interaction Technology or Embedded Systems. We will offer help with sewing and the vest design, but it is key that you have an affinity with making. Experience with sewing is a plus.

    There is a body of work to build on: a project on pneumatic actuator control, a sleeve with McKibben muscles, and silicone actuators.

    For more information please contact Judith Weda, j.weda@utwente.nl

    Part 3 : Vibratory Suit for human-human remote touch interaction

    For this part of the project, we are looking for a student who will investigate the use and flexibility of vibration motors to reproduce different kinds of social contact/touch during social communication, in the context of an interaction between a robot avatar and a human. The student will also be expected to evaluate existing vibratory haptic suits, or to develop one, for remote social touch experiences.

    The experience will require that the whole upper body of the avatar can be touched, with the exception of the hands. We therefore expect the haptic suit to be designed as a top with long sleeves. Some measurement criteria will also be clearly specified during the project.

    As a starting point, the student may evaluate the TactSuit from bHaptics. The aim should be to test the haptic vest/suit for social touch and determine its usability compared to other possible suits.

    During the project, we also encourage the student to collaborate closely with the student working on the "pneumatic vest" project, as both students may have to evaluate the social touch experience of both products.

    We are looking for master students in Interaction Technology or Embedded Systems. Experience with vibration actuators and experience in social haptics are considered a plus.

    For more information, please contact Camille Sallaberry, c.sallaberry@utwente.nl

  • Spoken Interaction with Conversational Agents and Robots @ HMI - Enschede, NL

    Speech technology for conversational agents and robots has taken flight (e.g., Siri, Alexa), but we are not quite there yet. There are technical challenges to address: for example, how can an agent display listening behavior such as backchannels ("uh-uhm"), how can we recognize a user's stance, attitude or intent, how can we express intent without using words, and how can an agent build rapport with a user? There are also more human-centered questions: how should such a spoken conversational interaction be designed, how do people actually talk to an agent or robot, and what effect does a certain agent/robot behavior (e.g., robot voice, appearance) have on, for instance, a child's perception and behavior in a specific context?

    These are some examples of research questions we are interested in. Much more is possible. Are you also interested? Khiet Truong and Ella Velner can tell you more.

     Contact: Khiet Truong, k.p.truong@utwente.nl

  • Automatic Laughter analysis in Human-Computer Interaction @ HMI - Enschede, NL

    Laughter analysis is currently a hot topic in Human-Computer Interaction. Computer scientists study how humans communicate through laughter and how this understanding can be implemented in Automatic Laughter Detection and Automatic Laughter Synthesis. Such tools would be very helpful in fields like Human-Computer/Robot Interaction, where voice assistants like Alexa and Google Assistant might understand more complex natural communication by interpreting social signals such as laughter, and by generating well-timed, appropriate and realistic laughter responses. Another application is multimedia laughter retrieval: automatically extracting laughter occurrences from large amounts of video and audio data, opening the way for building large laughter datasets. As a final example, laughter detection could also be used to study group behavior or for automatic person identification.

    However, there are several challenges in laughter research that need to be considered when aiming for automatic laughter analysis. For one, annotating laughter is a much-discussed challenge in the field: there are ongoing debates on how laughter should be segmented and labeled. Are there different kinds of laughs for different situations, and how do we label these? Do people have specific laughter profiles? What role does context play in laughter detection? What could a real-time implementation in a conversational agent look like, and for what purpose? Students can choose to take a more human-centered or a more technology-oriented direction.
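
    As a flavour of the technology-oriented direction, here is a deliberately small sketch of a laughter-versus-speech classifier over MFCC summary features, assuming librosa and scikit-learn; the WAV paths are placeholders for a labelled clip collection.

    ```python
    import numpy as np
    import librosa
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def clip_features(path):
        """Summarise a clip by the mean and std of its MFCCs."""
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    laugh_clips = ["laugh_01.wav", "laugh_02.wav"]      # placeholder paths
    speech_clips = ["speech_01.wav", "speech_02.wav"]   # placeholder paths

    X = np.stack([clip_features(p) for p in laugh_clips + speech_clips])
    y = np.array([1] * len(laugh_clips) + [0] * len(speech_clips))
    print(cross_val_score(SVC(kernel="rbf"), X, y, cv=2))
    ```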

    This makes automatic laughter analysis an interesting goal. Students are invited to explore the topic and come up with an interesting question or challenge they want to address. You will be supervised by assistant professor Khiet P. Truong, an expert in laughter research and social signal processing (SSP), and by PhD student Michel-Pierre Jansen, whose PhD work revolves around human laughter recognition and SSP.

     Contact: Khiet Truong, k.p.truong@utwente.nl

  • Analysis of depression in speech @ HMI - Enschede, NL in cooperation with GGNet Apeldoorn, NL

    The NESDO study (Nederlandse Studie naar Depressie bij Ouderen, the Netherlands Study of Depression in Older persons) was a large longitudinal Dutch study into depression in older adults (>60 years old). Older adults with and without depression were followed over a period of six years. Measurements included questionnaires, a medical examination and cognitive tests, and information was gathered about mental health outcomes and demographic, psychosocial and cognitive determinants. Some of these measurements were taken in face-to-face assessments. After the baseline measurement, face-to-face assessments were held after 2 and 6 years.

    Currently, we have a few audio recordings available from the 6-year measurement, from depressed and non-depressed older persons. We are looking for a student (preferably with knowledge of Dutch) who is interested in performing speech analyses on these recordings, with the eventual goal of detecting depression in speech automatically.
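
    A minimal sketch (not the project's actual pipeline) of the kind of prosodic features often examined in this literature, assuming librosa; the file path is a placeholder.

    ```python
    import numpy as np
    import librosa

    y, sr = librosa.load("interview_segment.wav", sr=16000)   # placeholder path
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)

    pitch_mean = np.nanmean(f0)
    pitch_sd = np.nanstd(f0)             # reduced F0 variability is a reported marker
    pause_ratio = 1.0 - np.mean(voiced)  # rough proxy for the amount of pausing

    print(f"F0 {pitch_mean:.1f} ± {pitch_sd:.1f} Hz, pause ratio {pause_ratio:.2f}")
    ```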

    This work will be carried out in collaboration with Dr. Paul Naarding, GGNet Apeldoorn.

    Contact: Khiet Truong, k.p.truong@utwente.nl

    Reading material:
    - https://nesdo.onderzoek.io/

    - https://nesdo.onderzoek.io/wp-content/uploads/2016/08/Comijs-et-al-2011_design-NESDO_incl-erratum.pdf

    - Cummins, N., Scherer, S., Krajewski, J., Schnieder, S., Epps, J., & Quatieri, T. F. (2015). A review of depression and suicide risk assessment using speech analysis. Speech Communication, 71, 10-49.

    - Low, D. M., Bentley, K. H., & Ghosh, S. S. (2020). Automated assessment of psychiatric disorders using speech: A systematic review. Laryngoscope Investigative Otolaryngology, 5(1), 96-116.

    - Cummins, N., Matcham, F., Klapper, J., & Schuller, B. (2020). Artificial intelligence to aid the detection of mood disorders. In Artificial Intelligence in Precision Health (pp. 231-255). Academic Press.

  • Master’s Assignments: Design of Smart Objects for Hand Rehabilitation after Stroke @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL

    Stroke impacts many people and is one of the leading causes of death and disability worldwide [1]–[3] and in the Netherlands [4], [5]. The predicted acceleration of population ageing is expected to raise the absolute number of stroke survivors who need care [7]. 80% of all stroke patients suffer from function loss, need professional caregivers [8], [9], and experience lower quality of life due to their limited ability to participate in social activities, work and daily activities [10], [11].

    The hand is the highly functional endpoint of the human arm, enabling a vast variety of daily activities related to high quality of life [12]. Only 12% of stroke patients recover arm and hand function in the first 6 months [13]. For the rest, the limited ability to use their hand has financial and psychological impact on them and their families, as it limits the execution of daily activities [14]. A treatment with substantial evidence for its effectiveness is Constraint-Induced Movement Therapy (CIMT) [15]. CIMT usually employs intensive sessions focused on task-specific exercises, combined with constraining the unaffected hand, forcing patients to use their affected hand. CIMT relies on the principle of 'use it or lose it' [16] and requires patients to use their affected hand.

    So far, attempts at creating effective home training methods have focused on the direct translation of clinical exercises to home training, by designing them to be executed regardless of the patient's location [17]. Monitoring with smart objects [18]–[21] compensates for the lack of direct supervision, and gaming and virtual reality elements have been added to make training more challenging [22]. Such methods assume that patients are motivated, able and willing to clear time in their schedule to engage in training, and/or to sit down at a specific location in their house to execute it. We need a new method that applies this principle in a more flexible way, by engaging people in clinically meaningful activities within their daily routine. This way, patients will seamlessly perform functional training activities at a much higher dose than can be achieved in clinics.

    Our key objective is to develop a new method using smart objects in which training exercises will be seamlessly integrated into the daily routine of a patient at home.

    This method will use the performance on these activities as a functional training set over the day, leading to improved hand function and thereby to the motivation to perform the activities again in the future [23]–[25]. Patients will not have to schedule their training; the exercise will be part of their regular daily activities. We will do this by investigating ways of transferring clinical exercises to a home setting using smart objects. Smart objects can be integrated into the daily activities of patients and trigger (by design) a certain user behaviour. The focus in our proposal is for these objects to go beyond simple monitoring [18]–[21] and to create a stimulating environment in which people feel invited to train and intrinsically motivated to perform the task again in the future. Think of a smart toothbrush that is designed to promote the use of the affected hand and only works when used with that hand! Fundamental research into the transferability of clinical hand rehabilitation to a smart-object, home-based setting is needed to theoretically underpin our method. Using smart objects and artificial intelligence, personalised health will become more accessible, and the abundance of data will give future clinicians more flexibility and overall control of the rehabilitation process.
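
    Purely to make the toothbrush example concrete, here is a hypothetical sketch in which the object enables itself only when a BLE wristband worn on the affected wrist is detected nearby, using the bleak package; the device address and RSSI threshold are invented.

    ```python
    import asyncio
    from bleak import BleakScanner

    AFFECTED_WRISTBAND = "AA:BB:CC:DD:EE:FF"   # placeholder BLE address
    RSSI_NEAR_DBM = -55                        # crude "in the same hand" heuristic

    async def affected_hand_nearby() -> bool:
        found = await BleakScanner.discover(timeout=2.0, return_adv=True)
        return any(device.address == AFFECTED_WRISTBAND and adv.rssi > RSSI_NEAR_DBM
                   for device, adv in found.values())

    async def main():
        enabled = await affected_hand_nearby()
        print("toothbrush enabled" if enabled else "please use your affected hand!")

    asyncio.run(main())
    ```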

    In this assignment, the master's student is expected to:

    1. Review literature on existing technologies (sensors, actuators, AI, etc.) of smart objects for rehabilitation to identify gaps/opportunities

    2. Specify the requirements for design of smart daily objects that can drive seamless rehabilitation with the use of technology

    3. Design and validate a product concept in a co-design manner including clinicians, users and developers

    What do we offer?

    We offer an interdisciplinary network of researchers who are experienced in, among other things, hand rehabilitation and rehabilitation technology (dr. ir. Kostas Nizamis-DPM), artificial intelligence, smart technology and stroke rehabilitation (dr. ir. Juliet A.M. Haarman-HMI), and behaviour change and design research (dr. Armağan Karahanoğlu-IxD). Additionally, the student will collaborate closely with clinicians from Roessingh Research & Development (RRD), who aspire to be the end users of the product.

    Bibliography

    [1] S. S. Virani et al., “Heart Disease and Stroke Statistics—2020 Update,” Circulation, vol. 141, no. 9, Mar. 2020.

    [2] S. Sennfält, B. Norrving, J. Petersson, and T. Ullberg, “Long-Term Survival and Function After Stroke,” Stroke, 2019.

    [3] E. R. Coleman et al., “Early Rehabilitation After Stroke: a Narrative Review,” Current Atherosclerosis Reports. 2017.

    [4] “StatLine.” [Online]. Available: https://opendata.cbs.nl/statline/#/CBS/en/. [Accessed: 21-Apr-2020].

    [5] C. M. Koolhaas et al., “Physical activity and cause-specific mortality: The Rotterdam study,” Int. J. Epidemiol., 2018.

    [6] R. Waziry et al., “Time Trends in Survival Following First Hemorrhagic or Ischemic Stroke Between 1991 and 2015 in the Rotterdam Study,” Stroke, 2020.

    [7] A. G. Thrift et al., “Global stroke statistics,” International Journal of Stroke. 2017.

    [8] W. Pont et al., “Caregiver burden after stroke: changes over time?,” Disabil. Rehabil., 2020.

    [9] P. Langhorne, F. Coupar, and A. Pollock, “Motor recovery after stroke: a systematic review,” The Lancet Neurology. 2009.

    [10] M. J. M. Ramos-Lima, I. de C. Brasileiro, T. L. de Lima, and P. Braga-Neto, “Quality of life after stroke: Impact of clinical and sociodemographic factors,” Clinics, 2018.

    [11] Q. Chen, C. Cao, L. Gong, and Y. Zhang, “Health related quality of life in stroke patients and risk factors associated with patients for return to work,” Medicine (Baltimore)., vol. 98, no. 16, p. e15130, Apr. 2019.

    [12] R. Morris and I. Q. Whishaw, “Arm and hand movement: Current knowledge and future perspective,” Frontiers in Neurology, vol. 6, no. FEB, 2015.

    [13] G. Kwakkel, B. J. Kollen, J. V. Van der Grond, and A. J. H. Prevo, “Probability of regaining dexterity in the flaccid upper limb: Impact of severity of paresis and time since onset in acute stroke,” Stroke, 2003.

    [14] J. E. Harris and J. J. Eng, “Paretic Upper-Limb Strength Best Explains Arm Activity in People With Stroke,” Phys. Ther., 2007.

    [15] G. Kwakkel, J. M. Veerbeek, E. E. H. van Wegen, and S. L. Wolf, “Constraint-induced movement therapy after stroke,” The Lancet Neurology. 2015.

    [16] Y. Hidaka, C. E. Han, S. L. Wolf, C. J. Winstein, and N. Schweighofer, “Use it and improve it or lose it: Interactions between arm function and use in humans post-stroke,” PLoS Comput. Biol., vol. 8, no. 2, 2012.

    [17] Y. Levanon, “The advantages and disadvantages of using high technology in hand rehabilitation,” Journal of Hand Therapy. 2013.

    [18] M. Bobin, M. Boukallel, M. Anastassova, and M. Ammi, “Smart objects for upper limb monitoring of stroke patients during rehabilitation sessions,” 2017.

    [19] M. Bobin, F. Bimbard, and M. Boukallel, “Smart Health SpECTRUM: Smart ECosystem for sTRoke patient’s Upper limbs Monitoring,” Smart Health, vol. 13, p. 100066, 2019.

    [20] M. Bobin, H. Amroun, M. Boukallel, and M. Anastassova, “Smart Cup to Monitor Stroke Patients Activities during Everyday Life,” in Proc. 2018 IEEE Int. Conf. on Internet of Things, Green Computing and Communications, Cyber-Physical-Social Computing, and Smart Data, pp. 189–195, 2018.

    [21] G. Yang, J. I. A. Deng, G. Pang, H. A. O. Zhang, and J. Li, “An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning,” IEEE J. Transl. Eng. Heal. Med., vol. 6, no. May, pp. 1–10, 2018.

    [22] L. Pesonen, L. Otieno, L. Ezema, and D. Benewaa, “Virtual Reality in rehabilitation: a user perspective,” pp. 1–8, 2017.

    [23] A. L. Van Ommeren et al., “The Effect of Prolonged Use of a Wearable Soft-Robotic Glove Post Stroke - A Proof-of-Principle,” in Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, 2018.

    [24] G. B. Prange-Lasonder, B. Radder, A. I. R. Kottink, A. Melendez-Calderon, J. H. Buurke, and J. S. Rietman, “Applying a soft-robotic glove as assistive device and training tool with games to support hand function after stroke: Preliminary results on feasibility and potential clinical impact,” in IEEE International Conference on Rehabilitation Robotics, 2017.

    [25] B. Radder, “The Wearable Hand Robot: Supporting Impaired Hand Function in Activities of Daily Living and Rehabilitation,” University of Twente, Enschede, 2018.

  • Supporting healthy eating @ HMI - Enschede, NL

    Contact: Juliet Haarman (HMI – j.a.m.haarman@utwente.nl), Roelof de Vries (BSS – r.a.j.devries@utwente.nl),

    Project Summary:
    Eating is more than the consumption of food: it is often a social activity. We sit together with friends, family, colleagues and fellow students to connect, share and celebrate aspects of life. Sticking to a personal diet plan can be challenging in these situations; the social discomfort associated with having a different diet than the rest of the group contributes greatly to this. Additionally, it is well known that we unconsciously influence each other while we eat: not just the type of food that we choose, but even the quantity of food that we consume and the speed with which we consume it are affected by our eating partners.

    A variety of assignments focusing on this topic is available; they are specified below.

    The interactive dining table
    The interactive dining table was created to open up the concept of healthy eating in a social context: one where individual table members feel supported, yet still experience a positive group setting. The table is embedded with 199 load cells and 8358 LED lights, located below the table top surface. Machine learning can be applied to the sensor data from the table to detect weight shifts over the course of a meal, identify individual bite sizes, and classify interactions between table members and food items. Simultaneously, the LEDs can be used to provide real-time feedback about eating behavior, give perspective on eating choices, or alter the ambience of the eating experience as a whole. Light interactions can change over time and between settings, depending on the composition of the group at the table or the type of meal being consumed.
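
    As a flavour of the signal processing involved, the sketch below (numpy only) detects "bite" events as step decreases in a synthetic plate-weight signal; window sizes and thresholds would need tuning on real load-cell data from the table.

    ```python
    import numpy as np

    def detect_bites(plate_weight_g, min_drop_g=5.0, lag=5):
        """Indices where the smoothed weight drops by min_drop_g within lag samples."""
        smoothed = np.convolve(plate_weight_g, np.ones(3) / 3, mode="same")
        drop = smoothed[:-lag] - smoothed[lag:]
        events = np.where(drop > min_drop_g)[0]
        # keep only the first index of each run of consecutive detections
        return events[np.diff(events, prepend=-10) > lag]

    rng = np.random.default_rng(1)
    weight = np.full(600, 400.0)                 # 10 minutes at 1 Hz, 400 g serving
    for bite_at in (60, 150, 300, 420):          # four synthetic ~15 g bites
        weight[bite_at:] -= 15.0
    weight += rng.normal(0, 0.5, 600)            # sensor noise

    print(detect_bites(weight))                  # one index near each bite
    ```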

    An indication of the assignments that are possible within this topic:

    -          Adding intelligence to the table. Are we able to track the course of the meal over time? This includes questions such as: How much has been put on the plates of the table members? At what times have they taken a bite? How much are they putting on their forks? Are they going in for seconds? Etc.

    Keywords: Machine learning, sensor fusion and finding signal characteristics

    -          Creating LED interactions that provide the user with feedback about his/her behavior. Which type of interactions work for a variety of target groups? How should interactions be shaped, such that a single subject in a group feels supported? How can we implicitly steer people towards healthy(er) behavior, without coercion, or without putting the emphasis of the meal on it?

    Keywords: HCI, user experience, co-design, Unity, sensor signals

    -          Togetherness around eating, or 'commensality', is a relatively new direction for HCI research. Recent work has distinguished 'digital commensality' (eating together through digital technology) from 'computational commensality' (physical or mediated multimodal interaction around eating). We are currently exploring how technology-mediated commensality can be used to support dietary behavior change, in a broader context than the interactive table alone. How does commensality influence dietary habits, and how can this influence be used and acknowledged in dietary behavior change technology?

    Keywords: behavior change strategies, behavior change technology, commensality, technology-mediated commensality

    Wearables to automatically log eating behavior
    Gaining insight into a person's current eating behavior is a first step towards better health. Professionals still use conventional methods for this, such as logbooks: they ask the user to manually report on their eating behavior throughout the day. Memory and logging biases are common with this method: users simply forget to write down what and when they have been eating. The presence of unknown ingredients in the food, difficulties in estimating portion sizes, and social discomfort while logging also affect the reliability of this method.

    One way to reduce the chance of bias is to use technology that automatically detects food intake events. Accelerometers on the wrist, strain gauges on the jaw, and RIP sensors that monitor the wearer's breathing signal are examples of technologies used to identify intake gestures and chewing/swallowing movements, indicating that the user is eating. Many of these technologies have not yet been tested outside a standardized laboratory environment; their practical validity is therefore often unknown, and should be investigated.
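
    A minimal sketch of the accelerometer route: candidate hand-to-mouth movements appear as peaks in the deviation of the acceleration magnitude from gravity. The data here are synthetic, the sampling rate and thresholds are assumptions, and scipy is assumed for peak picking.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    FS = 50  # Hz, assumed sampling rate

    def intake_candidates(acc_xyz):
        """acc_xyz: (n, 3) accelerometer samples in m/s^2; returns event times in s."""
        deviation = np.abs(np.linalg.norm(acc_xyz, axis=1) - 9.81)
        # clearly above the noise floor, and at least 3 s between bites
        peaks, _ = find_peaks(deviation, height=2.0, distance=3 * FS)
        return peaks / FS

    acc = np.tile([0.0, 0.0, 9.81], (60 * FS, 1))     # one minute of stillness
    acc[10 * FS: 10 * FS + 25, 0] += 8.0              # synthetic gesture at ~10 s
    print(intake_candidates(acc))                     # -> one event near 10 s
    ```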

    -          We are currently investigating several detection methods, individually and in combination. Which methods work well in which types of situations? What data processing steps are needed to get there? We are still measuring at lab level and want to bring this to an in-the-wild setting.

    Keywords: data gathering, user testing, data processing, machine learning

    Cooking skills and kitchen habits
    Eating is often the end point of preparing a meal, and eating healthily often starts with cooking healthily and picking your ingredients. But what if you do not have the advanced cooking skills needed to follow a certain recipe or prepare the ingredients correctly? What if your cooking skills hold you back from trying out new recipes? What if your perception of your eating habits differs from your actual eating habits? For instance, what if your kitchen habits are such that you always grab a bag of chips once you arrive home from work, or you consume snacks far more often than the 'occasionally' you believe it to be?

    By tracking and processing data gathered in and around the kitchen area, we can gain better insight into the eating habits of individuals. This may be an important step in supporting individuals towards healthier behavior.

    -          We are currently investigating the technologies and measurement set-ups needed to support this type of research. What types of sensors should be placed at which locations in the house? How do they communicate with each other? What information should be gathered, and what could serve as a trigger for feedback to the user? In what ways can we support users in choosing different ingredients, trying out new recipes, or breaking unwanted eating habits?

    Keywords: Design of sensor systems, prototyping, data gathering, data processing, user interactions

    Operationalizing behavior change strategies
    Many strategies that try to support or influence us in changing our behaviors are presented to us daily. Just think of the app that wants you to set a 'goal for the week' (or sets it for you), or the website that informs you that 'there is only one left!'. These features are usually based on a theoretical understanding of what influences us. For example, a goal-setting feature can be based on goal-setting theory. However, goal-setting theory argues that for goals to motivate us, they have to be feasible as well as challenging. Another theory used as a theoretical underpinning of features is social comparison theory, which argues that people can be motivated by comparing themselves to others (upward, downward, or lateral comparison). An example implementation is a leaderboard, where you can see how you are doing with respect to a certain statistic. However, is a leaderboard really a good operationalization of social comparison theory? And is an app with a text box where you can set a goal really a good operationalization of goal-setting theory? When these features work, or do not work, what can we learn about the theories they are based on?

    -          These are questions that we would like to see investigated, for the theories and features used as examples above, but also for other features and theories.

    Keywords: behavior change theory, design, behavior change strategies, behavior change technology  

  • CHATBOTS FOR HEALTHCARE – THE eCG FAMILY CLINIC @ HMI - Enschede, NL in cooperation with UMCU - Utrecht, NL

    In collaboration with Universitair Medisch Centrum Utrecht, we will design and develop the eCG family clinic: the electronic Cardiovascular Genetic family clinic, which will facilitate genetic screening among family members. In inherited cardiovascular diseases, first-degree relatives are at 50% risk of inheriting the disease-causing mutation. For these diseases, preventive measures and treatment options are readily available and effective. Relatives may undergo predictive DNA testing to find out whether they carry the mutation, yet more than half of at-risk relatives do not attend genetic counselling and/or cardiac evaluation.

    In order to increase the number of people who attend genetic counselling and/or cardiac evaluation, the eCG family clinic will be developed: an online platform where family members are provided with general information (e.g. on the specific family disease, the mode of inheritance, the pros and cons of genetic testing, and the testing procedure). Users of the platform will be able to interact with a chatbot.
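
    A deliberately minimal sketch of the kind of rule-based dialogue step a first chatbot prototype might take: matching a family member's question to an information-giving intent. The intents and answers are illustrative, not clinical content.

    ```python
    INTENTS = {
        "inheritance": (("inherit", "risk", "chance"),
                        "First-degree relatives have a 50% risk of carrying the "
                        "disease-causing mutation."),
        "testing": (("dna", "test", "procedure"),
                    "Predictive DNA testing shows whether you carry the family's "
                    "mutation; a counsellor guides you through the procedure."),
    }

    def reply(user_utterance: str) -> str:
        text = user_utterance.lower()
        for keywords, answer in INTENTS.values():
            if any(k in text for k in keywords):
                return answer
        return "I don't know that yet. Shall I connect you with a counsellor?"

    print(reply("What is my risk of inheriting this?"))
    ```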

    Within this research project we have student assignments available such as:
    ·       Designing and developing a chatbot and its functions and roles within the platform
    ·       Translating current treatment protocols into prototypes of the chatbot
    ·       Evaluating user experience and user satisfaction

    We are open to alternative assignments or perspectives on the example assignments above.

    Contact person: Randy Klaassen, r.klaassen@utwente.nl

  • Touch Interactions and Haptics @ HMI - Enschede, NL

    In daily life, we use our sense of touch to interact with the world and everything in it. Yet in Human-Computer Interaction, the sense of touch is somewhat underexposed, in particular when compared with the visual and auditory modalities. To advance the use of our sense of touch in HCI, we have defined three broad themes within which several assignments (Capita Selecta, Research Topics, Graduation Projects) can be defined.

    Designing haptic interfaces

    Many devices use basic vibration motors to provide feedback. While such motors are easy to work with and sufficient for certain applications, advances in current manufacturing technologies (e.g. 3D printing) and in electronics provide opportunities for creating new forms of haptic feedback. Innovative forms of haptic feedback may even open up completely new application domains. The challenge for the student is twofold: 1) exploring the opportunities and limitations of (combinations of) materials, textures, and (self-made) actuators, and 2) coming up with potential use cases.

    Multimodal perception of touch

    The experience of haptic feedback is governed not only by what is sensed through the skin, but also by other modalities, in particular the visual modality. VR and AR technologies are prime candidates for studying touch perception, and haptic feedback is even considered 'the holy grail' of VR. Questions surrounding, for instance, body ownership in VR, or visuo-haptic illusions in VR (e.g. elongated arms, a third arm), can be interesting starting points for developing valuable multimodal experiences and for studying the multimodal perception of touch.

    Touch as a social cue

    Research in psychology has shown that social touch (i.e. being touched by another person) can profoundly influence both the toucher and the recipient of a touch (e.g. decreasing stress, motivating, or showing affect). Current technologies for remote communication could potentially be enriched by adding haptic technology that allows for social touch interactions to take place over a distance. In addition, with social robots becoming more commonplace in both research and everyday life, the question arises how we should engage in social touch with such social robots in a beneficial, appropriate and safe manner. Applications of social touch technology can range from applications related to training and coaching, to entertainment, and to providing care and intimacy. Potential projects in this domain could focus on the development of new forms of social touch technology (interactions), and/or on the empirical investigation of the effects such artificial social touch interactions can have on people.

    Contact: Dirk Heylen, d.k.j.heylen@utwente.nl

  • Wearables and tangibles assisting young adults with autism in independent living @ IDE - Enschede, NL

    In this project we seek socially capable and technically skilled students with an interest in technology and health care, to investigate how physical-digital technology may support young adults with autism (age 17-22) in developing independence in daily living. In this project we build further on insights from earlier projects such as Dynamic Balance and MyDayLight.

    (see more about both projects here: http://www.jellevandijk.org/embodiedempowerment/ )

    Your assignment is to engage in participatory design: to conceptualize, prototype and evaluate a new assistive product concept, together with young adults with autism, their parents, and health professionals. You can focus on the design of concepts, the prototyping of concepts, technological work on building an adaptive, flexible platform that can be personalized by each individual user, or on developing the 'co-design' methods we use with young adults with autism, their parents, and the care professionals.

    As a starting point, we are considering the opportunities of wearables with bio-sensing, in combination with ambient intelligent objects in the home (internet-of-things, e.g. interactive light, ambient audio).

    The project forms part of a research collaboration with Karakter, a large youth psychiatric health organization, and various related organizations, which will provide participating families. One goal is to present a proof of concept of a promising assistive device; another is to explore the most suitable participatory design methods in this use context. Depending on your interests, you can focus more on the product or on the method. The ultimate goal of the overall research project is to realize a flexible, adaptive interactive platform that can be tailored to the needs of each individual user; this master project is a first step in that direction.

    Contact: jelle.vandijk@utwente.nl

  • Interactive Surfaces and Tangibles for Creative Storytelling @ HMI - Enschede, NL

    In the research project coBOTnity, a collection of affordable robots (called surface-bots) was developed for use in collaborative creative storytelling. Surface-bots are moving tablets embodying a virtual character. Using a moving tablet allows us to show a digital representation of the character's facial expressions and intentions on screen, while also allowing it to move around in a physical play area.

    The surface-bots offer diverse student assignment opportunities in the form of a Capita Selecta, HMI project, BSc Research or Design project, or MSc thesis research. These assignments can deal with technology development aspects, with empirical studies evaluating the effectiveness of an existing component, or with a balance of both types of work (technology development + evaluation).

    As a sample of what could be done in these assignments (but not limited to this), students could be interested in developing new AI to make the surface-bot more intelligent and responsive in the interactive space, studying interactive storytelling with surface-bots, developing mechanisms to orchestrate multiple surface-bots as a means of expression (e.g. to tell a story), evaluating strategies to make the use of surface-bots more effective, developing and evaluating an application to support users' creativity/learning, etc.

    You can find more information about the coBOTnity project at: https://www.utwente.nl/ewi/hmi/cobotnity/

    Contact: Mariët Theune (m.theune@utwente.nl)

  • Sports, Data, and Interaction: Interaction Technology for Digital-Physical Sports Training @ HMI - Enschede, NL

    The proposed project focuses on new forms of (volleyball and other) sports training. Athletes perform training exercises in a "smart sports hall" that provides high-quality video display across the surface of the playing field and has unobtrusive pressure sensors embedded in the floor, or they use smart sports setups such as immersive VR with a rowing machine. A digital-physical training system offers tailored, interactive exercise activities. Exercises incorporate visual feedback from the trainer as well as feedback given by the system. They can be tailored through a combination of selecting the most fitting exercises and setting the right parameters. This allows the exercises to be adapted in real time in response to the team's behaviour and performance, and to be selected and parameterized to fit the athletes' level of competition and the demands of, e.g., youth sport. To this end, expertise from the domains of embodied gaming and of instruction and pedagogy in sports training is combined.

    Computational models are developed for the automatic management of personalization and adaptation; initial validation of such models is done by repeatedly evaluating versions of the system with athletes of various levels. We collect, and automatically analyse, data from the sensors to build continuous models of the behaviour of individual athletes as well as the team. Based on these data, the trainer or the system can instantly decide to change the ongoing exercises, or provide visual feedback to the team via the displays and other modalities. In extrapolation, we foresee future development towards higher competition performance for teams, building upon the basic principles and systems developed in this project.
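
    As a toy sketch of the simplest possible adaptation model mentioned above, a 1-up/1-down staircase raises the exercise difficulty after a success and lowers it after a failure, keeping athletes near their performance limit. The difficulty scale and step size are illustrative assumptions.

    ```python
    class StaircaseAdapter:
        """1-up/1-down staircase over an abstract difficulty scale."""

        def __init__(self, difficulty=5.0, step=0.5, lo=1.0, hi=10.0):
            self.difficulty, self.step, self.lo, self.hi = difficulty, step, lo, hi

        def update(self, success: bool) -> float:
            delta = self.step if success else -self.step
            self.difficulty = min(self.hi, max(self.lo, self.difficulty + delta))
            return self.difficulty

    adapter = StaircaseAdapter()
    for outcome in [True, True, False, True, False, False]:
        level = adapter.update(outcome)
    print(f"next exercise difficulty: {level}")
    ```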

    Assignments in this project can focus on user studies, automatic behaviour detection from sensors, novel interactive exercise design, and many other topics.

    Contact person: Dees Postma, d.b.w.postma@utwente.nl, Dennis Reidsma, d.reidsma@utwente.nl

  • Dialogue and Natural Language Understanding & Generation for Social and Creative Applications @ HMI - Enschede, NL

    Applications involving the processing and generation of human language have become increasingly capable and popular in recent years; think, for example, of automatic translation and summarization, or of the virtual assistants that are becoming part of everyday life. However, dealing with the social and creative aspects of human language is still challenging. We can ask our virtual assistant to check the weather, set an alarm or play some music, but we cannot have a meaningful conversation with it about what we want to do with our life. We can feed systems big data to automatically generate texts such as business reports, but generating an interesting and suspenseful novel is another story.

    At HMI we are generally open to supervising different kinds of assignments in the area of dialogue and natural language understanding & generation, but we are specifically interested in research aimed at social and creative applications. Some possible assignment topics are given below.

    Conversational agents and social chatbots. The interaction with most current virtual assistants and chatbots (or 'conversational agents') is limited to giving them commands and asking questions. What we want is to develop agents you can have an actual conversation with, and that are interesting to engage with. An important question here is: how can we keep the interaction interesting over a longer period of time? Assignments in this area can include question generation for dialogue (so the agent can show interest in what you are telling it), story generation for dialogue (so the agent can make a relevant contribution to the current conversation topic) and user modeling via dialogue (so the agent can get to know you). The overall goal is to create (virtual) agents that show verbal social behaviours. In the case of embodied agents, such as robots or virtual characters, we are also interested in the accompanying non-verbal social behaviours.

    Affective language processing or generation. Emotions are part of everyday language, but detecting emotions in a text, or having the computer produce emotional language, is still challenging. Assignments in this area include sentiment analysis in texts, for Dutch in particular, and generating emotional language, for example in the context of games (emotional character dialogue or 'flavor text', as explained below) or in the context of automatically generated soccer reports.

    Creative language generation. Here we can think of generating creative language such as puns, jokes and metaphors, but also stories. It is already possible to generate reports from data (for example sports or game-play data), but such reports tend to be boring and factual. How can we give them a more narrative quality, with a nice flow, atmosphere, emotions and maybe even some suspense? Instead of generating non-fiction based on real-world data, another direction is generating fiction. An example is generating so-called 'flavor text' for use in games: text that is not essential to the main game narrative, but that creates a feeling of immersion for the player, such as fictional newspaper articles and headlines or fake social media messages related to the game. Another example of fiction generation is the generation of novel-length stories. Here an important challenge is how to keep the story coherent, which is a lot more difficult for long texts than for short ones.
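
    A minimal template-based sketch of flavor-text generation from game-play data, in the spirit described above; the templates and the event record are invented for illustration.

    ```python
    import random

    TEMPLATES = [
        "BREAKING: {hero} spotted near {place}; locals report {detail}.",
        "Rumours spread through {place} after {hero} was seen amid {detail}.",
    ]

    event = {"hero": "the Grey Courier", "place": "Havenmarkt",
             "detail": "strange lights after midnight"}

    random.seed(7)
    print(random.choice(TEMPLATES).format(**event))
    ```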

    Contact: Mariët Theune (m.theune@utwente.nl)

  • Group activity detection @ HMI - Enschede, NL

    Social interaction is important for both mental and physical health, and participating in group activities encourages social interaction. While there are opportunities to attend a variety of group activities, some individuals prefer solitary activities. This project aims to design an algorithm that extracts patterns of group and solitary activities from GPS (Global Positioning System) and motion sensors, including accelerometer, gyroscope, and magnetometer. The extracted patterns would enable us to detect whether an individual is involved in a group or a solitary activity.

    This project is defined within a larger project, namely the Schoolyard project. In the Schoolyard project, we captured data from pupils in the school playground during breaks via GPS and motion sensors. The collected data will be used to validate the designed algorithm. You need to be creative in designing the method to cover different types of group activities in the playground, including parallel games (e.g., swings), ball games (e.g., football), tag games (e.g., catch and run), etc.

    The research involves steps such as:

    • Literature review
    • Data preparation and identifying benchmark datasets
    • Designing an algorithm to identify the group activity patterns (a sketch of what this could look like follows this list)
    • Validating the results via ground truth, simulated data, or benchmark datasets
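
    As a rough illustration of what the algorithm-design step could involve (not a prescribed method), the sketch below windows a synthetic accelerometer signal, extracts simple statistical features, and trains a standard classifier. All data and labels are placeholders:

    ```python
    # Illustrative sketch: classify fixed-length sensor windows as group vs.
    # solitary activity using simple per-axis statistics as features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, window=100):
        """Split an (N, 3) accelerometer signal into windows and compute
        per-axis mean and standard deviation as features."""
        n = len(acc) // window
        feats = []
        for i in range(n):
            seg = acc[i * window:(i + 1) * window]
            feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        return np.array(feats)

    # Placeholder data standing in for the schoolyard recordings and their
    # annotations (0 = solitary, 1 = group).
    rng = np.random.default_rng(0)
    acc = rng.normal(size=(5000, 3))
    X = window_features(acc)
    y = rng.integers(0, 2, size=len(X))

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```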

    We are looking for candidates that match the following profile:

    • A creative mindset
    • Strong programming skills in Python

    Many recent studies have focused on detecting group activities from videos. However, using video to detect activity is computationally expensive and raises serious privacy concerns. Below is a paper related to this topic, which used motion sensors together with beacons to identify group activity.

    https://www.sciencedirect.com/science/article/pii/S0360132319303348

    For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: m.baratchi@liacs.leidenuniv.nl.

    You will be jointly supervised by Dr. Gwenn Englebienne, Assistant Professor, University of Twente, with external supervision from Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, Ph.D. candidate at Leiden University.

  • A Framework for Longitudinal Influence Measurement between Spatial Features and Social Networks @ HMI - Enschede, NL

    Features of the environment may encourage or discourage social interactions among people. The question is how environmental features influence social participation and how this influence may vary over time. To answer this question, you need to design a framework that combines features of the spatial network with the parameters of the social network, while addressing the longitudinal characteristics of such a combination.

    To the best of our knowledge, no study has been conducted on analyzing the longitudinal influence between social networks and spatial features of the environment.

    This project is defined within a larger project, namely the Schoolyard project. In the Schoolyard project, we observed the behavior of children in the playground using RFID tags and GPS loggers. The RFID data are used to build a social network. The longitudinal influence between the social network and spatial features may be analyzed in three stages: 1) before the renovation, 2) after the renovation, and 3) after adaptation of the playground. The collected data can be used to validate the designed framework.
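
    As a minimal illustration of one building block, the sketch below constructs a per-stage social network from hypothetical RFID co-location events using networkx; the data format is made up and the real schoolyard data will differ:

    ```python
    # Illustrative sketch: build a social network per renovation stage from
    # RFID co-location events and compare a simple network statistic.
    import networkx as nx

    # (child_a, child_b, stage) tuples: stage 1 = before renovation,
    # 2 = after renovation, 3 = after adaptation of the playground.
    colocations = [("A", "B", 1), ("B", "C", 1), ("A", "B", 2),
                   ("A", "C", 2), ("B", "C", 3), ("A", "D", 3)]

    for stage in (1, 2, 3):
        G = nx.Graph([(a, b) for a, b, s in colocations if s == stage])
        print(f"stage {stage}: {G.number_of_edges()} ties, "
              f"density {nx.density(G):.2f}")
    ```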

    We are looking for candidates that match the following profile:

    • Knowledge of network analysis
    • Knowledge of multilevel time series analysis
    • Strong programming skills in Python

    The paper below presents a general framework for measuring the dynamic bidirectional influence between communication content and social networks. The authors used a publication database to construct the social network, and its relationship with the communication content is studied longitudinally.

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.208.4144&rep=rep1&type=pdf

    For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: m.baratchi@liacs.leidenuniv.nl

    You will be jointly supervised by Dr. Shenghui Wang, Assistant Professor, University of Twente, with external supervision from Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, Ph.D. candidate at Leiden University.

  • Human behaviour modelling for Teleoperated Robots @ HMI - Enschede, NL

    Description

    Teleoperated robots enable humans to be remotely present in the world to perform (maintenance) tasks or to be socially present. This has many applications and benefits, as operators can apply their expertise without the need to travel to a possibly remote or dangerous environment. However, if there are time delays in teleoperated systems, for example because of networking issues or physical distance, the systems become very difficult to use. Multiple strategies can be employed to deal with these difficulties. These strategies require interpreting how humans interact with the environment, and this is the field in which you can carry out your study.

    Some examples of specific assignments you can do are

    1. Compare image segmentation (neural network) models to identify how to interact with the environment (e.g., how do I grasp a mug differently from a pen) and what surroundings we are dealing with. This involves a theoretical comparison with (optionally) a practical component in which several models can be tested and compared; a sketch of such a comparison is given after this list.

    2. Set up user studies to examine how people orient themselves and manipulate objects in VR, with the goal of transferring this knowledge to the teleoperated robotics space. This can relate to visual orientation or to object manipulation and for example take the shape of an eye tracking study.

    3. Set up a study with a teleoperated robot to investigate the effects of time delays on several aspects of the interaction. This line requires both research and technical skills.
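
    As an impression of the practical component of assignment 1, here is a minimal, model-agnostic comparison harness based on mean intersection-over-union; the "models" are placeholder functions standing in for real pretrained segmentation networks:

    ```python
    # Illustrative evaluation harness for comparing segmentation models.
    import numpy as np

    def mean_iou(pred, truth, n_classes):
        """Mean intersection-over-union between two integer label masks."""
        scores = []
        for c in range(n_classes):
            inter = np.logical_and(pred == c, truth == c).sum()
            union = np.logical_or(pred == c, truth == c).sum()
            if union > 0:
                scores.append(inter / union)
        return float(np.mean(scores))

    rng = np.random.default_rng(1)
    truth = rng.integers(0, 3, size=(64, 64))   # hypothetical ground-truth mask
    image = truth                               # stand-in for a real input image

    models = {
        "model_a": lambda img: truth,                               # "perfect"
        "model_b": lambda img: rng.integers(0, 3, size=img.shape),  # random
    }

    for name, model in models.items():
        print(name, "mean IoU:", round(mean_iou(model(image), truth, 3), 3))
    ```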

    Other assignments in this field can be discussed based on your skillset.

    Contact: Luc Schoot Uiterkamp l.schootuiterkamp@utwente.nl
    Second supervisor: Gwenn Englebienne g.englebienne@utwente.nl

  • Developing a Touch-Sensitive Ball and Analysing its Data @ HMI - Enschede, NL

    Description

    In this project, we want to extend the touch-sensitive "skin" technology to measure touch on a ball or spherical object such as a football. The skin was developed to make robot shells aware of human touch, and is also used in the Touch-Sensitive Patch in Module 7 of CreaTe, but it lacks some key characteristics that would be required to measure how a ball is handled.

    We can create touch-sensitive surfaces of any 2D shape, and we can make these surfaces stretchable to an extent, but it is not obvious how to create a reliable, touch-sensitive, closed spherical surface. The batteries, sensors, wireless communication devices and electronics for the sensing would need to be inside the sphere, preferably centrally positioned to keep the weight distribution of the ball balanced. Ideally, it should also be easy to charge without opening it up. The electronics need to be robust enough, and protected from external forces, to survive playing with the ball, including resisting being kicked hard. Ideally, the ball should be inflatable, have handling characteristics similar to a "dumb" ball, and be sensitive to impact with a shoe (being kicked) as well as to touch with a hand or arm.

    Initial prototypes need not fulfil all these characteristics, but they are the guidelines for development. An additional challenge, for students interested in data analysis, lies in tracking both the orientation of the ball (pitch, roll, yaw) and the location of the touch, as well as in recognising how the ball is being handled.
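
    For the orientation-tracking challenge, a classic starting point is a complementary filter that fuses gyroscope and accelerometer readings. The sketch below is a minimal version with synthetic signals; yaw would additionally require the magnetometer:

    ```python
    # Illustrative sketch: estimate pitch and roll by blending integrated gyro
    # rates (smooth but drifting) with the accelerometer's gravity direction
    # (absolute but noisy).
    import numpy as np

    def complementary_filter(gyro, acc, dt=0.01, alpha=0.98):
        """gyro: (N, 2) pitch/roll rates in rad/s; acc: (N, 3) in units of g."""
        angles = np.zeros((len(gyro), 2))
        pitch = roll = 0.0
        for i, (g, a) in enumerate(zip(gyro, acc)):
            acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
            acc_roll = np.arctan2(a[1], a[2])
            pitch = alpha * (pitch + g[0] * dt) + (1 - alpha) * acc_pitch
            roll = alpha * (roll + g[1] * dt) + (1 - alpha) * acc_roll
            angles[i] = (pitch, roll)
        return angles

    rng = np.random.default_rng(2)
    gyro = rng.normal(0, 0.01, size=(500, 2))                  # synthetic rates
    acc = np.tile([0.0, 0.0, 1.0], (500, 1)) + rng.normal(0, 0.02, (500, 3))
    print("final pitch/roll estimate:", complementary_filter(gyro, acc)[-1])
    ```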

    Contact person:
    Gwenn Englebienne
    g.englebienne@utwente.nl


COMPANIES, EXTERNAL RESEARCH INSTITUTES, AND END USER ORGANISATIONS

Here you find some of the organisations that are willing to host master students from HMI. Keep in mind that you are not allowed to have both an external (non-research institute) internship and an external final assignment. If you work for a company that is interested in providing internships or final assignments please contact D.K.J.Heylen[at]utwente.nl

  • Biobank catalogue using automated informational retrieval and AI solutions @ Amsterdam UMC, NL

    Background
    Cancer Center Amsterdam connects cancer researchers and care professionals within Amsterdam UMC. To facilitate translational cancer research (i.e. the validation of laboratory findings within a clinical context, for example using blood or urine samples collected from patients with cancer), a central biobanking organization has been established. This Liquid Biopsy Center (LBC) collects blood and urine samples through centralized logistics and harmonized protocols and makes these samples available to cancer researchers. For good research, it is of the greatest importance that samples are clinically annotated with relevant information on diagnosis, disease stage, and treatment interventions and outcomes. This information is recorded in the hospital electronic health record (EHR), mostly in the form of free text. Consequently, clinical data management is still mostly done through manual data entry into a clinical database. Automated solutions are needed to improve this labor-intensive and inefficient way of data management.

    Methods
    LBC supports 16 biobank projects based on related tumor types (for example lung cancer, colon cancer, hematological tumors). Over all projects, more than 4500 patients have donated almost 9000 samples that are stored in freezers for future research projects. Using extractions of structured data from the EHR, a first version of a comprehensive sample dashboard has been created for the lung cancer biobank using Power BI. Retrieval of specific samples through this dashboard can be greatly improved if information from the EHR stored as free text – for example from radiology or pathology reports – can be added. This project aims to improve the existing dashboard as well as to expand and tailor it to other biobanks.
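
    As a hint of what processing the free text could look like, the sketch below applies spaCy's general-purpose Dutch model to a made-up radiology snippet. An actual solution would need a clinically trained model or custom rules, and would have to run inside the secured hospital environment:

    ```python
    # Illustrative sketch: extract candidate entities and clinical keywords from
    # a Dutch free-text report. Requires: python -m spacy download nl_core_news_sm
    import spacy

    nlp = spacy.load("nl_core_news_sm")
    report = ("CT-thorax: nieuwe laesie in de linker onderkwab, "
              "verdacht voor metastase. Advies: PET-CT.")

    doc = nlp(report)
    for ent in doc.ents:
        print("entity:", ent.text, "->", ent.label_)

    # General-purpose models miss domain terms; simple pattern matching or a
    # custom-trained component would complement them.
    for token in doc:
        if token.lower_ in {"laesie", "metastase"}:
            print("clinical keyword:", token.text)
    ```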

    End product
    The envisioned end product is a user dashboard that couples all available information from different databases, including free text from the EHR. The dashboard can be used to retrieve specific samples requested by cancer researchers for use in a dedicated research project. The first setup in Power BI can be used as a basis for this project. Use of other platforms can be discussed, but restrictions on working with sensitive health data apply. This means that data cannot leave the secured hospital environment and only platforms that work within the (remote) hospital server can be used. If successful, the end product will be widely shared with researchers to set an example of improved data management solutions.

    Learning experience
    This project offers the student the opportunity to get acquainted with the academic health care and research sector, along with its promises and pitfalls regarding the use of health care data for research, in this instance specifically cancer biobank research. Innovations in the use of data for improved health care planning and research are urgently needed, and at the same time hampered by strict laws and regulations. This project offers the opportunity to learn about all aspects of medical research and at the same time work on technical solutions that will actually improve research.

    Contact person Human Media Interaction (UT):
    Shenghui Wang, shenghui.wang@utwente.nl

  • Virtual Reality Safety Training @ Heijmans - Rosmalen, NL

    Heijmans is a listed company that combines activities related to property development, construction & technical services and infrastructure in the fields of Living, Working and Connecting. Heijmans realizes projects for home buyers, companies and government entities and together we are building on the spatial contours of tomorrow.

    Heijmans applies Extended Reality (XR) in several ways, of which Virtual Reality safety training is currently the most widely adopted application of XR. Other XR applications that we are exploring are Mixed Reality Mock-ups, Augmented Reality (tablets) on construction sites and multi-stakeholder collaboration in VR Building Information Models.

    Our VR safety training modules are made specifically for Heijmans workmen and are aimed at improving safety awareness and proactive safety behavior. In these modules, workmen go through a scenario in VR that can end in a virtual incident if one or more procedural steps regarding safety are performed incorrectly. In this way, workmen are 'learning by doing', and unsafe situations at work can be simulated in a safe environment.

    We are currently looking for students who can enrich our understanding of topics like, but not limited to:

    • Learning effect of VR safety modules
    • User experience of VR safety modules
    • Multiplayer coordination / effect in VR safety training

    If you are interested in exploring XR interaction in a large company with an interesting target group, please contact us. All ideas are welcome; we are sure that we can set up a great research case together!

    Contact Heijmans: Thomas Smits (tsmits2@heijmans.nl) and Bert Weeda (bweeda@heijmans.nl)

    Contact HMI: Dennis Reidsma (d.reidsma@utwente.nl)

    Article about VR at Heijmans: Veiliger werken dankzij virtual reality ('Working more safely thanks to virtual reality') | Heijmans N.V.

  • Internships at Soundlab, for music/sound-minded creative designers and builders @ Wilmink theatre, ArtEZ conservatory Enschede and HMI, NL


    Wilmink theatre and ArtEZ conservatory Enschede collaborate in the development of Soundlab Enschede: a semi-permanent workshop for sound exploration and interactive sound experience. At Soundlab, children and adults explore sound and music through existing and new acoustic and technology-enhanced or digital musical instruments, and interactive sound installations.

    Two years ago we started with the development of Soundlab. During the course of four months, five music teachers-in-training and a UT student of the MSc Interaction Technology collaboratively created a singing app and a music game for the interactive projection floor at the entrance of the DesignLab. Last year this was continued with seven music teachers-in-training and two UT students of the MSc Interaction Technology, who collaboratively created interactive sound installations.

    This year we want to continue development and expand the number of available technologies visitors of Soundlab can explore. Therefore, we have two internships vacant.

    As an intern

    • You will help to innovate music education
    • You will work in collaboration with ArtEZ students (music teachers) and supervisors of Soundlab (both ArtEZ and Wilmink Theatre)
    • You will work partly at Wilmink theatre and ArtEZ Conservatory Enschede (Muziekkwartier), and partly at the UT (e.g. DesignLab)
    • You will participate in (further) developing and building the interaction technology of previous years
    • You will design and build new interactive technology, including ideation, tests, and evaluation together with students of ArtEZ and children of primary schools

    Partner of Soundlab Enschede is the already established Soundlab in Amsterdam (see:
    https://www.muziekgebouw.nl/pQoxDSw/jeugd/soundlab).

    Contact

    Benno Spieker (ArtEZ Conservatory and PhD-student at UT) for more info: b.p.a.spieker@utwente.nl

  • Digital solutions for livestock management @ Nedap - Groenlo, NL

    Nedap Livestock Management develops digital solutions for, among others, dairy farming and pig farming. They are open to internships and thesis projects; some examples of possible project topics can be found below. If you are interested, feel free to contact Robin Aly for more information.

    Nedap contact: Robin Aly, robin.aly@nedap.com
    HMI contact: Dennis Reidsma, d.reidsma@utwente.nl

    Virtual Fencing
    The goal of this project is to define a product concept for virtual fencing. Cows on pastures need enough feed to graze. Farmers face the challenge of managing the available land so that the herd constantly has sufficient feed available. Traditional approaches to this problem move physical fences to direct a herd to fresh pastures. However, this process is labor intensive and slow to react to changing conditions. Virtual fencing [1-4] has recently been defined as a means of interacting with cows based on their location, using a reward and punishment system to give them incentives to move to more suitable pastures. This project will investigate solutions for a virtual fencing product.

    The project will start with an ideation process with farmers that will define potentially feasible cow-locating solutions, ways to define a virtual fence, and ways to interact with cows to steer them. In a second step, at least one of these ideas will be extended to a high-fidelity prototype and evaluated for its performance.
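
    At its core, a virtual fence is a polygon of GPS coordinates against which each collar position is tested. The sketch below shows this geofence check with shapely; the coordinates and cueing logic are illustrative only:

    ```python
    # Illustrative sketch: the core geofence check behind virtual fencing.
    from shapely.geometry import Point, Polygon

    # Virtual fence as (longitude, latitude) vertices of the allowed pasture.
    fence = Polygon([(6.850, 52.240), (6.855, 52.240),
                     (6.855, 52.243), (6.850, 52.243)])

    def check_cow(lon, lat):
        if fence.contains(Point(lon, lat)):
            return "inside fence: no cue"
        # Real systems escalate cues, e.g. an audio warning before a stimulus.
        return "outside fence: issue cue"

    print(check_cow(6.852, 52.241))   # inside
    print(check_cow(6.858, 52.241))   # outside
    ```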

    [1] https://www.wur.nl/en/article/Virtual-fencing-grazing-without-visible-borders.htm
    [2] https://www.smartcompany.com.au/startupsmart/news/halter-29-million-agtech-dairy-cows/
    [3] Anderson, D. M. (2007). Virtual fencing – past, present and future. The Rangeland Journal, 29(1), 65-78.
    [4] Campbell, D. L., Lea, J. M., Haynes, S. J., Farrer, W. J., Leigh-Lancaster, C. J., & Lee, C. (2018). Virtual fencing of cattle using an automated collar in a feed attractant trial. Applied Animal Behaviour Science, 200, 71-77.

    Locating Pigs
    The goal of this project is to provide farmers with means to locate their pigs. Nowadays, professional pig farms can have thousands of pigs. When pigs are kept in group housing, farmers are often faced with the task of locating individual pigs in these groups, for example to diagnose or treat illnesses. Locating pigs, however, is currently a cumbersome and time-consuming task, as groups can be large and, at present, an individual pig can only be identified up close. This project requires end-to-end development of a product concept following the design thinking process, including ideation with stakeholders and the creation of a prototype.

    Potential solutions to pig locating include centralized positioning systems, collaborative positioning systems, systems that detect crossings between demarcated areas, and systems sensing the coarse area where a pig resides. These solutions are constrained by the investment they ask from the farmer and differ in how well they satisfy the farmer's need for pig locating. Therefore, the project will start with an ideation session with farmers and other stakeholders to define a suitable solution space. The ideation will be supported by low-fidelity prototypes that facilitate the discussion about the concepts. An important output of the ideation is also the identification of key performance measures that can be used to judge the quality of a system.

    Based on the output of the ideation step, at least one high-fidelity prototype will have to be developed and evaluated.
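
    As an impression of the technical side, the sketch below estimates a 2D position from noisy distance measurements to fixed anchors (e.g. UWB beacons, as in reference [1] below) using linear least squares; the anchor layout and measurements are made up:

    ```python
    # Illustrative sketch: trilateration from anchor distances via least squares.
    import numpy as np

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])
    true_pos = np.array([4.0, 3.0])
    rng = np.random.default_rng(3)
    dists = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

    # Subtracting the first anchor's range equation linearises the problem:
    # ||x - a_i||^2 - ||x - a_0||^2 = d_i^2 - d_0^2   =>   A x = b
    A = 2 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))

    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated position:", np.round(estimate, 2))
    ```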

    [1] Zhuang, S., Maselyne, J., Van Nuffel, A., Vangeyte, J., & Sonck, B. (2020). Tracking group housed sows with an ultra-wideband indoor positioning system: A feasibility study. Biosystems Engineering, 200, 176-187.
    [2] Koyuncu, H., & Yang, S. H. (2010). A survey of indoor positioning and object locating systems. IJCSNS International Journal of Computer Science and Network Security, 10(5), 121-128.
    [3] Fukuju, Y., Minami, M., Morikawa, H., & Aoyama, T. (2003, May). DOLPHIN: An autonomous indoor positioning system in ubiquitous computing environment. In Proceedings IEEE Workshop on Software Technologies for Future Embedded Systems (WSTFES 2003) (pp. 53-56). IEEE.
    [4] Mainetti, L., Patrono, L., & Sergi, I. (2014, September). A survey on indoor positioning systems. In 2014 22nd International Conference on Software, Telecommunications and Computer Networks (SoftCOM) (pp. 111-120). IEEE.

  • Adding Speech to Multi Agent Dialogues with a Council of Coaches @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL

    Context

    In the EU project Council of Coaches (COUCH) we are developing a team of virtual coaches that can help older adults achieve their health goals. Each coach offers insight and advice based on their expertise. For example, the activity coach may talk about the importance of physical exercise, while the social coach may ask the user about their friends and family. Our system enables fluent multi-party interaction between multiple coaches and our users; in addition to talking directly with the user, the coaches may also have dialogues amongst themselves. Integration of full spoken interaction with the platform developed in COUCH will make a major leap possible for our embodied agent projects.

    More information: https://cordis.europa.eu/project/id/769553 

    Challenge

    Currently in COUCH the user interacts with the coaches by selecting one of several predefined multiple-choice responses on a tablet or computer interface. Although this is a reliable way to capture input from the user, it may not be ideal for our target user group of older adults. Perhaps spoken dialogue can offer a better user experience?

    In the past, researchers found that it was quite difficult to sustain dialogues that relied on automatic speech recognition (ASR) (see, e.g., Bickmore & Picard, 2005 [1]). However, recent commercial systems like Apple's Siri and Amazon's Alexa offer considerable improvements in recognising users' speech. Such state-of-the-art systems might now be sufficiently reliable for supporting high-quality spoken dialogues between our coaches and the user.

    Assignment

    In your project you will adapt the COUCH system to support spoken interactions. In addition to incorporating ASR, you will investigate smart ways to organise the dialogue to facilitate adequate recognition in noisy and uncertain settings while keeping the conversation going. Finally, you will also evaluate the user experience and the quality of dialogue progress in various settings, and thereby the suitability of state-of-the-art speech recognition for live, open-setting spoken conversation.
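
    As a quick impression of prototyping spoken input (the real assignment is the integration with the COUCH platform itself), the sketch below uses the Python SpeechRecognition package as a stand-in for whatever ASR engine is eventually chosen:

    ```python
    # Illustrative sketch: capture one spoken utterance and transcribe it.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # relevant in noisy settings
        print("Say something to the coaches...")
        audio = recognizer.listen(source)

    try:
        text = recognizer.recognize_google(audio, language="nl-NL")
        print("Recognised:", text)
    except sr.UnknownValueError:
        # Graceful fallbacks like this are where 'keeping the conversation
        # going' starts.
        print("Sorry, I did not catch that. Could you repeat it?")
    ```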

    You will carry out the work in collaboration between Roessingh Research and Development (http://www.rrd.nl/) and researchers at the Human Media Interaction group of the University of Twente.

    Contact

    Dennis Reidsma (d.reidsma@utwente.nl)

    [1] Timothy W. Bickmore and Rosalind W. Picard. 2005. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact. 12, 2 (June 2005), 293–327. DOI: https://doi.org/10.1145/1067860.1067867

  • Large-scale data mining & NLP @ OCLC - Leiden, NL

    OCLC is a global library cooperative that provides shared technology services, original research and community programs for its membership and the library community at large. Collectively with member libraries, OCLC maintains WorldCat, the world’s most comprehensive database of information about library collections. WorldCat now hosts more than 460 million bibliographic records in 483 languages, aggregated from 18,000 libraries in 123 countries.

    As WorldCat continues to grow, OCLC is actively exploring data science, advanced machine learning, linked data and visualisation technologies to improve data quality, transform bibliographic descriptions into actionable knowledge, provide more functionality for professional cataloguers, and develop more services for end users of the libraries.

    OCLC is constantly looking for students who are enthusiastic about advancing AI technologies for library and other cultural heritage data. Examples of student assignments are:

    • Fast and scalable semantic embedding for Information Retrieval (see the sketch after this list)
    • eXtreme Multi-label Text Classification (XMTC) for automatic subject prediction
    • Automatic image captioning for Cultural Heritage collections
    • Entity extraction and disambiguation
    • Entity matching across different media (e.g. books, articles, cultural heritage objects, etc.) or across languages
    • Hierarchical clustering of bibliographic records
    • Constructing knowledge graphs around books, authors, subjects, publishers, etc.
    • Interactive visualisation of library data on geographic maps and/or along a time dimension
    • Concept drift (i.e., how meaning changes over time) and its effects on Information Retrieval
    • Scientometrics-related topics based on co-authoring networks and/or citation networks
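
    To illustrate the first topic, the sketch below embeds a few made-up bibliographic titles with the sentence-transformers library and ranks them against a query by cosine similarity. The model name is one public example, not OCLC's actual setup:

    ```python
    # Illustrative sketch: semantic search over bibliographic titles.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    records = [
        "Introduction to information retrieval",
        "A survey of knowledge graph construction",
        "Deep learning for image captioning",
    ]
    query = "building knowledge graphs from library metadata"

    record_emb = model.encode(records, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_emb, record_emb)[0]
    for score, record in sorted(zip(scores.tolist(), records), reverse=True):
        print(f"{score:.3f}  {record}")
    ```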

    More details are available on request. 

    Contact: Shenghui Wang
    Email: shenghui.wang@utwente.nl

  • Robotics and mechatronics @ Heemskerk Innovative Technology (Delft)

    Company Information:

    Heemskerk Innovative Technology provides advice and support to innovative high-tech projects in the field of robotics and mechatronics. Our mission: to convert basic research into innovative business concepts and real-world applications by creating solutions for performing actions in places people themselves cannot reach, making the world smaller and better integrated, in an intuitive way.

    Focus areas:
    Haptics
    Dexterous manipulation
    Master-slave control
    Dynamic contact
    Augmented Reality

    https://heemskerk-innovative.nl

    Example assignments (to be carried out in the first half of 2021):

    Current assignments focus on user-robot interaction, object detection and autonomous object manipulation in real-life settings, and human detection and tracking for navigation in human-populated environments, as part of developing the ROSE healthcare robot. A background in C++/Python and ROS is an advantage for students working on these assignments.

    Contact:
    Mariët Theune (EEMCS) <m.theune@utwente.nl>

  • Addiction, Coaching and Games @ Tactus - Enschede, NL

    Tactus is specialized in the care and treatment of addiction. They offer help to people who suffer from problems as a result of their addiction to alcohol, drugs, medication, gambling or eating. They help by identifying addiction problems as well as preventing and breaking patterns of addiction. They also provide information and advice to parents, teachers and other groups on how to deal with addiction.

    Assignment possibilities include developing game-like support and coaching apps.

    Website: https://www.tactus.nl/enschede

    Contact: Randy Klaassen

  • Stories and Language @ Meertens Institute - Amsterdam, NL

    The Meertens Institute, established in 1926, has been a research institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 1952. They study the diversity of language and culture in the Netherlands, with a focus on contemporary research into factors that play a role in determining social identities in Dutch society. Their main fields are:

    • ethnological study of the function, meaning and coherence of cultural expressions
    • structural, dialectological and sociolinguistic study of language variation within Dutch in the Netherlands, with the emphasis on grammatical and onomastic variation.

    Apart from research, the institute also concerns itself with documentation and providing information to third parties in the field of Dutch language and culture. The institute possesses a large library, with numerous collections and a substantive documentation system, of which databases are a substantial part.

    Assignments include text mining, text classification, and language technology, but also usability and interaction design.

    Website of the institute: http://www.meertens.knaw.nl/cms/

    Contact: Mariët Theune

  • Language and Retrieval @ Elsevier - Amsterdam, NL

    Elsevier is the world's biggest scientific publisher, established in 1880. Elsevier publishes over 2,500 impactful journals including Tetrahedron, Cell and The Lancet. Flagship products include ScienceDirect, Scopus and Reaxys. Increasingly, Elsevier is becoming a major scientific information provider. For specific domains, structured scientific knowledge is extracted for querying and searching from millions of Elsevier and third-party scientific publications (journals, patents and books). In this way, Elsevier is positioning itself as the leading information provider for the scientific and corporate research community.

    Assignment possibilities include text mining, information retrieval, language technology, and other topics.

    Contact: Mariët Theune

  • Interactive Technology for Music Education @ ArtEZ - Enschede, NL

    The bachelor Music in Education of ArtEZ Academy of Music in Enschede increasingly profiles itself with a focus on technology in service to music education. Students and teachers apply digital learning methods for teaching music and they experiment with all kinds of digital instruments and music apps. Applying technology in music education goes beyond the application of these tools. Interactive music systems have potential in supporting (pre-service) teachers in teaching music in primary education. Still, much research needs to be done. 

    Current questions include: What is an optimal medium for presenting direct feedback on the quality of rhythmic music making? What should this feedback look like? 

    HMI students are warmly invited to contribute to this research by creating applications concerning feedback and visualisations for rhythmic music making in primary education. Design playful, interactive musical instruments that engage children in playing rhythms together. Or come up with interactive (augmented) solutions that support teachers in guiding children as they make music.

    You will work in collaboration with one of the main teachers in the bachelor Music in Education, who is doing his PhD project on this topic.

    Contact: Benno Spieker, Dennis Reidsma

  • Using (neuro)physiological signals @ TNO -- Soesterberg, NL

    At TNO Soesterberg (department of Perceptual and Cognitive Systems) we investigate how we can exploit physiological signals such as EEG brain signals, heart rate, skin conductance, pupil size and eye gaze in order to improve (human-machine) performance and evaluation. An example of a currently running project is predicting individual head rotations from EEG in order to reduce delays in streaming images in head-mounted displays. Other running projects deal with whether and how different physiological measures reflect food experience. Part of the research is done for international customers.
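
    As an impression of a typical first step in this kind of work, the sketch below computes per-band EEG power from a Welch power spectral density with scipy; the signal is synthetic:

    ```python
    # Illustrative sketch: per-band power features from a (synthetic) EEG signal.
    import numpy as np
    from scipy.signal import welch

    fs = 256                                  # sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(4)
    eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)  # 10 Hz "alpha"

    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
        print(f"{name} band power: {power:.3f}")
    ```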

    More examples of projects, as reflected in papers, can be found on Google Scholar.

    We welcome students with skills in machine learning and signal processing, and/or students who would like to set up experiments, work with human participants, and use advanced measurement technology.

    Contact: Jan van Erp <j.b.f.vanerp@utwente.nl>

  • AR for Movement and Health @ Holomoves - Utrecht, NL

    Holomoves is a company in Utrecht that combines Hololens Augmented Reality with expertise in health and physiotherapy, to offer new interventions for rehabilitation and healthy movement in a medical setting. Students can work with them on a variety of assignments including design, user studies, and/or technology development.

    More information on the company: https://holomoves.nl/

    Contact person: Robby van Delden, Dennis Reidsma 

  • Artificial Intelligence & NLP @ Info Support - Veenendaal, NL

    Info Support is a software company that makes high-end custom technology solutions for companies in the financial technology, health, energy, public transport, and agricultural technology sectors. Info Support is located in Veenendaal/Utrecht, NL with research locations in Amsterdam, Den Bosch, and Mechelen, Belgium.

    Info Support has extensive experience when it comes to supervising graduating students, with assignments that not only have scientific value, but also impact the clients of Info Support and their clients' clients. As a university-level graduating student, you will become part of the Research Center within Info Support. This is a group of colleagues who, on top of their job as a consultant, have a strong affinity with scientific research. The Research Center facilitates and stimulates scientific research, with the objective of staying ahead in Artificial Intelligence, Software Architecture, and Software Methodologies that will most likely affect our future.

    Various research assignments in Artificial Intelligence, Machine Learning and Natural Language Processing can be carried out at Info Support.

    Examples of assignments include:

    • finding a way to anonymize streaming data in such a way that it does not affect the utility of AI and Machine Learning models (see the sketch after this list)
    • improving the usability of Machine Learning model explanations to make them accessible for people without statistical knowledge
    • generating new scenarios for software testing, based on requirements written in a natural language and definitions of logical steps within the application
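
    To illustrate the first assignment above, the sketch below shows one textbook approach to anonymising numeric streaming data: adding calibrated Laplace noise in the style of differential privacy, after which the utility loss can be measured on a downstream model. All numbers are made up:

    ```python
    # Illustrative sketch: Laplace noise calibrated to sensitivity/epsilon.
    import numpy as np

    def laplace_mechanism(values, sensitivity, epsilon, rng):
        """Add Laplace noise with scale sensitivity/epsilon to each value."""
        return values + rng.laplace(0.0, sensitivity / epsilon, size=values.shape)

    rng = np.random.default_rng(5)
    stream = rng.normal(50, 10, size=1000)    # hypothetical sensitive stream

    for epsilon in (0.1, 1.0, 10.0):
        private = laplace_mechanism(stream, sensitivity=1.0, epsilon=epsilon, rng=rng)
        print(f"epsilon={epsilon}: mean absolute distortion "
              f"{np.mean(np.abs(private - stream)):.2f}")
    ```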

    More details are available on request.

    Contact: Mariët Theune