If you as a bachelor's or master's student are looking for a final thesis assignment, capita selecta, research topic, or internship, you can choose between a large number of internal (at HMI) and external (at a company, research institute, or user organisation) assignments. You can also choose to create one yourself. The search for a suitable location or a suitable topic often starts here: below you will find a number of topics you can work on.
Note that in preparation for a final MSc assignment you must carry out a Research Topics assignment, resulting in a final project proposal. For Research Topics, you need to hand in this form to the bureau of educational affairs (BOZ).
- Accessing large-scale knowledge graphs via conversations in virtual environments @ HMI - Enschede, NL
In the past decades, more and more Cultural Heritage institutions, such as libraries, museums, galleries and archives, have launched large-scale digitisation processes that result in massive digital collections. This not only ensures the long-term preservation of cultural artefacts in their digital form, but also allows instant online access to resources that would otherwise require physical presence, and fosters the development of applications like virtual exhibitions and online museums. By embracing Linked Open Data (LOD) principles and Knowledge Graph (KG) technologies, rich legacy knowledge in these CH collections has been transformed into a form that is shareable, extensible and easily re-usable. Relevant entities (e.g. people, places, events), their attributes and their relationships are formally represented using international standards, resulting in knowledge graphs that both humans and machines are able to understand and reason about.
However, exploring large-scale knowledge graphs is not trivial. Traditional keyword-based search is not ideal for exploring such graph-structured data. A museum visitor may start from a certain creative work and move on to the different types of entities associated with it, such as its creator, the place where it was created, or relevant events that happened at the time of its creation. The visitor can follow the links in the knowledge graph to discover what is interesting to them. Thanks to the LOD principles, external knowledge graphs can also be accessed along the way. The challenge is how to make this huge amount of information accessible to visitors in an appealing and intuitive manner, so that the interaction between visitors and the knowledge graphs becomes meaningful and enjoyable. Such interaction needs to take into account the visitors' cognitive and information-processing capabilities as well as their personal interests and cultural backgrounds.
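To make the link-following idea concrete, here is a minimal sketch in plain Python. The triples are a tiny hypothetical hand-made graph (not a real LOD endpoint), and the exploration is modelled as a simple breadth-first walk from a creative work to the entities it is linked to:

```python
# Minimal sketch (hypothetical data): exploring a knowledge graph by
# following links from a creative work to related entities.
TRIPLES = [
    ("The Night Watch", "createdBy", "Rembrandt"),
    ("The Night Watch", "createdIn", "Amsterdam"),
    ("Rembrandt", "bornIn", "Leiden"),
    ("Amsterdam", "locatedIn", "Netherlands"),
]

def neighbours(entity):
    """Return (predicate, object) pairs reachable from an entity in one hop."""
    return [(p, o) for s, p, o in TRIPLES if s == entity]

def explore(start, depth=2):
    """Breadth-first walk over the graph, like a visitor following links."""
    seen, frontier, paths = {start}, [start], []
    for _ in range(depth):
        nxt = []
        for e in frontier:
            for p, o in neighbours(e):
                paths.append((e, p, o))
                if o not in seen:
                    seen.add(o)
                    nxt.append(o)
        frontier = nxt
    return paths

for s, p, o in explore("The Night Watch"):
    print(f"{s} --{p}--> {o}")
```

In a real system the triples would come from a SPARQL endpoint of a Cultural Heritage institution (or an external LOD source), and the walk would be steered by the dialogue and the user's interests rather than by exhaustive expansion.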
At HMI, we are investigating how to access large-scale KGs via natural language conversations in virtual environments, and we welcome students to work on different aspects of this research:
- Developing a KG-based conversational museum guide that models the user's interests, introduces art objects, answers questions, provides recommendations, etc.
- User interest modelling
- Conversational KG-based question answering and recommendation
- Natural language generation with dynamic subgraph extraction
- Mixed-initiative dialogue management with KGs
- Integrating a conversational agent into a virtual reality environment
- Multi-modal input for user interest detection, including the user's utterances, eye gaze, speech emotion, gestures, etc.
- Multi-modal responses in virtual reality, including text, voice, highlights, etc.
- Effective and ethical design in collaboration with Humanities researchers and/or Cultural Heritage institutions.
- Effective or inappropriate: using visitors’ (cultural?) background/profile to generate personalised narratives
- Methods for visitor-agent interaction that allow for collaborative creation of narratives about cultural history; this could include research on community-driven artifact labelling/label correction, object selection, etc.
- Research on increasing the affective impact of interactive virtual environments (for cultural heritage), with the goal of increasing visitors’ a) knowledge, b) sense of responsibility, c) sustained interest, or d) other positive outcomes on the topic of inclusivity
- Increasing the interest and participation of groups of people who may not typically feel attracted to or included in museum exhibitions, e.g. children
Contact: Shenghui Wang (email@example.com)
- Can I touch you online? - Investigating the interactive touch experience of the “Touch my Touch” art installation @ HMI - Enschede, NL
We touch the screen of our cell phone more often than we touch our friends. We stroke and 'swipe' our screen in search of a loved one. Meanwhile, in public spaces, we touch each other, and watch each other being touched, less and less. Pandemic regulations have only further increased this physical isolation. “Touch my touch”, or TouchMyTouch.net, designed by artist duo Lancel and Maat, is a critical new composition of face recognition, merging and streaming technologies for a poetic encounter through touching and being touched. TouchMyTouch.net comprises a streaming platform for online touch and an interactive installation built on that platform. The interactive installation will be at the UT for a number of weeks during the second semester of 2022. You can try the online platform with a partner here: https://upprojects.com/projects/touch-my-touch
This master's assignment relates to the physical “touch my touch” installation. You will define a research question in relation to the touch experience evoked by the art installation. For more information, send an email to Judith Weda (firstname.lastname@example.org).
- Coaching for breathing with haptic stimulation @ HMI - Enschede, NL
Breathing contributes fundamentally to well-being, both physiologically and psychologically. Accordingly, a number of well-being practices build on breathing techniques, such as yoga, Tai Chi, meditation, the Wim Hof method, and many more. There are also a number of technological products that offer support for breathing, such as the Spire or smartwatch apps.
In this master's project we want to explore the possibilities of supporting breathing with haptic stimulation and feedback. Stimulation can be used to teach breathing patterns; feedback can signal whether breathing is within the intended range. For this work we will focus on vibration motors for haptic stimulation. Relevant questions include: where should vibration motors be positioned, and which stimulation and feedback patterns are comfortable and effective?
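As an illustration of what a stimulation pattern could look like, the sketch below generates a vibration intensity envelope for one paced-breathing cycle. The timings and the 0..1 intensity scale are illustrative assumptions, not a validated design:

```python
# Sketch (illustrative timings): a vibration intensity envelope for paced
# breathing, e.g. 4 s inhale / 6 s exhale. A rising intensity cues the
# inhale, a falling intensity cues the exhale; the values (0..1) could be
# mapped to PWM duty cycles of a vibration motor.
def breathing_envelope(inhale_s=4.0, exhale_s=6.0, rate_hz=10):
    """Return one breath cycle as a list of motor intensities in [0, 1]."""
    n_in = int(inhale_s * rate_hz)
    n_out = int(exhale_s * rate_hz)
    inhale = [i / (n_in - 1) for i in range(n_in)]        # ramp up
    exhale = [1 - i / (n_out - 1) for i in range(n_out)]  # ramp down
    return inhale + exhale

cycle = breathing_envelope()
print(f"{len(cycle)} samples, peak intensity {max(cycle):.1f}")
```

On actual hardware these intensities would drive the motors via PWM, and whether such ramps are comfortable and effective is exactly what the project would evaluate.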
Contact: Angelika Mader, email@example.com
Supervisors: Angelika Mader, Geke Ludden
- MiniSoccerbal 2.0 @ HMI - Enschede, NL
Supervisors: Robbert van Middelaar & Aswin Balasubramiam, Jasper Reenalda, Dennis Reidsma
Student assignments: Bachelor's / master's
Educational programme: (BMT/BME), EE, CreaTe, iTech, CS
This project focuses on extending and improving the already existing MiniSoccerbal (www.MiniSoccerbal.com). The MiniSoccerbal is a small ball (size 2) connected to a cord. With this cord, a player can control the ball in the space around him/her. The end of the cord is always connected to the player, so the ball never leaves the area around the player. The MiniSoccerbal is already used by many youngsters (7-13 y/o) at (professional) football clubs, and over 100 training exercises are already available for it.
Currently, no relevant parameters are measured to analyse performance. To extend beyond its current capability, the MiniSoccerbal needs to be improved: it should measure useful data, allow users to view that data, and let them interact with the data and with other people through an interactive application.
A first project focused on instrumenting the ball with a sensor to count the number of touches and measure the velocity of the ball. This system worked quite well; however, we want to take it to the next level, using deep learning models and the player's own phone camera.
Cameras can be used to track objects: think of security cameras tracking people, but methods also exist to track, for example, a football on a pitch.
This project will focus on deep learning models to track the ball with a phone camera. In this way, we can quantify the movements of the MiniSoccerbal, such as the number of touches, the velocity of the ball, or an evaluation of whether an exercise is performed correctly, without attaching any sensors to the player or ball.
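Whatever detector is eventually used, the downstream step is turning per-frame ball positions into the metrics mentioned above. The sketch below uses hypothetical pixel coordinates and an assumed calibration factor; in a real pipeline the positions would come from the deep learning model:

```python
# Sketch: deriving velocity and touch counts from a per-frame ball track.
# Positions are hypothetical pixel coordinates (x, y); a real pipeline
# would get them from a detector and convert pixels to metres via a
# camera calibration.
import math

def speeds(track, fps=30.0, m_per_px=0.005):
    """Per-frame ball speed (m/s) from consecutive (x, y) detections."""
    out = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        d_px = math.hypot(x1 - x0, y1 - y0)
        out.append(d_px * m_per_px * fps)
    return out

def count_touches(speed, threshold=2.0):
    """Count jumps from slow to fast ball movement as candidate touches."""
    return sum(1 for a, b in zip(speed, speed[1:]) if a < threshold <= b)

track = [(100, 100), (101, 100), (130, 110), (160, 120), (161, 121), (190, 130)]
v = speeds(track)
print("touches:", count_touches(v))
```

The threshold-based touch heuristic is deliberately naive; evaluating exercise correctness would need richer features of the track.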
This project will be a collaboration between Twentsche Voetbal School/Minisoccerbal, Biomedical Signals and Systems (BSS) and Human Media Interaction (HMI) and is part of the Sports, data & interaction project. The methods will also be implemented in other projects, to quantify other sports movements.
Experience with deep learning models and programming is required.
More information can be found on: http://sports-data-interaction.com/research
- Mediated social touch through a physical avatar and a wearable @ HMI - Enschede, NL
The aim of this master's project is to develop a fully functioning social haptic device divided into two main components: a haptic coating for a humanoid robot serving as an avatar, and a wearable haptic vest/suit. The project's workload is divided into four parts: (1) the design and development of a social touch coat for the robot body, (2) the selection or development of haptic sensors to be distributed over the avatar's upper body, (3) the design and development of a pneumatic haptic vest that can be used for both perception experiments and mediated touch, and (4) testing and experimenting with a TacticSuit haptic vest (from BHaptics [vibration]). You will find more information on each part below. We intend to have one student working on each part. We support and encourage collaboration between students within the project.
The ANA Avatar Xprize is an international competition in which different teams design and develop their own fully functioning avatar and test it in different scenarios, from maintenance tasks to human-human remote interactions. HMI participates in team i-Botics, which qualified for the semifinals in September 2021. During this competition, we develop our avatar's ability to allow advanced social remote interaction between the remote controller of the avatar and the recipient at the avatar's location. To that end, this master's project has been designed to focus on one important aspect of social interaction: touch.
You can find more information on https://avatar.xprize.org/prizes/avatar
The WEAFING project is an EU Horizon 2020 project that aims to develop a wearable for social touch made out of electroactive textile. Electroactive textile is a textile woven or knitted with electroactive yarn. This yarn contracts or expands when an electrical current is applied. Depending on the morphology of the textile, we can imagine different types of haptic sensations on the skin. The current focus is the pressure sensation that such a garment could generate.
At the UT we do perception research, which is key to defining the specifications of the electroactive textile. Since the textile is still in development, we use substitute materials to explore the perception parameters of pressure applied to different parts of the body in psychophysical studies.
You can find more information on weafing.eu
Part 1 : Designing a social touch skin for a humanoid robot avatar
This part of the project concerns the design, production and testing of a humanoid robot avatar's “skin” to be used during social touch interaction between the avatar (piloted by a controller) and a recipient who is actually touching the robot. For this part, we expect the student to carry out a study of different materials and sensors that could be used for the coating, to design the required product, and to test it with a physical robot.
There are no clear limitations on the selection of the material. However, some measurement criteria will be clearly emphasised during the project. We will offer help and collaboration in the search for an adequate material. The selected sensing method should cover the whole upper body of the robot and should be as unobtrusive as possible. There may be multiple ways to approach this task; we expect the student to find a viable and efficient solution within the limitations that will be provided during the project, such as the weight, shape or size of the sensors. Some resources are also available at the HMI department, such as the CapSense vest, as a starting point for the investigation.
We are looking for master's students in interaction technology and/or embedded systems. Help will be provided with sewing and designing the “skin”. We will consider experience with sewing, haptic interaction and sensor data analysis a plus.
For more information, please contact Camille Sallaberry, firstname.lastname@example.org
Part 2 : Developing a pneumatic haptic vest for human-human remote touch interaction and psychophysical experiments
For this graduation assignment we are looking for a student to create a haptic (touch) vest with pneumatic actuators. The goal is to use the vest for both psychophysical experiments and mediated touch. The assignment is a collaboration between two projects, namely the WEAFING project (weafing.eu) and the UT entry for the X-prize.
In the WEAFING project we are developing a textile wearable that can give haptic feedback. To do this, we need to run perception experiments on pressure on the skin, using psychophysical methods to find the parameters of touch perception. A pneumatic vest can help us with these experiments.
Following these experiments we can use the vest for mediated touch applications. This includes mediated social touch and other mediated touch. Touch can for example be mediated through an avatar representing you at an alternative location, the use case of the X-prize project.
There are multiple ways to approach the assignment and multiple actuator options, such as McKibben muscles or silicone pockets. For psychophysical experiments it is key to measure the pressure in the actuator and to control it precisely. The vest should fit both men and women and a range of body types.
We are looking for master students in interaction technology or embedded systems. We will offer help with sewing and the vest design, but it is key that you have affinity with making. Experience with sewing is a plus.
There is a body of work to build on, namely a project on pneumatic actuator control, a sleeve with McKibben muscles, and silicone actuators.
For more information please contact Judith Weda, email@example.com
Part 3 : Vibratory Suit for human-human remote touch interaction
For this part of the project, we are looking for a student who will investigate the use and flexibility of vibration motors to reproduce different kinds of social contact/touch during social communication, in the context of an interaction between a robot avatar and a human. The student will also be expected to evaluate existing vibratory haptic suits, or to develop one, for the remote social touch experience.
The experience requires that the whole upper body of the avatar, with the exception of the hands, can be touched. We therefore expect the haptic suit to consist of a top with long sleeves. Some measurement criteria will also be clearly emphasised during the project.
As a starting point, the student may evaluate the TacticSuit from BHaptics. The aim should be to test the haptic vest/suit for social touch and determine its usability compared to other possible suits.
During the project, we also encourage the student to collaborate closely with the student on the “pneumatic vest” project, as both students may have to evaluate the social touch experience of both products.
We are looking for master students in interaction technology or embedded systems. We will consider experience with vibratory actuators and experience in social haptics as a plus.
For more information, please contact Camille Sallaberry, firstname.lastname@example.org
- Spoken Interaction with Conversational Agents and Robots @ HMI - Enschede, NL
Speech technology for conversational agents and robots has taken flight (e.g., Siri, Alexa), but we are not quite there yet. There are technical challenges to address: how can an agent display listening behaviour such as backchannels ("uh-uhm"), how can we recognize a user's stance/attitude/intent, how can we express intent without using words, and how can an agent build rapport with a user? There are also more human-centered questions: how should such a spoken conversational interaction be designed, how do people actually talk to an agent or robot, and what effect does a certain agent/robot behaviour (e.g., robot voice, appearance) have on a child's perception and behaviour in a specific context?
These are some examples of research questions we are interested in. Much more is possible. Are you also interested? Khiet Truong and Ella Velner can tell you more.
Contact: Khiet Truong email@example.com
- Automatic Laughter analysis in Human-Computer Interaction @ HMI - Enschede, NL
Laughter analysis is currently a hot topic in Human-Computer Interaction. Computer scientists study how humans communicate through laughter and how this can be implemented in automatic laughter detection and automatic laughter synthesis. Such tools would be very helpful in Human-Computer/Robot Interaction: voice assistants like Alexa and Google Assistant might understand more complex natural communication through the interpretation of social signals such as laughter, and might generate well-timed, appropriate and realistic laughter responses. Another application is multimedia laughter retrieval: automatically extracting laughter occurrences from large amounts of video and audio data, opening the way to building large laughter datasets. As a final example, laughter detection could also be used to study group behaviour or for automatic person identification.
However, there are several challenges in laughter research that need to be considered when aiming for automatic laughter analysis. For one, annotating laughter is a much-discussed challenge in the field; there are debates on how laughter should be segmented and labeled. Are there different kinds of laughs for different situations, and how do we label them? Do people have specific laughter profiles? What role does context play in laughter detection? What could a real-time implementation in a conversational agent look like, and for what purpose? Students can choose a more human-centered or a more technology-oriented direction.
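As a flavour of the technology-oriented direction, the sketch below computes two classic frame-level audio features (short-time energy and zero-crossing rate) on a synthetic signal and thresholds the energy. This is only a naive baseline to show where segmentation decisions arise; real laughter detectors would use richer features (e.g. MFCCs) and trained classifiers:

```python
# Sketch (naive baseline, synthetic signal): frame-level audio features of
# the kind often fed to laughter/speech detectors.
import math

SR = 16000   # sample rate (Hz)
FRAME = 400  # 25 ms frames at 16 kHz

def frames(signal, size=FRAME):
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def energy(frame):
    """Mean squared amplitude of a frame."""
    return sum(x * x for x in frame) / len(frame)

def zcr(frame):
    """Zero-crossing rate, a crude voicing/noisiness cue."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)

# synthetic test signal: 1 s of silence, then a loud 300 Hz "voiced" burst
silence = [0.0] * SR
burst = [0.5 * math.sin(2 * math.pi * 300 * t / SR) for t in range(SR // 2)]
signal = silence + burst

active = [i for i, f in enumerate(frames(signal)) if energy(f) > 0.01]
print("frames flagged as candidate laughter:", len(active))
```

Even this toy example exposes the annotation questions above: the threshold, frame size and what counts as one "laughter event" are all design decisions.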
This makes automatic laughter analysis an interesting goal. Students are invited to explore the topic and come up with an interesting question or challenge they want to address. You will be supervised by assistant professor Khiet P. Truong, an expert in laughter research and social signal processing (SSP), and PhD student Michel-Pierre Jansen, whose PhD work revolves around human laughter recognition and SSP.
Contact: Khiet Truong firstname.lastname@example.org
- Analysis of depression in speech @ HMI - Enschede, NL in cooperation with GGNet Apeldoorn, NL
The NESDO research (Nederlandse Studie naar Depressie bij Ouderen) was a large longitudinal Dutch study into depression in older adults (>60 years old). Older adults with and without depression were followed over a period of six years. Measurements included questionnaires, a medical examination and cognitive tests, and information was gathered about mental health outcomes and demographic, psychosocial and cognitive determinants. Some of these measurements were taken in face-to-face assessments. After the baseline measurement, face-to-face assessments were held after 2 and 6 years.
Currently, we have a few audio recordings available from the 6-year measurement, from depressed and non-depressed older persons. We are looking for a student (preferably with knowledge of Dutch) who is interested in performing speech analyses on these recordings, with the eventual goal of detecting depression in speech automatically.
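As an impression of the kind of analysis involved, the sketch below extracts two prosodic cues often examined in depressed speech, mean pitch (F0) and pause ratio, from a synthetic signal. A real analysis would use a dedicated toolkit (e.g. Praat or openSMILE) on the NESDO recordings; the thresholds here are illustrative:

```python
# Sketch (synthetic signal, illustrative thresholds): two prosodic cues,
# F0 via autocorrelation and the fraction of silent frames (pause ratio).
import math

SR = 8000  # sample rate (Hz)

def f0_autocorr(frame, sr=SR, fmin=75, fmax=400):
    """Crude F0 estimate: lag of the autocorrelation peak."""
    best_lag, best_val = 0, 0.0
    for lag in range(sr // fmax, sr // fmin):
        val = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return sr / best_lag if best_lag else 0.0

def pause_ratio(signal, frame=200, thresh=1e-4):
    """Fraction of frames whose energy falls below a silence threshold."""
    fr = [signal[i:i + frame] for i in range(0, len(signal) - frame + 1, frame)]
    silent = sum(1 for f in fr if sum(x * x for x in f) / frame < thresh)
    return silent / len(fr)

# 0.5 s of a 150 Hz "voice" followed by 0.5 s of silence
voiced = [0.3 * math.sin(2 * math.pi * 150 * t / SR) for t in range(SR // 2)]
signal = voiced + [0.0] * (SR // 2)
print(f"F0 ~ {f0_autocorr(voiced[:400]):.0f} Hz, pause ratio {pause_ratio(signal):.2f}")
```

Features like these (pitch variability, speech rate, pauses) are among those reviewed in the literature listed below.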
This work will be carried out in collaboration with Dr. Paul Naarding, GGNet Apeldoorn.
Contact: Khiet Truong email@example.com
- Cummins, N., Scherer, S., Krajewski, J., Schnieder, S., Epps, J., & Quatieri, T. F. (2015). A review of depression and suicide risk assessment using speech analysis. Speech Communication, 71, 10-49.
- Low, D. M., Bentley, K. H., & Ghosh, S. S. (2020). Automated assessment of psychiatric disorders using speech: A systematic review. Laryngoscope Investigative Otolaryngology, 5(1), 96-116.
- Cummins, N., Matcham, F., Klapper, J., & Schuller, B. (2020). Artificial intelligence to aid the detection of mood disorders. In Artificial Intelligence in Precision Health (pp. 231-255). Academic Press.
- Master’s Assignments: Design of Smart Objects for Hand Rehabilitation after Stroke @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL
Stroke impacts many people and is one of the leading causes of death and disability worldwide and in the Netherlands. The predicted acceleration of population ageing is expected to raise the absolute number of stroke survivors who need care. 80% of all stroke patients suffer from function loss, need professional caregivers, and experience a lower quality of life due to their limited ability to participate in social activities, work and daily activities.
The hand is the highly functional endpoint of the human arm, as it enables a vast variety of daily activities related to a high quality of life. Only 12% of stroke patients recover arm and hand function in the first 6 months. For the remainder, the limited ability to use their hand has a financial and psychological impact on them and their families, as it limits the execution of daily activities. A treatment with substantial evidence for its effectiveness is CIMT (Constraint-Induced Movement Therapy). CIMT usually employs intensive sessions focused on task-specific exercises, combined with constraining the unaffected hand and forcing patients to use their affected hand. CIMT relies on the principle of ‘use it or lose it’ and requires patients to use their affected hand.
So far, attempts at creating effective home training methods have focused on the direct translation of clinical exercises to home training, by designing them to be executed regardless of the patient's location. Monitoring with smart objects accounts for the lack of direct supervision, and gaming and virtual reality elements have been added to make training more challenging. Such methods assume that patients are motivated, able and willing to clear time in their schedule to engage in training, and/or to sit down at a specific location in their house to execute it. We need a new method that applies this principle in a more flexible way, by engaging people in clinically meaningful activities in their daily routine. This way, patients will seamlessly perform functional training activities at a much higher dose than can be achieved in clinics.
Our key objective is to develop a new method using smart objects in which training exercises will be seamlessly integrated into the daily routine of a patient at home.
This method will aim to use the performance of these activities as a functional training set over the day, leading to improved hand function and therefore motivation to perform the activities again in the future. Patients will not have to schedule their training; the exercise will be part of their regular daily activities. We will do this by investigating a way of transferring clinical exercises to a home setting using smart objects. Smart objects can be integrated into the daily activities of patients and trigger (by design) a certain user behaviour. The focus in our proposal is for these objects to go beyond simply monitoring, and to create a stimulating environment where people feel invited to train and intrinsically motivated to perform the task again in the future. Think of a smart toothbrush that is designed to promote the use of the affected hand and operates only when used by that hand! Fundamental research into the transferability of clinical hand rehabilitation to a smart-object home-based setting is needed to theoretically underpin our method. Using smart objects and artificial intelligence, personalised health will become more accessible, and the plurality of data will give future clinicians more flexibility and overall control of the rehabilitation process.
In this assignment, the master's student is expected to:
1. Review literature on existing technologies (sensors, actuators, AI, etc.) of smart objects for rehabilitation to identify gaps/opportunities
2. Specify the requirements for the design of smart daily objects that can drive seamless rehabilitation with the use of technology
3. Design and validate a product concept in a co-design manner including clinicians, users and developers
What do we offer?
We offer an interdisciplinary network of researchers experienced in, among other things, hand rehabilitation and rehabilitation technology (dr. ir. Kostas Nizamis-DPM), artificial intelligence, smart technology and stroke rehabilitation (dr. ir. Juliet A.M. Haarman-HMI), and behaviour change and design research (dr. Armağan Karahanoğlu-IxD). Additionally, the student will collaborate closely with clinicians from Roessingh Research & Development (RRD), who aspire to be the end users of the product.
 S. S. Virani et al., “Heart Disease and Stroke Statistics—2020 Update,” Circulation, vol. 141, no. 9, Mar. 2020.
 S. Sennfält, B. Norrving, J. Petersson, and T. Ullberg, “Long-Term Survival and Function After Stroke,” Stroke, 2019.
 E. R. Coleman et al., “Early Rehabilitation After Stroke: a Narrative Review,” Current Atherosclerosis Reports. 2017.
 “StatLine.” [Online]. Available: https://opendata.cbs.nl/statline/#/CBS/en/. [Accessed: 21-Apr-2020].
 C. M. Koolhaas et al., “Physical activity and cause-specific mortality: The Rotterdam study,” Int. J. Epidemiol., 2018.
 R. Waziry et al., “Time Trends in Survival Following First Hemorrhagic or Ischemic Stroke Between 1991 and 2015 in the Rotterdam Study,” Stroke, 2020.
 A. G. Thrift et al., “Global stroke statistics,” International Journal of Stroke. 2017.
 W. Pont et al., “Caregiver burden after stroke: changes over time?,” Disabil. Rehabil., 2020.
 P. Langhorne, F. Coupar, and A. Pollock, “Motor recovery after stroke: a systematic review,” The Lancet Neurology. 2009.
 M. J. M. Ramos-Lima, I. de C. Brasileiro, T. L. de Lima, and P. Braga-Neto, “Quality of life after stroke: Impact of clinical and sociodemographic factors,” Clinics, 2018.
 Q. Chen, C. Cao, L. Gong, and Y. Zhang, “Health related quality of life in stroke patients and risk factors associated with patients for return to work,” Medicine (Baltimore)., vol. 98, no. 16, p. e15130, Apr. 2019.
 R. Morris and I. Q. Whishaw, “Arm and hand movement: Current knowledge and future perspective,” Frontiers in Neurology, vol. 6, no. FEB, 2015.
 G. Kwakkel, B. J. Kollen, J. V. Van der Grond, and A. J. H. Prevo, “Probability of regaining dexterity in the flaccid upper limb: Impact of severity of paresis and time since onset in acute stroke,” Stroke, 2003.
 J. E. Harris and J. J. Eng, “Paretic Upper-Limb Strength Best Explains Arm Activity in People With Stroke,” Phys. Ther., 2007.
 G. Kwakkel, J. M. Veerbeek, E. E. H. van Wegen, and S. L. Wolf, “Constraint-induced movement therapy after stroke,” The Lancet Neurology. 2015.
 Y. Hidaka, C. E. Han, S. L. Wolf, C. J. Winstein, and N. Schweighofer, “Use it and improve it or lose it: Interactions between arm function and use in humans post-stroke,” PLoS Comput. Biol., vol. 8, no. 2, 2012.
 Y. Levanon, “The advantages and disadvantages of using high technology in hand rehabilitation,” Journal of Hand Therapy. 2013.
 M. Bobin, M. Boukallel, M. Anastassova, M. Ammi, U. Paris-saclay, and F.- Orsay, “Smart objects for upper limb monitoring of stroke patients during rehabilitation sessions .,” no. August 2018, 2017.
 M. Bobin, F. Bimbard, and M. Boukallel, “Smart Health SpECTRUM : Smart ECosystem for sTRoke patient ’ s Upper limbs Monitoring,” Smart Heal., vol. 13, p. 100066, 2019.
 M. Bobin, H. Amroun, M. Boukalle, M. Anastassova, and M. A. Limsi-cnrs, “Smart Cup to Monitor Stroke Patients Activities during Everyday Life,” 2018 IEEE Int. Conf. Internet Things IEEE Green Comput. Commun. IEEE Cyber, Phys. Soc. Comput. IEEE Smart Data, pp. 189–195, 2018.
 G. Yang, J. I. A. Deng, G. Pang, H. A. O. Zhang, and J. Li, “An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning,” IEEE J. Transl. Eng. Heal. Med., vol. 6, no. May, pp. 1–10, 2018.
 L. Pesonen, L. Otieno, L. Ezema, and D. Benewaa, “Virtual Reality in rehabilitation : a user perspective,” pp. 1–8, 2017.
 A. L. Van Ommeren et al., “The Effect of Prolonged Use of a Wearable Soft-Robotic Glove Post Stroke - A Proof-of-Principle,” in Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, 2018.
 G. B. Prange-Lasonder, B. Radder, A. I. R. Kottink, A. Melendez-Calderon, J. H. Buurke, and J. S. Rietman, “Applying a soft-robotic glove as assistive device and training tool with games to support hand function after stroke: Preliminary results on feasibility and potential clinical impact,” in IEEE International Conference on Rehabilitation Robotics, 2017.
 B. Radder, “The Wearable Hand Robot: Supporting Impaired Hand Function in Activities of Daily Living and Rehabilitation,” University of Twente, Enschede, 2018.
- Supporting healthy eating @ HMI - Enschede, NL
Eating is more than the consumption of food. Eating is often a social activity: we sit together with friends, family, colleagues and fellow students to connect, share and celebrate aspects of life. Sticking to a personal diet plan can be challenging in these situations, and the social discomfort associated with having a different diet from the rest of the group greatly contributes to this. Additionally, it is well known that we unconsciously influence each other while we eat. Not just the type of food that we choose: even the quantity of food that we consume, or the speed at which we consume it, is affected by our eating partners.
A variety of assignments focusing on this topic is available; they are specified below.
The interactive dining table
The interactive dining table is created to open up the concept of healthy eating in a social context: where individual table members feel supported, yet still experience a positive group setting. The table is embedded with 199 load cells and 8358 LED lights, located below the table top surface. Machine learning can be applied to the sensor data from the table to detect weight shifts over the course of a meal, identify individual bite sizes and classify interactions between table members and food items. Simultaneously, the LEDs can be used to provide real-time feedback about eating behavior, give perspective regarding eating choices, or alter the ambience of the eating experience as a whole. Light interactions can change over time and between settings, depending on the composition of the group at the table or the type of meal that is consumed.
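As an impression of the kind of signal characteristics involved, the sketch below (synthetic numbers and an illustrative threshold, not real table data) detects bite events as downward steps in the summed load-cell signal under a plate:

```python
# Sketch (synthetic data): detecting bite events as downward steps in the
# load-cell signal under a plate. Real table data would be noisier and
# need filtering or machine learning, but the core signal characteristic
# is the same: each bite removes a roughly bite-sized mass from the plate.
def detect_bites(weights_g, min_drop_g=5.0):
    """Indices where the plate weight drops by at least `min_drop_g`."""
    return [i for i, (a, b) in enumerate(zip(weights_g, weights_g[1:]), start=1)
            if a - b >= min_drop_g]

# plate starts at 350 g; three bites of ~20 g, with small sensor jitter
plate = [350.0, 350.2, 330.1, 330.0, 329.8, 310.2, 310.1, 290.0, 290.1]
bites = detect_bites(plate)
print("bites at samples:", bites, "total eaten:",
      round(plate[0] - plate[-1], 1), "g")
```

Distinguishing bites from serving, plate shifts and cutlery would be part of the "adding intelligence" assignments below.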
An indication of the assignments that are possible within this topic:
- Adding intelligence to the table. Are we able to track the course of the meal over time? This includes questions such as: How much has been put on the plates of all table members? At what times have they taken a bite? How much are they putting on their fork? Are they going in for seconds? Etc.
Keywords: Machine learning, sensor fusion and finding signal characteristics
- Creating LED interactions that provide the user with feedback about his/her behavior. Which type of interactions work for a variety of target groups? How should interactions be shaped, such that a single subject in a group feels supported? How can we implicitly steer people towards healthy(er) behavior, without coercion, or without putting the emphasis of the meal on it?
Keywords: HCI, user experience, co-design, Unity, sensor signals
- Togetherness around eating or `commensality' is a relatively new direction for HCI research. Recent work has distinguished `digital commensality', eating together through digital technology, and `computational commensality', physical or mediated multimodal interaction around eating. We are currently exploring how commensality mediated by technology can be used to support dietary behavior change, in a broader concept than the interactive table alone. How does commensality influence dietary habits and how can this influence of commensality be used and acknowledged in dietary behavior change technology?
Keywords: behavior change strategies, behavior change technology, commensality, technology-mediated commensality
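As a taste of the signal-processing work behind the "adding intelligence to the table" assignment, a bite event might show up as a sudden drop in a plate's weight signal. The sketch below is a hypothetical minimal version: the sampling rate, drop threshold, and toy signal are illustrative assumptions, not properties of the actual table.

```python
import numpy as np

def detect_bites(weight, fs=10.0, min_drop=5.0):
    """Detect bite-like events as sudden drops in a plate's weight signal.

    weight: 1-D array of load-cell readings (grams), sampled at fs Hz.
    min_drop: minimum weight decrease (grams) to count as a bite.
    Returns sample indices where a drop starts.
    """
    diff = np.diff(weight)                      # per-sample weight change
    events = np.where(diff < -min_drop)[0]      # samples with a large drop
    # merge samples less than one second apart into a single event
    keep = [e for i, e in enumerate(events) if i == 0 or e - events[i - 1] > fs]
    return keep

# toy signal: a plate holding 300 g, from which two ~20 g bites are taken
signal = np.concatenate([np.full(50, 300.0), np.full(50, 280.0), np.full(50, 260.0)])
print(detect_bites(signal))  # [49, 99]
```

A real pipeline would add filtering for sensor noise and would need to distinguish bites from items being lifted off or moved around the plate.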
Wearables to automatically log eating behavior
Gaining insight into a person's current eating behavior is a first step towards better health. Professionals still use conventional methods for this, such as logbooks: they ask the user to manually report on their eating behavior throughout the day. Memory and logging bias are not uncommon with this method; often, users simply forget to write down what and when they have been eating. The presence of unknown ingredients in the food, difficulties in estimating portion size, and social discomfort while logging the food also affect the reliability of this method.
One way to lower the chance of bias is to use technology that automatically detects food intake events. Accelerometers on the wrist, strain gauges on the jaw, and RIP sensors that monitor the subject's breathing signal are examples of technologies used to identify intake gestures and chewing/swallowing movements, indicating that the user is eating. Many of these technologies have not yet been tested outside a standardized laboratory environment; their practical validity is therefore often unknown and should be investigated.
- We are currently investigating several detection methods individually, and as a combination. Which methods work well in what type of situations? What type of data processing steps should be taken to get there? We are still measuring at lab-level and want to bring this to an in-the-wild setting.
Keywords: data gathering, user testing, data processing, machine learning
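As an illustration of a typical first processing step for such wrist-worn data, the sketch below segments a 3-axis accelerometer stream into fixed windows and computes simple magnitude features that a downstream classifier could use to flag intake gestures. The sampling rate, window length, and synthetic data are assumptions for illustration only.

```python
import numpy as np

def window_features(acc, fs=50, win_s=2.0):
    """Segment a 3-axis accelerometer stream into fixed-length windows
    and compute simple features (mean and std of the acceleration
    magnitude) per window.

    acc: array of shape (n_samples, 3), in m/s^2.
    Returns an array of shape (n_windows, 2).
    """
    mag = np.linalg.norm(acc, axis=1)            # gravity + movement magnitude
    win = int(fs * win_s)                        # samples per window
    n = len(mag) // win
    windows = mag[: n * win].reshape(n, win)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

# 10 s of synthetic data: rest, then vigorous hand movement
rng = np.random.default_rng(0)
rest = rng.normal(0, 0.05, (250, 3)) + [0, 0, 9.81]
move = rng.normal(0, 2.0, (250, 3)) + [0, 0, 9.81]
feats = window_features(np.vstack([rest, move]))
print(feats.shape)  # (5, 2); windows with high std are movement candidates
```

In practice these features would feed a classifier trained on annotated eating episodes; magnitude statistics alone cannot separate intake gestures from other hand movements.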
Cooking skills and kitchen habits
Eating is often the end-point of preparing a meal. Eating healthy often starts by cooking healthy and picking your ingredients. But what if you do not have the advanced cooking skills that are needed to follow a certain recipe or prepare the ingredients correctly? What if your cooking skills hold you back from trying out new recipes? What if your perception of your eating habits differs from your actual eating habits? For instance, what if your kitchen habits are such that you always grab a bag of chips once you arrive home from work, or that you very often consume snacks while you might think this only happens occasionally during the week?
By tracking and processing data that is gathered in and around the kitchen area, we could gain better insights into the eating habits of individuals. This might be an important step in supporting the individual towards healthier behavior.
- We are currently investigating several technologies and measurement set-ups that are needed to support this type of research. What type of sensors should be placed at what locations in the house? How do they communicate with each other? What type of information should be gathered, and what could serve as a trigger for feedback towards the user? In what way can we support the user in choosing different ingredients, trying out new recipes, or breaking their unwanted eating habits?
Keywords: Design of sensor systems, prototyping, data gathering, data processing, user interactions
Operationalizing behavior change strategies
We are exposed daily to many strategies that try to support or influence us in changing our behavior. Just think of the app that wants you to set a 'goal for the week' (or sets it for you), or the website that informs you that 'there is only one left!'. These features are usually based on a theoretical understanding of what influences us. For example, a goal setting feature can be based on goal setting theory. However, goal setting theory argues that for goals to motivate us, they have to be feasible as well as challenging. Another theory that is used as a theoretical underpinning of a feature is social comparison theory, which argues that people can be motivated by comparing themselves to others (through upward, downward, or lateral comparison). An example of how this is implemented is a leaderboard, where you can see how you are doing with respect to a certain statistic. However, is a leaderboard really a good operationalization of social comparison theory? And is an app with a textbox where you can set a goal really a good operationalization of goal setting theory? What can we learn from the features about the theory they are based on when these features work or do not work?
- We would like these questions investigated, both for the theories and features mentioned as examples above and for other features and theories.
Keywords: behavior change theory, design, behavior change strategies, behavior change technology
- CHATBOTS FOR HEALTHCARE – THE eCG FAMILY CLINIC @ HMI - Enschede, NL in cooperation with UMCU - Utrecht, NL
In collaboration with Universitair Medisch Centrum Utrecht we will design and develop the eCG family clinic: the electronic Cardiovascular Genetic family clinic to facilitate genetic screening in family members. In inherited cardiovascular diseases, first-degree relatives are at 50% risk of inheriting the disease-causing mutation. For these diseases, preventive measures and treatment options are readily available and effective. Relatives may undergo predictive DNA testing to find out whether they carry the mutation. More than half of at-risk relatives do not attend genetic counselling and/or cardiac evaluation.
To increase the number of people attending genetic counselling and/or cardiac evaluation, the eCG family clinic will be developed. The eCG Family Clinic is an online platform where family members are provided with general information (e.g. on the specific family disease, mode of inheritance, pros and cons of genetic testing, and the testing procedure). The users of the platform will be able to interact with a chatbot.
Within this research project we have student assignments available such as:
· Designing and developing a chatbot and its functions and roles within the platform
· Translating current treatment protocols into prototypes of the chatbot
· Evaluating user experience and user satisfaction
We are open to alternative assignments or perspectives on the example assignments above.
Contact person: Randy Klaassen, firstname.lastname@example.org
- Touch Interactions and Haptics @ HMI - Enschede, NL
In daily life, we use our sense of touch to interact with the world and everything in it. Yet, in Human-Computer Interaction, the sense of touch is somewhat underexposed; in particular when compared with the visual and auditory modalities. To advance the use of our sense of touch in HCI, we have defined three broad themes in which several assignments (Capita Selecta, Research Topics, Graduation Projects) can be defined.
Designing haptic interfaces
Many devices use basic vibration motors to provide feedback. While such motors are easy to work with and sufficient for certain applications, advances in current manufacturing technologies (e.g. 3D printing) and in electronics provide opportunities for creating new forms of haptic feedback. Innovative forms of haptic feedback may even open up completely new application domains. The challenge for the students is twofold: (1) exploring the opportunities and limitations of (combinations of) materials, textures, and (self-made) actuators, and (2) coming up with potential use cases.
Multimodal perception of touch
The experience of haptic feedback may not only be governed by what is sensed through the skin, but may also be influenced by other modalities; in particular by the visual modality. VR and AR technologies are prime candidates for studying touch perception, and haptic feedback is even considered ‘the holy grail’ for VR. Questions surrounding for instance body ownership in VR, or visuo-haptic illusions in VR (e.g. elongated arms, a third arm) can be interesting starting points for developing valuable multimodal experiences, and for studying the multimodal perception of touch.
Touch as a social cue
Research in psychology has shown that social touch (i.e. being touched by another person) can profoundly influence both the toucher and the recipient of a touch (e.g. decreasing stress, motivating, or showing affect). Current technologies for remote communication could potentially be enriched by adding haptic technology that allows for social touch interactions to take place over a distance. In addition, with social robots becoming more commonplace in both research and everyday life, the question arises how we should engage in social touch with such social robots in a beneficial, appropriate and safe manner. Applications of social touch technology can range from applications related to training and coaching, to entertainment, and to providing care and intimacy. Potential projects in this domain could focus on the development of new forms of social touch technology (interactions), and/or on the empirical investigation of the effects such artificial social touch interactions can have on people.
Contact: Dirk Heylen
- Wearables and tangibles assisting young adults with autism in independent living @ IDE - Enschede, NL
In this project we seek socially capable and technically smart students with an interest in technology and health care, to investigate how physical-digital technology may support young adults with autism (age 17-22) in developing independence in daily living. In this project we build further on insights from earlier projects such as Dynamic Balance and MyDayLight.
(see more about both projects here: http://www.jellevandijk.org/embodiedempowerment/ )
Your assignment is to engage in participatory design in order to conceptualize, prototype and evaluate a new assistive product concept, together with young adults with autism, their parents, and health professionals. You can focus more on the design of concepts, the prototyping of concepts, technological work on building an adaptive, flexible platform that can be personalized by each individual user, or developing the 'co-design' methods we use with young adults with autism, their parents, and the care professionals.
As a starting point we consider opportunities of wearables with bio-sensing in combination with ambient intelligent objects (internet-of-things e.g. interactive light, ambient audio) in the home.
The project forms part of a research collaboration with Karakter, a large youth psychiatric health organization and various related organizations, who will provide participating families. One goal is to present a proof-of-concept of a promising assistive device – another goal is to explore the most suitable participatory design methods in this use context. Depending on your interests you can focus more on the product or on the method. The ultimate goal of the overall research project is to realize a flexible, adaptive interactive platform that can be tailored to the needs of each individual user – this master project is a first step into that direction.
- Interactive Surfaces and Tangibles for Creative Storytelling @ HMI - Enschede, NL
In the research project coBOTnity a collection of affordable robots (called surfacebots) was developed for use in collaborative creative storytelling. Surfacebots are moving tablets embodying a virtual character. Using a moving tablet allows us to show a digital representation of the character's facial expressions and intentions on screen, while also allowing the character to move around in a physical play area.
The surfacebots offer diverse student assignment opportunities in the form of Capita Selecta, HMI project, BSc Research or Design project, or MSc thesis research. These assignments can deal with technology development aspects, empirical studies evaluating the effectiveness of some existing component, or a balance of both types of work (technology development + evaluation).
As a sample of what could be done in these assignments (but not limited to this), students could be interested in developing new AI for the surfacebot to become more intelligent and responsive in the interactive space, studying interactive storytelling with surfacebots, developing mechanisms to orchestrate multiple surfacebots as a means of expression (e.g. to tell a story), evaluating strategies to make the use of surfacebots more effective, or developing and evaluating an application to support users' creativity/learning.
You can find more information about the coBOTnity project at: https://www.utwente.nl/ewi/hmi/cobotnity/
Contact: Mariët Theune (email@example.com)
- Interpersonal engagement in human-robot relations @ HMI - Enschede, NL
Modern media technology enables people to have social interactions with the technology itself. Robots are a new form of media that people can communicate with as independent entities. Although robots are becoming naturalized in social roles involving companionship, customer service and education, little is known about the extent to which people can feel interpersonal closeness with robots and how social norms around close personal acts apply to robots. What behaviors do people feel comfortable to engage in with robots that have different types of social roles, like companion robot, customer service robot and teacher robot? Will robots that people can touch, talk to, lead and follow result in social acceptance of behaviors that express interpersonal closeness between a person and a robot? Are such behaviors intrinsically rewarding when done with a responsive robot?
Contact: Dirk Heylen
- Sports, Data, and Interaction: Interaction Technology for Digital-Physical Sports Training @ HMI - Enschede, NL
The proposed project focuses on new forms of (volleyball and other) sports training. Athletes perform training exercises in a “smart sports hall” that provides high quality video display across the surface of the playing field and has unobtrusive pressure sensors embedded in the floor, or using smart sports setups such as immersive VR with a rowing machine. A digital-physical training system offers tailored, interactive exercise activities. Exercises incorporate visual feedback from the trainer as well as feedback given by the system. They can be tailored through a combination of selection of the most fitting exercises and setting the right parameters. This allows the exercises to be adapted in real time in response to the team’s behaviour and performance, and to be selected and parameterized fitting to their levels of competition and to demands of, e.g., youth sport. To this end, expertise from the domains of embodied gaming and instruction and pedagogy in sports training are combined. Computational models are developed for the automatic management of personalization and adaptation; initial validation of such models is done by repeatedly evaluating versions of the system with athletes of various levels. We collect, and automatically analyse, data from the sensors to build continuous models of the behaviour of individual athletes as well as the team. Based on this data, the trainer or system can instantly decide to change the ongoing exercises, or provide visual feedback to the team via the displays and other modalities. In extrapolation, we foresee future development towards higher competition performance for teams, by building upon the basic principles and systems developed in this project.
Assignments in this project can focus on user studies, automatic behaviour detection from sensors, novel interactive exercise design, or other related topics.
Contact person: Dees Postma, firstname.lastname@example.org, Dennis Reidsma, email@example.com
- Dialogue and Natural Language Understanding & Generation for Social and Creative Applications @ HMI - Enschede, NL
Applications involving the processing and generation of human language have become increasingly better and more popular in recent years; think for example of automatic translation and summarization, or of the virtual assistants that are becoming a part of everyday life. However, dealing with the social and creative aspects of human language is still challenging. We can ask our virtual assistant to check the weather, set an alarm or play some music, but we cannot have a meaningful conversation with it about what we want to do with our life. We can feed systems with big data to automatically generate texts such as business reports, but generating an interesting and suspenseful novel is another story.
At HMI we are generally open to supervising different kinds of assignments in the area of dialogue and natural language understanding & generation, but we are specifically interested in research aimed at social and creative applications. Some possible assignment topics are given below.
Conversational agents and social chatbots. The interaction with most current virtual assistants and chatbots (or 'conversational agents') is limited to giving them commands and asking questions. What we want is to develop agents you can have an actual conversation with, and that are interesting to engage with. An important question here is: how can we keep the interaction interesting over a longer period of time? Assignments in this area can include question generation for dialogue (so the agent can show some interest in what you are telling them), story generation for dialogue (so the agent can make a relevant contribution to the current conversation topic) and user modeling via dialogue (so the agent can get to know you). The overall goal is to create (virtual) agents that show verbal social behaviours. In the case of embodied agents, such as robots or virtual characters, we are also interested in the accompanying non-verbal social behaviours.
Affective language processing or generation. Emotions are part of everyday language, but detecting emotions in a text, or having the computer produce emotional language, are still challenging tasks. Assignments in this area include sentiment analysis in texts, for Dutch in particular, and generating emotional language, for example in the context of games (emotional character dialogue or 'flavor text', as explained below) or of automatically generated soccer reports.
Creative language generation. Here we can think of generating creative language such as puns, jokes, and metaphors but also stories. It is already possible to generate reports from data (for example sports or game-play data) but such reports tend to be boring and factual. How can we give them a more narrative quality with a nice flow, atmosphere, emotions and maybe even some suspense? Instead of generating non-fiction based on real-world data, another area is generating fiction. An example is generating so-called 'flavor text' for use in games. This is text that is not essential to the main game narrative, but creates a feeling of immersion for the player, such as fictional newspaper articles and headlines or fake social media messages related to the game. Another example of fiction generation is the generation of novel-length stories. Here an important challenge is how to keep the story coherent, which is a lot more difficult for long texts than for short ones.
Contact: Mariët Theune (firstname.lastname@example.org)
- Group activity detection @ HMI - Enschede, NL
Social interaction is necessary for both mental and physical health, and participating in group activities encourages social interaction. While there are opportunities to attend a variety of different group activities, some people prefer solitary activities. This project aims to design an algorithm that extracts patterns of group and solitary activities from GPS (Global Positioning System) and motion sensors, including accelerometer, gyroscope, and magnetometer. The extracted pattern would enable us to detect whether an individual is involved in a group or solitary activity.
This project is defined within a larger project, the Schoolyard project, in which we captured data from pupils in the school playground during the break via GPS and motion sensors. The collected data will be used to validate the designed algorithm. You need to be creative in designing the method to cover different types of group activities in the playground, including parallel games (e.g., swings), ball games (e.g., football), tag games (e.g., catch and run), etc.
The research involves steps such as:
· Literature review
· Data preparation and identifying benchmark datasets
· Designing an algorithm to identify the group activity patterns
· Validating the result via ground truth/simulated data/benchmark datasets
We are looking for candidates that match the following profile:
· Having a creative mindset
· Strong programming skills in Python
Many recent studies have focused on detecting group activities from videos. However, using videos to detect activity is computationally expensive and raises serious privacy concerns. A related paper on this topic used motion sensors combined with beacons to identify group activity.
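As a minimal illustration of one possible building block for such an algorithm, the sketch below groups individuals whose positions at a single time step lie within a small radius of each other (single-linkage clustering via union-find). The 2 m radius and the toy coordinates are assumptions; real GPS data would additionally need smoothing, temporal persistence, and motion-sensor cues to separate parallel play from genuine group play.

```python
import numpy as np

def colocated_groups(positions, radius=2.0):
    """Group individuals whose positions lie within `radius` metres of
    each other at one time step (single-linkage, via union-find).

    positions: array of shape (n_people, 2), x/y coordinates in metres.
    Returns a list of sets of person indices.
    """
    n = len(positions)
    parent = list(range(n))

    def find(i):                    # find the root of i's group
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):              # union all pairs within the radius
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= radius:
                parent[find(i)] = find(j)

    groups = {}                     # collect members per root
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

pos = np.array([[0, 0], [1, 0], [10, 10], [11, 10], [50, 50]])
print(colocated_groups(pos))  # [{0, 1}, {2, 3}, {4}]
```

Chaining these per-time-step groups over consecutive timestamps would yield candidate group activities; a singleton set persisting over time would indicate solitary activity.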
For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: email@example.com.
You will be jointly supervised by Dr. Gwenn Englebienne, Assistant Professor, University of Twente, with external supervision by Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, Ph.D. candidate at Leiden University.
- A Framework for Longitudinal Influence Measurement between Spatial Features and Social Networks @ HMI - Enschede, NL
The features of the environment may enhance or discourage social interactions among people. The question is how environmental features influence social participation and how the influence may vary over time. To answer this question, you need to design a framework that combines features of the spatial network with the parameters of the social network while addressing the longitudinal characteristics of such a combination.
To the best of our knowledge, no study has been conducted on analyzing the longitudinal influence between social networks and spatial features of the environment.
This project is defined within a larger project, the Schoolyard project, in which we observed the behavior of children in the playground using RFID tags and GPS loggers. The RFID tags are used to build a social network. The longitudinal influence between the social network and spatial features may be analyzed in three stages: (1) before the renovation, (2) after the renovation, and (3) after adaptation of the playground. The collected data can be used to validate the designed framework.
We are looking for candidates that match the following profile:
· Knowledge about network analysis
· Knowledge about multilevel time series analysis
· Strong programming skills in Python
A related paper presents a general framework for measuring the dynamic bidirectional influence between communication content and social networks; the authors used a publication database to construct the social network and studied its relationship with the communication content longitudinally.
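As a very first probe of such longitudinal influence, one could cross-correlate a spatial-feature time series with a social-network time series at several lags. The sketch below is an illustrative assumption of how that probe might look, not the required framework; lagged correlation alone does not establish causal influence (multilevel time series models would be the next step).

```python
import numpy as np

def lagged_corr(spatial, social, max_lag=5):
    """Pearson correlation between a spatial-feature series (e.g. daily
    use of a play structure) and a social-network series (e.g. daily
    mean degree) at several lags. A peak at a positive lag suggests the
    spatial series leads the social one. Returns {lag: r}.
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:                                  # spatial leads social
            a, b = spatial[: len(spatial) - lag], social[lag:]
        else:                                         # social leads spatial
            a, b = spatial[-lag:], social[:lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# synthetic example: the social series follows the spatial one by 2 days
rng = np.random.default_rng(1)
spatial = rng.normal(size=100)
social = np.roll(spatial, 2) + rng.normal(scale=0.1, size=100)
corrs = lagged_corr(spatial, social)
print(max(corrs, key=corrs.get))  # 2
```

The same scan run in both directions, per renovation stage, would give a first indication of whether influence is bidirectional and whether it changes over time.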
For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: firstname.lastname@example.org
You will be jointly supervised by Dr. Shenghui Wang, Assistant Professor, University of Twente, with external supervision by Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, Ph.D. candidate at Leiden University.
Here you find some of the organisations that are willing to host master students from HMI. Keep in mind that you are not allowed to have both an external (non-research institute) internship and an external final assignment. If you work for a company that is interested in providing internships or final assignments please contact D.K.J.Heylen[at]utwente.nl
- iTech for Understanding: Making Complex Industrial Production Processes Insightful through Interactive Technology @ Trumpf - Hengelo, NL
Trumpf is a global high-tech company which specialises in machining tools and lasers. Their software solutions pave the way towards ’Smart Factories’, facilitating the implementation of high-tech processes in industrial electronics.
As their technology develops and evolves, the products and services that Trumpf offers become increasingly complex to understand, and the challenge of communicating their unique offerings grows with it. That is why Trumpf is looking for interns who are able to interactively communicate intricate products and production processes to an interested audience. For this internship, you will design interactive technology (potentially through the use of VR) to give Trumpf's technology a human touch. Your work will be displayed during the TechniShow, the trade event for industrial technology.
More information about Trumpf can be found at https://www.trumpf.com/nl_NL/
An internship fee applies.
For more information please reach out to:
Dees Postma - University of Twente (email@example.com)
- Personalization of a mental eHealth intervention for older adults who lost their spouse @ Roessingh Research and Development – Enschede, NL
Roessingh Research and Development (RRD) is an internationally recognized scientific research institute with a focus on revalidation technology and eHealth, working on current and future innovations in rehabilitation and chronic care. RRD occupies a unique position between the university and healthcare practice. Together with eight international partners in Switzerland, Portugal, and The Netherlands, RRD is in the midst of developing an online service for mourning older adults that features a conversational agent and adapts its content to the user’s preferences and (clinical) needs. This service is called LEAVES.
At RRD, you have the opportunity to work on the personalization of an online self-help mental eHealth application for older adults who lost their spouse. Ultimately, everyone takes a different road through the valley of grief, but nobody should walk it alone.
In a previous study, RRD brainstormed adaptation mechanisms with an international expert panel in the field of grief and eHealth, involving experts from academia and clinical practice alike. The study yielded a conceptual adaptation model. In this project, you will apply the adaptation model to the recently deployed minimum viable product (MVP) of LEAVES. The model defines four types of adaptations on an abstract level (e.g., changing the order in which the user is exposed to intervention content). Applying the model to LEAVES involves specifying how its mechanisms can be implemented in LEAVES, (iteratively) building software and/or design prototypes (visual and/or functional) based on our current code base, and evaluating your prototypes in a qualitatively rich fashion with our target group (i.e., (mourning) older adults) and/or clinical experts, with respect to user experience and/or expected clinical efficacy.
We are looking for enthusiastic students with a background in design, computer science, psychology, or human-computer interaction, an interest in (mental) eHealth, and (software, UI) prototyping skills. Speaking Dutch is not required, but desirable. In addition, for the duration of the graduation project, you will be part of the LEAVES consortium, meaning that you will join our international project meetings and gain insight into the activities of a European research project.
van Velsen, L., Cabrita, M., Op den Akker, H., Brandl, L., Isaac, J., Suárez, M., ... & Canhão, H. (2020). LEAVES (optimizing the mentaL health and resiliencE of older Adults that haVe lost thEir spouSe via blended, online therapy): Proposal for an Online Service Development and Evaluation. JMIR research protocols, 9(9), e19344.
- Internships at Soundlab, for music/sound-minded creative designers and builders @ Wilmink theatre, ArtEZ conservatory Enschede and HMI, NL
Wilmink theatre and ArtEZ conservatory Enschede collaborate in the development of Soundlab Enschede: a semi-permanent workshop for sound exploration and interactive sound experience. At Soundlab, children and adults explore sound and music through existing and new acoustic and technology-enhanced or digital musical instruments, and interactive sound installations.
Two years ago we started with the development of Soundlab. During the course of four months, five music teachers-in-training and a UT student of MSc Interaction Technology collaboratively created a singing-app and a music game for the interactive projection floor at the entrance of the Designlab. Last year this was continued with seven music teachers-in-training and two UT students of MSc Interaction Technology who collaboratively created interactive sound installations.
This year we want to continue development and expand the number of available technologies visitors of Soundlab can explore. Therefore, we have two internships vacant.
As an intern
- You will help to innovate music education
- You will work in collaboration with ArtEZ students (music teachers) and supervisors of Soundlab (both ArtEZ and Wilmink Theatre)
- You will work partly at Wilmink theatre and ArtEZ Conservatory Enschede (Muziekkwartier), and partly at the UT (e.g. DesignLab)
- You will participate in (further) developing and building the interaction technology of previous years
- You will design and build new interactive technology, including ideation, tests, and evaluation together with students of ArtEZ and children of primary schools
Partner of Soundlab Enschede is the already established Soundlab in Amsterdam (see:
Contact Benno Spieker (ArtEZ Conservatory and PhD student at UT) for more info: firstname.lastname@example.org
- Digital solutions for livestock management @ Nedap - Groenlo, NL
Nedap Livestock Management develops digital solutions for, among others, dairy farming and pig farming. They are open to internships and thesis projects; some examples of possible project topics can be found below. If you are interested, feel free to contact Robin Aly for more information.
The goal of this project is to define a product concept for virtual fencing. Cows on pastures need sufficient feed to graze, and farmers are faced with the challenge of managing the available land so that the herd constantly has enough feed available. Traditional approaches to this problem move physical fences to direct a herd to other pastures. However, this process is labor intensive and slow to react to changing environments. Virtual fencing [1-4] has recently been defined as a means of interacting with cows based on their location, using a reward and punishment system to give them incentives to move to more suitable pastures. This project will investigate solutions for a virtual fencing product.
The project will start with an ideation process with farmers to define potentially feasible cow-locating solutions, ways to define a virtual fence, and ways to interact with cows to steer them. In a second step, at least one of these ideas will be extended to a high-fidelity prototype and evaluated for its performance.
 Anderson, D. M. (2007). Virtual fencing - past, present and future. The Rangeland Journal, 29(1), 65-78.
 Campbell, D. L., Lea, J. M., Haynes, S. J., Farrer, W. J., Leigh-Lancaster, C. J., & Lee, C. (2018). Virtual fencing of cattle using an automated collar in a feed attractant trial. Applied animal behaviour science, 200, 71-77.
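At its core, the virtual-fence idea above reduces to a geometric test: is the animal inside the permitted area? As a minimal sketch (not a Nedap design), a ray-casting point-in-polygon test could decide when to trigger a cue; the paddock coordinates and the `inside_fence` name are invented for illustration.

```python
def inside_fence(point, polygon):
    """Ray-casting test: is the (x, y) point inside the polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a horizontal ray extending right from the point
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

paddock = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]  # invented rectangle
print(inside_fence((5.0, 4.0), paddock))   # True: inside, no cue needed
print(inside_fence((12.0, 4.0), paddock))  # False: outside, trigger a cue
```

In a real product, the cue on leaving the fence would be the reward/punishment signal discussed above, and GPS noise would require hysteresis around the boundary.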
The goal of this project is to provide means for farmers to locate their pigs. Nowadays, professional pig farms can have thousands of pigs. When pigs are kept in group housing, farmers are often faced with the task of locating individual animals in these groups, for example to diagnose or treat illnesses. Locating pigs, however, is currently a cumbersome and time-consuming task, as groups can be large and identifying an individual pig can only be done up close. This project requires end-to-end development of a product concept following the design thinking process, including ideation with stakeholders and creation of a prototype.
Potential solutions to pig locating include centralized positioning systems, collaborative positioning systems, systems that detect crossings between demarcated areas, and systems sensing the coarse area where a pig resides. These solutions are constrained by the investment they require from the farmer and differ in how well they satisfy the farmer's need for pig locating. Therefore, the project will start with an ideation session with farmers and other stakeholders to define a suitable solution space. The ideation will be supported by low-fidelity prototypes that facilitate the discussion about the concepts. An important output of the ideation is also the identification of key performance measures that can be used to judge the quality of a system.
Based on the output of the ideation step, at least one high-fidelity prototype will have to be developed and evaluated.
 Zhuang, S., Maselyne, J., Van Nuffel, A., Vangeyte, J., & Sonck, B. (2020). Tracking group housed sows with an ultra-wideband indoor positioning system: A feasibility study. Biosystems Engineering, 200, 176-187.
 Koyuncu, H., & Yang, S. H. (2010). A survey of indoor positioning and object locating systems. IJCSNS International Journal of Computer Science and Network Security, 10(5), 121-128.
 Fukuju, Y., Minami, M., Morikawa, H., & Aoyama, T. (2003, May). DOLPHIN: An autonomous indoor positioning system in ubiquitous computing environment. In Proceedings IEEE Workshop on Software Technologies for Future Embedded Systems. WSTFES 2003 (pp. 53-56). IEEE.
 Mainetti, L., Patrono, L., & Sergi, I. (2014, September). A survey on indoor positioning systems. In 2014 22nd international conference on software, telecommunications and computer networks (SoftCOM) (pp. 111-120). IEEE.
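A classic building block behind the centralized positioning option (as in the UWB study by Zhuang et al. above) is trilateration: computing a position from measured distances to fixed anchors. A minimal 2D sketch with invented anchor positions and noise-free ranges; it assumes exactly three non-collinear anchors.

```python
import math

def trilaterate(anchors, dists):
    """2D position from three anchors and measured distances, via the
    linearized range equations (subtract the first from the other two)."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    # 2x2 linear system: 2(xi-x0)x + 2(yi-y0)y = d0^2 - di^2 + xi^2+yi^2 - x0^2-y0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a11 * a22 - a12 * a21  # zero if anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

anchors = [(0, 0), (10, 0), (0, 10)]  # invented barn anchors (metres)
dists = [math.hypot(3, 4), math.hypot(7, 4), math.hypot(3, 6)]  # exact ranges
print(trilaterate(anchors, dists))  # recovers the position (3.0, 4.0)
```

Real UWB ranges are noisy, so a deployed system would use more anchors and a least-squares or filtering solution rather than this exact solve.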
- Adding Speech to Multi Agent Dialogues with a Council of Coaches @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL
In the EU project Council of Coaches (COUCH) we are developing a team of virtual coaches that can help older adults achieve their health goals. Each coach offers insight and advice based on their expertise. For example, the activity coach may talk about the importance of physical exercise, while the social coach may ask the user about their friends and family. Our system enables fluent multi-party interaction between multiple coaches and our users; in addition to talking directly with the user, the coaches may also have dialogues amongst themselves. Integration of full spoken interaction with the platform developed in COUCH will make a major leap possible for our embodied agent projects.
More information: https://council-of-coaches.eu/project/overview/
Currently in COUCH the user interacts with the coaches by selecting one of several predefined multiple-choice responses on a tablet or computer interface. Although this is a reliable way to capture input from the user, it may not be ideal for our target user group of older adults. Perhaps spoken dialogue can offer a better user experience?
In the past, researchers found it quite difficult to sustain dialogues that relied on automatic speech recognition (ASR) (e.g. see Bickmore & Picard, 2005). However, recent commercial systems like Apple’s Siri and Amazon’s Alexa offer considerable improvements in recognising users’ speech. Such state-of-the-art systems might now be sufficiently reliable to support high-quality spoken dialogues between our coaches and the user.
In your project you will adapt the COUCH system to support spoken interactions. In addition to incorporating ASR, you will investigate smart ways to organise the dialogue to facilitate adequate recognition in noisy and uncertain settings while keeping the conversation going. Finally, you will evaluate the user experience and the quality of dialogue progress in various settings, and thereby the suitability of state-of-the-art speech recognition for live, open-setting spoken conversation.
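One common way to keep a conversation going despite uncertain recognition is to branch on the recogniser's confidence score: accept confident transcripts, confirm borderline ones, and fall back to the existing multiple-choice interface otherwise. A hypothetical sketch; the function name and threshold values are assumptions, not the COUCH API.

```python
CONFIRM_THRESHOLD = 0.75   # above this, trust the transcript as-is (assumed value)
REJECT_THRESHOLD = 0.40    # below this, recognition is too unreliable (assumed value)

def handle_user_turn(transcript, confidence):
    """Decide how a coach responds, given an ASR transcript and its confidence."""
    if confidence >= CONFIRM_THRESHOLD:
        return ("accept", transcript)
    if confidence >= REJECT_THRESHOLD:
        # Middle band: ask the user to confirm before acting on the input
        return ("confirm", f"Did you say: '{transcript}'?")
    # Too unreliable: fall back to the multiple-choice interface
    return ("fallback", "Sorry, could you pick one of these options instead?")

print(handle_user_turn("I walked for an hour", 0.91))
print(handle_user_turn("I word for an hour", 0.55))
print(handle_user_turn("...", 0.20))
```

Part of the project would be finding dialogue designs and thresholds that work for older adults in practice, rather than these illustrative values.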
You will carry out the work in a collaboration between Roessingh Research and Development (http://www.rrd.nl/) and researchers of the Human Media Interaction group at the University of Twente.
Contact: Dennis Reidsma (email@example.com)
 Timothy W. Bickmore and Rosalind W. Picard. 2005. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact. 12, 2 (June 2005), 293–327. DOI: https://doi.org/10.1145/1067860.1067867
- Large-scale data mining & NLP @ OCLC - Leiden, NL
OCLC is a global library cooperative that provides shared technology services, original research and community programs for its membership and the library community at large. Collectively with member libraries, OCLC maintains WorldCat, the world’s most comprehensive database of information about library collections. WorldCat now hosts more than 460 million bibliographic records in 483 languages, aggregated from 18,000 libraries in 123 countries.
As WorldCat continues to grow, OCLC is actively exploring data science, advanced machine learning, linked data and visualisation technologies to improve data quality, transform bibliographic descriptions into actionable knowledge, provide more functionality for professional cataloguers, and develop more services for end users of the libraries.
OCLC is constantly looking for students who are enthusiastic to advance AI technologies for library and other cultural heritage data. Examples of student assignments are:
- Fast and scalable semantic embedding for Information Retrieval
- Multi-label Text Classification (XMTC) for automatic subject prediction
- Image captioning for Cultural Heritage collections
- Entity extraction and disambiguation
- Entity matching across different media (e.g. books, articles, cultural heritage objects, etc.) or across languages
- Hierarchical clustering of bibliographic records
- Constructing knowledge graphs around books, authors, subjects, publishers, etc.
- Interactive visualisation of library data on geographic maps and/or along a time dimension
- Concept drift (i.e., how meaning changes over time) and its effects on Information Retrieval
- Scientometrics-related topics based on co-authoring networks and/or citation networks
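To illustrate the retrieval side of the semantic-embedding assignment above: embedding-based search ranks records by vector similarity to the query. A toy sketch with made-up three-dimensional vectors; a production system would use learned embeddings and an approximate nearest-neighbour index rather than a full sort.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Made-up "embeddings" of three bibliographic records
records = {
    "rec1": [0.9, 0.1, 0.0],
    "rec2": [0.2, 0.8, 0.1],
    "rec3": [0.1, 0.2, 0.9],
}
query = [1.0, 0.0, 0.1]  # made-up embedding of the user's query

ranked = sorted(records, key=lambda r: cosine(query, records[r]), reverse=True)
print(ranked)  # records ordered by similarity to the query: rec1 first
```

The "fast and scalable" part of the assignment is exactly what this sketch omits: doing this over hundreds of millions of WorldCat records.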
More details are available on request.
Contact: Shenghui Wang
- Robotics and mechatronics @ Heemskerk Innovative Technology - Delft, NL
Heemskerk Innovative Technology provides advice and support to innovative high-tech projects in the field of robotics and mechatronics. Our mission: convert basic research into innovative business concepts and real-world applications, creating solutions that perform actions in places people cannot reach themselves, making the world smaller and better integrated in an intuitive way.
Example assignments (to be carried out in the first half of 2021):
Current assignments focus on user-robot interaction, object detection and autonomous object manipulation in real-life settings, and human detection and tracking for navigation in human-populated environments, as part of developing the ROSE healthcare robot. A background in C++/Python and ROS is a plus for students working on these assignments.
Contact: Mariët Theune (EEMCS) <firstname.lastname@example.org>
- Tasty Bits 'n' Bytes: Food Technology @ het Lansink - Enschede, NL
The current popularity of ICTs that offer augmented or virtual reality experiences, such as Oculus Rift, Google Glass, and Microsoft Hololens, suggests that these technologies will become increasingly commonplace in our daily lives. This raises the question of how mixed reality technologies can benefit us in our day-to-day activities. One such activity that could take advantage of mixed reality technologies is the consumption of food and beverages. Because the perception of food is highly multisensory, governed not only by taste and smell but to a strong degree by our visual, auditory and tactile senses, mixed reality technologies could be used to enhance our dining experiences. In the Tasty Bits and Bytes project we will explore the use of mixed reality technology to digitally enhance the experience of consuming food and beverages, using visual, auditory, olfactory, and tactile stimuli.
The setting for these challenges and projects is a mixed reality restaurant table at Het Lansink that hosts a variety of technologies to provide a novel food and beverage experience.
Assignments that can be carried out in collaboration with Het Lansink concern, for example: actuated plates; projection mapping on the table; force feedback; and multimodal taste sensations.
Contact: Dirk Heylen, Juliet Haarman
- Addiction, Coaching and Games @ Tactus - Enschede, NL
Tactus is specialized in the care and treatment of addiction. They offer help to people who suffer from problems as a result of their addiction to alcohol, drugs, medication, gambling or eating. They help by identifying addiction problems as well as preventing and breaking patterns of addiction. They also provide information and advice to parents, teachers and other groups on how to deal with addiction.
Assignment possibilities include developing game-like support and coaching apps.
Contact: Randy Klaassen
- Enhancing Music Therapy with Technology @ ArtEZ - Enschede, NL
ArtEZ School of Music has a strong department in Neurologic Music Therapy, which not only trains new therapists but also engages in fundamental research towards evaluating and improving the impact of Music Therapy.
Music is a powerful tool for influencing people, not only because music is pleasant, but also because it has actual neurological effects on the motor system. That is why music therapy is also used as a rehabilitation instrument for people with various conditions. In this assignment you will work with professional music therapists to develop interactive products that enrich music therapy sessions for various purposes.
Possibilities include design, development and research assignments on systems such as home practice applications, sound spaces for embodied training, sensing to provide insights to the therapist and/or feedback to the client, etcetera.
Contact: Dennis Reidsma
- Stories and Language @ Meertens Institute - Amsterdam, NL
The Meertens Institute, established in 1926, has been a research institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 1952. It studies the diversity in language and culture in the Netherlands, with a focus on contemporary research into factors that play a role in determining social identities in Dutch society. Its main fields are:
- ethnological study of the function, meaning and coherence of cultural expressions
- structural, dialectological and sociolinguistic study of language variation within Dutch in the Netherlands, with the emphasis on grammatical and onomastic variation.
Apart from research, the institute also concerns itself with documentation and providing information to third parties in the field of Dutch language and culture. It has a large library with numerous collections and an extensive documentation system, of which databases form a substantial part.
Assignments include text mining, classification and language technology, but also usability and interaction design.
Website of the institute: http://www.meertens.knaw.nl/cms/
Contact: Mariët Theune
- Language and Retrieval @ Elsevier - Amsterdam, NL
Elsevier is the world's biggest scientific publisher, established in 1880. Elsevier publishes over 2,500 impactful journals including Tetrahedron, Cell and The Lancet. Flagship products include ScienceDirect, Scopus and Reaxys. Increasingly, Elsevier is becoming a major scientific information provider. For specific domains, structured scientific knowledge is extracted for querying and searching from millions of Elsevier and third-party scientific publications (journals, patents and books). In this way, Elsevier is positioning itself as the leading information provider for the scientific and corporate research community.
Assignment possibilities include text mining, information retrieval, language technology, and other topics.
Contact: Mariët Theune
- Interactive Technology for Music Education @ ArtEZ - Enschede, NL
The bachelor Music in Education of ArtEZ Academy of Music in Enschede increasingly profiles itself with a focus on technology in the service of music education. Students and teachers apply digital learning methods for teaching music and experiment with all kinds of digital instruments and music apps. Applying technology in music education goes beyond using such tools, however: interactive music systems have potential in supporting (pre-service) teachers in teaching music in primary education. Still, much research needs to be done.
Current questions include: What is an optimal medium for presenting direct feedback on the quality of rhythmic music making? What should this feedback look like?
HMI students are warmly invited to contribute to this research by creating applications for feedback and visualisation in rhythmic music making in primary education. Design playful, interactive musical instruments that engage children in playing rhythms together, or come up with interactive (augmented) solutions that support teachers in guiding children as they make music.
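As one illustration of what feedback on rhythmic quality could measure: compare a child's tap onsets with the target rhythm and report per-beat timing deviations. A minimal sketch with invented onset times; a real system would obtain onsets from a sensor or a microphone onset detector.

```python
def timing_errors(taps, targets):
    """Pair each target onset with the nearest tap; return errors in milliseconds."""
    return [min(abs(t - tap) for tap in taps) * 1000 for t in targets]

targets = [0.0, 0.5, 1.0, 1.5]   # quarter notes at 120 BPM (seconds)
taps = [0.02, 0.48, 1.07, 1.49]  # invented taps by a child

errs = timing_errors(taps, targets)
print([round(e) for e in errs])          # per-beat deviation: [20, 20, 70, 10]
mean_err = sum(errs) / len(errs)
print(f"mean timing error: {mean_err:.0f} ms")  # mean timing error: 30 ms
```

The research question above is then how to present such numbers to children as direct, playful feedback rather than as raw milliseconds.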
You will work in collaboration with one of the main teachers in the bachelor Music in Education, who is doing his PhD project on this topic.
Contact: Benno Spieker, Dennis Reidsma
- Using (neuro)physiological signals @ TNO - Soesterberg, NL
At TNO Soesterberg (department of Perceptual and Cognitive Systems) we investigate how we can exploit physiological signals such as EEG brain signals, heart rate, skin conductance, pupil size and eye gaze in order to improve (human-machine) performance and evaluation. An example of a currently running project is predicting individual head rotations from EEG in order to reduce delays in streaming images in head mounted displays. Other running projects deal with whether and how different physiological measures reflect food experience. Part of the research is done for international customers.
More examples of projects, as reflected in papers, can be found on Google Scholar.
We welcome students with skills in machine learning and signal processing, and/or who would like to set up experiments and work with human participants and advanced measurement technology.
Contact: Jan van Erp <email@example.com>
- Social VR User Experiences @ TNO - Den Haag, NL
In the TNO MediaLab (The Hague), we create innovative media technologies aimed at providing people with new and rich media experiences, which they can enjoy wherever, whenever and with whomever they want. To enable this, we mainly develop video streaming solutions, working from the capture side to the rendering side and looking at many of the aspects involved: coding, transport, synchronisation, orchestration, digital rights management, service delivery, etc. In many cases, we do this work directly for customers such as broadcasters and content distributors.
As part of this, we currently work on what we call Social VR, or VR conferencing. Virtual Reality applications excel at immersing the user in another reality, but past the wow effect, users may feel the lack of the social interactions that would happen in real life. TNO is exploring ways to bring the experience of sharing moments of life with friends and family into the virtual world. We do this using advanced video-based solutions.
We are actively looking for students to contribute to:
- evaluating, analysing and improving the user experience of the services developed, e.g. working on user embodiment, presence, HCI-aspects, etc.
- the technical development of the platform (i.e. prototyping), e.g. working on spatial audio, 3D video, spatial orchestration, etc.
The focus of an assignment can be on either aspect, or a combination of both.
More info on what we do at TNO MediaLab can be found here: https://tnomedialab.github.io/
Contact: Jan van Erp <firstname.lastname@example.org>
- AR for Movement and Health @ Holomoves - Utrecht, NL
Holomoves is a company in Utrecht that combines Hololens Augmented Reality with expertise in health and physiotherapy, to offer new interventions for rehabilitation and healthy movement in a medical setting. Students can work with them on a variety of assignments including design, user studies, and/or technology development.
More information on the company: https://holomoves.nl/
Contact: Robby van Delden, Dennis Reidsma
- Artificial Intelligence & NLP @ Info Support - Veenendaal, NL
Info Support is a software company that makes high-end custom technology solutions for companies in the financial technology, health, energy, public transport, and agricultural technology sectors. Info Support is located in Veenendaal/Utrecht, NL with research locations in Amsterdam, Den Bosch, and Mechelen, Belgium.
Info Support has extensive experience in supervising graduating students, with assignments that not only have scientific value but also impact Info Support’s clients and their clients’ clients. As a university-level graduating student, you will become part of the Research Center within Info Support: a group of colleagues who, on top of their job as a consultant, have a strong affinity with scientific research. The Research Center facilitates and stimulates scientific research, with the objective of staying ahead in Artificial Intelligence, Software Architecture, and the Software Methodologies that will most likely affect our future.
Various research assignments in Artificial Intelligence, Machine Learning and Natural Language Processing can be carried out at Info Support.
Examples of assignments include:
- finding a way to anonymize streaming data in such a way that it will not affect the utility of AI and Machine Learning models
- improving the usability of Machine Learning model explanations to make them accessible for people without statistical knowledge
- generating new scenarios for software testing, based on requirements written in a natural language and definitions of logical steps within the application
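One frequently explored direction for the anonymization assignment above is differential privacy: adding calibrated noise so that individual values are obscured while aggregate statistics remain usable. A minimal Laplace-mechanism sketch; the epsilon and sensitivity values are illustrative, not a tuned or recommended design.

```python
import random

def anonymize_stream(values, epsilon=0.5, sensitivity=1.0):
    """Yield each value perturbed with Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    for v in values:
        # A Laplace sample is the difference of two i.i.d. exponential samples
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        yield v + noise

random.seed(7)  # reproducible demo
stream = [41, 43, 40, 44, 42]
released = list(anonymize_stream(stream))
print(released)                        # individual values are obscured
print(sum(released) / len(released))   # the mean remains roughly preserved
```

The research challenge in the assignment is exactly the trade-off this sketch glosses over: choosing noise that protects individuals in a stream without degrading the downstream AI/ML models.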
More details are available on request.
Contact: Mariët Theune