If you, as a bachelor's or master's student, are looking for a final thesis assignment, capita selecta, research topic, or internship, you can choose from a large number of internal (at HMI) or external (at a company, research institute, or user organisation) assignments. You can also choose to create one yourself. The search for a suitable location or a suitable topic often starts here: below you find a number of topics you can work on.
Note that in preparation for a final MSc assignment you must first carry out a Research Topics assignment, resulting in a final project proposal.
ASSIGNMENTS AND TOPICS THAT CAN BE CARRIED OUT INTERNALLY AT HMI
- Electronic table tennis table @ HMI - Enschede, NL
BACKGROUND INFORMATION
Table tennis club Blauw-Wit (1948) from Almelo has about 75 members and competes in national and regional competitions. Every year the club visits schools and organises school tournaments to get young people interested in table tennis. This approach is successful and brings in new members. Technical aids are already commonly used in recruitment; a 'serving robot', for example, can raise the reaction speed and playing skills of novice and advanced table tennis enthusiasts to a higher level.
PROBLEM (AMBITION) DESCRIPTION
To expand its recruitment campaigns and attract more members, Blauw-Wit is considering an intelligent, attractive, programmable table tennis table on which multiple functionalities can be offered. The table could, for example, be placed at a primary or secondary school for several weeks. Pupils could then:
- play competitions,
- practise skills (such as serving into different zones of the table),
- keep track of their personal score history and compare it with friends,
- train their playing speed and reaction time,
- play a variety of games, individually, in pairs, or in teams (for example, 'around the table').
ASSIGNMENT DESCRIPTION
The table must be easy to transport, set up, and connect. By using sensors and cameras, solid data processing, compatibility with phones, and Artificial Intelligence, a variety of game and match formats can be offered. In this way the interactive table can strengthen the outreach at schools: pupils get the chance to engage intensively with table tennis over a longer period, with the table supporting the recruitment lessons.
Recommended programme
Human Media Interaction or Creative Technology. This assignment will primarily be carried out at the Human Media Interaction group of the University of Twente.
Are you enrolled in a different programme but think you are up to the challenge? Don't hesitate to contact us!
Start date
As soon as possible
Are you interested in the assignment, or do you have a question? Contact Egbert or Dennis!
SMART coordinator
Egbert van Hattem
e.vanhattem@novelt.com
+31 (0)6 39 10 81 17
Universiteit Twente
Dennis Reidsma
d.reidsma@utwente.nl
Human Media Interaction / Creative Technology
- Mitigating Unpredictable Robot Actions for Fluent Human-Robot Interaction @ HMI - Enschede, NL
Description
Predictability in human-robot interaction is considered essential: it helps humans understand the robot and coordinate their actions with it, and it improves task performance, safety, and trust in the robot. Moreover, it influences our social perception of the robot. When the interaction pattern is designed beforehand and is fixed, we can design it to be highly predictable, for instance by using a few types of robot actions without variations. However, when the interaction pattern is not predetermined, unpredictable robot actions are more likely to happen, for instance when robot motion trajectories are generated through kinematics. When people collaborating with the robot are unable to predict its behaviour, they might no longer trust the robot or safely coordinate their actions with it.

Robot actions that the user did not predict are unavoidable to a certain extent. However, there are actions the robot could take to avoid the negative effects of an unpredicted action, for instance an anticipatory action or communicating its intent beforehand. The goal of this project is to investigate how a robot could mitigate its unpredictable actions, implement such strategies, and assess them in a human-robot interaction experiment.
Tasks
The student will carry out the following:
- Review literature on robot predictability and robot behaviour design, and cognitive science on the generation of predictions in the human brain.
- Come up with several strategies that could mitigate the negative effects of an unpredicted robot action (one such strategy is sketched after this list).
- Implement these strategies in a robot and design a human-robot collaborative interaction.
- Conduct a user experiment using this setup.
- Analyse the data and write up the results in a master’s thesis
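As a concrete illustration of one such strategy, communicating prior intent, consider the minimal sketch below. All names here (Action, UserModel, the announce/perform effectors, and the 0.3 threshold) are hypothetical placeholders for illustration, not an existing robot API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str

class UserModel:
    """Toy stand-in: estimates how expected an action is from its history."""
    def __init__(self):
        self.counts = {}
        self.total = 0

    def predictability(self, action):
        # Fraction of past actions identical to this one (0 = never seen before)
        return self.counts.get(action.description, 0) / max(self.total, 1)

    def observe(self, action):
        self.counts[action.description] = self.counts.get(action.description, 0) + 1
        self.total += 1

def execute(action, user_model, announce, perform, threshold=0.3):
    # Mitigation strategy: verbally announce intent before a low-predictability action
    if user_model.predictability(action) < threshold:
        announce(f"I will {action.description} now")
    perform(action)
    user_model.observe(action)

# Usage with stub effectors: the first, unexpected occurrence triggers an announcement.
model = UserModel()
execute(Action("hand you the red block"), model, announce=print, perform=lambda a: None)
```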
Supervisors: Bob Schadenberg and Dennis Reidsma
Contact: Bob Schadenberg, b.r.schadenberg@utwente.nl
- Voice, Face, and Theater performance @ HMI - Enschede, NL
Voice, Face, and Theater performance
in connection to Jonathan Reus, Artist in Residence 2023
Contact: info@JonathanReus.com, d.reidsma@utwente.nl
How can a vocal performer, such as an opera singer or a spoken word artist, move fluidly from their biological voice to an artificial one? How can performers inhabit multiple vocal identities simultaneously, or perform entirely without a stable persona? How can wearable robotics, such as light-weight robotic masks, be used as part of theatrical costumes to further distort the identity of the performer? And how can all of these approaches lead to a sense of uncanny perception and sensory delight in the audience? I will be artist in residence at the UT throughout 2023 exploring these questions, with the goal of creating new live performances. We will be building expressive and performative artificial voice models, real-time voice manipulation systems, and wearable performance technologies such as sensor-skins for controlling voice, or robotic masks. In addition, we will consider how to create voice datasets for artistic research into expressive, performative voice AI, and hopefully release an open vocal dataset ourselves as part of the residency.
All of the project topics below are within my expertise and related to the work I will be doing throughout 2023, and I would be very happy to work with students who are interested in researching these topics as part of an MSc or BSc project. All the topics are very broad and can be approached from whatever background the student has. For example, a student with expertise in computer science might focus on developing an expressive and controllable artificial voice model, a student with knowledge of design methods might focus on how to design a wearable voice control interface for a singer, while another student might focus on studying the performer's perception of embodiment when using an artificial voice.
You can find a bit more information about Jonathan online at jchai.me
Detailed Project Topics
Below are some potential projects for MSc and BSc students which could be guided with the help of Jonathan and contribute to his artist residency at UT through 2023.
Expressive, Controllable AI Voice for the Arts
Most work on voice synthesis focuses on intelligible speech, with "expressivity" mostly meaning the ability to speak in different emotions. However, "emotion" is not enough for artistic use: the human voice can do so much more. This project would explore how to develop new, or modify existing, machine-learning text-to-speech models so they can reproduce a wider range of human vocal expression beyond simple emotional categories, for example by focusing on control over paralinguistic parameters. This topic would be ideal for students with a strong background in machine learning and computer science who might want to develop models, who want to explore new ways of interacting with existing voice models, or who want to do research into the nature of non-verbal voice.
Open Datasets for Artistic Voice Research
This project is "the other side of the coin" to the one above: addressing how to create open voice datasets useful for training and testing artistic voice models. While there are a few publicly available speech datasets out there, they are oriented to tasks such as speech synthesis and recognition, while in artistic practice the human voice can do much more than simply speak. In fact, we don't yet have a good idea what would need to be in a voice dataset for artistic use! A student taking on this topic could try to answer this question - by creating a new dataset** that fits their hypothesis of what notations and vocalisations an expressive voice dataset should include. A student with a computer science background might also want to explore ways of repurposing existing voice datasets, for example by extracting artistically useful features such as pitch, articulation and energy.Designing Wearable Interfaces for Voice Performers
Designing Wearable Interfaces for Voice Performers
Vocal performers are usually in motion: gesturing with their hands while telling a story, moving fluidly across a stage, or standing in an ensemble. Standard electronic music controllers are usually devices that tie you to a table, and simply do not fit the mobility and nuance needed by a vocalist. This project will investigate the creation of wearable, lightweight and natural-feeling interfaces for electronic voice performance, exploring ways in which sensory technologies can become an intimate part of the vocalist's performative expression. Techniques that could be explored include wearable "sensor skins" that respond to nuances of finger movement and touch, or wearable "sensor masks" that allow the vocalist to control a synthetic voice by touching or manipulating the face.
Embodied Ownership of AI Voices
From vocal deepfakes to real-time voice skins, artificial voice synthesis is at an uncanny state of believability, opening up a unique frontier for artistic works exploring vocal avatars, personas and embodied ownership of other voices. This research project would explore the perception of such artificial voice phenomena from the perspective of the audience and/or performer, asking questions such as: What makes an artificial voice feel like it "belongs" to "me"? Or, when watching a performer working with an artificial voice, for example through lip syncing or computer voice transformation, what makes it seem like that voice "belongs" to that performer? These are psychological, cognitive and perceptual questions that can be addressed in many ways. The student may also wish to investigate more emotional or poetic aspects of artificial voice ownership, such as imagining how the change in identity that comes with embodying a new voice persona can empower individuals.
Watermarks for Accountable Voice Data
It has become the norm that spectacular, high-profile AI projects depend on data obtained under questionable circumstances from individuals: for example, through large-scale internet scraping, or repurposing the personal data of users. The attitude of these initiatives seems to echo a certain Silicon Valley ethic of "move fast and break things", or "ask forgiveness rather than permission". This research project addresses the lack of accountability and ethical data use in voice data from the ground up: by re-inventing the file formats used to encode voice data. The student will investigate ways of encoding the myriad wishes of voice data owners as an irremovable watermark in the data itself, using techniques like, for example, direct data-to-audio encoding and embedding schemes. The goal would be to create ways of embedding metadata describing fair use and user wishes into voice recordings: wishes that are extremely difficult, or impossible, to remove from the audio itself.
**Note: For all voice-related topics the student has the opportunity to collaborate with the conductor of the student choir Musilon, voice actors within the student theatrical community, and vocalists who are part of the artist residency in order to do their research or create their datasets. Part of creating datasets will also involve research into the steps that must be taken in order to ethically open-source a dataset containing the voice recordings of individuals.
Designing Wearable Robotics that Challenge Fixed Identities
Masks have been used across human cultures for thousands of years as a way to inhabit (or be inhabited by) the identity of another. However, what if a mask was made to assume not a single identity, but multiple ones? This research project investigates the design of wearable masks that embody such an "unstable" identity. The masks themselves could be created in a number of ways: for example, through light-weight mechatronics, tensegrity-inspired mechanical mechanisms, soft robotics, or as a form of costume design. The student may also want to focus on more perceptual research into the psychology of how humans perceive faces, and what rearrangements of facial structures lead to strange and uncanny experiences in an audience.
Designing Real-time Facial Avatars / Facial Puppets
This research project investigates the development of real-time facial avatar technologies for use in the performing arts. The main question is how to remotely control, or "puppeteer", mask-like creations such as those described in the previous topic. The performing arts have specific, unique requirements for facial control: tracking interfaces should be stable, lightweight, portable and even wearable. It may also be that the student is more interested in controlling virtual facial avatars (which could, for example, be projected during performances or appear in VR). Even if displayed on a screen, the projected avatars must be responsive and expressive, and interoperate with the different software tools used by creative practitioners in the digital arts.
- A rich research environment for multi-person rowing in VR @ HMI - Enschede, NL
Interaction Technology has great potential for sports training and performance. For example, the Wavelight technology supports runners during races; VR can be used to train in rowing or in soccer; amateur runners often make use of smart watches and sports trackers; and systems such as the FitLight are used for reaction training in various sports. In this project we work with the combination of Virtual Reality, rowing on ergometers, and various sensors to measure the rower's actions and performance.
Context
Recent studies in sports HCI have illustrated that athletes and coaches use (and are open to further use of) virtual reality (VR) in training. The advantages of using VR in sports training can be immense, especially in skill development and coaching, as it can simulate real-life environments while being perfectly adaptable and systematically configurable. The proposed assignment is part of the "Rowing Reimagined" project, jointly carried out by the UT and the VU, in which a research platform is developed for multi-person rowing in VR using ergometers. On the one hand, this platform aims to offer a diversity of VR environments, tasks, and feedback for novel forms of training. On the other hand, the versatile setup can be used for systematic fundamental research into the conditions and determinants of performance in rowing.
For example, by introducing an opponent boat into the virtual reality and systematically varying the parameters of its "overtaking behavior", we can fundamentally research the effects of stressors on the rowing performance of the team, or we can use the same opponent boat models to offer novel settings in which to train athletes to cope with such stressors. Countless other variables could similarly be explored -- a more complete list can be found at the end of this assignment proposal.
=This assignment=
The proposed assignment focuses on realising the richest possible research environment for multi-person rowing in VR, by exploring, developing, and pilot testing multiple features in the platform (such as the above-mentioned opponent boat) that can contribute to novel training technology or to fundamental research. This requires an iterative approach. For example, it is not enough to simply state that "an opponent boat model must be added to enable research into the role of stressors in rowing performance". What exactly should be the speed of overtaking? What gives the most realistically stressing effect on the athlete? At what distance from the athlete's boat does the stressor effect take place? Is this individually determined? Do we need a calibration phase to personalise the overtaking behaviour of the opponent boat to the athlete currently using the system?
Such questions must be explored on the basis of literature and expert input, and a specific design of the platform feature should be developed iteratively to see how it works out in practice. In the assignment, multiple platform features will be addressed one by one, making each one mature enough to contribute to future science. After a brief pilot study to show the potential of the new feature, a next feature will be taken up and worked out, based on a grounded idea of which feature will be useful for relevant research with the platform.
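To make the kind of parameterisation discussed above concrete, here is a minimal sketch of how an opponent boat's overtaking behaviour might be captured as a configurable feature. Field names, units, and defaults are illustrative assumptions, not decisions taken in the project.

```python
from dataclasses import dataclass

@dataclass
class OvertakingBehaviour:
    """Illustrative parameter set for a virtual opponent boat."""
    base_speed_ms: float = 5.0        # cruising speed of the opponent boat (m/s)
    trigger_distance_m: float = 15.0  # gap to the athlete's boat at which overtaking starts
    overtake_delta_ms: float = 0.5    # speed surplus while overtaking (m/s)
    lateral_offset_m: float = 2.0     # lane offset while passing (m)
    hold_alongside_s: float = 8.0     # how long the boat stays level with the athlete (s)

    def speed_at(self, gap_m: float) -> float:
        """Speed command given the current gap to the athlete's boat."""
        surplus = self.overtake_delta_ms if gap_m < self.trigger_distance_m else 0.0
        return self.base_speed_ms + surplus

# A calibration phase could fit these fields per athlete, e.g. scaling
# base_speed_ms to the pace measured during a warm-up piece.
behaviour = OvertakingBehaviour(trigger_distance_m=20.0)
print(behaviour.speed_at(gap_m=12.0))  # -> 5.5: the boat accelerates to pass
```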
=The current platform=
The assignment starts from our existing platform, which is still being extended. The current platform consists of a technological setup with two ergometers, a social VR setup in which two rowers can be virtually present in a single boat, a few initial measurement components to gather information about the power and effort of the rowers, and some initial virtual elements in the environment. Each of these components may be enhanced and extended as part of the assignment, or completely new components can be added.
Inventory of some facets that may be chosen as part of this assignment
- The "experience of realism" of the resulting rowing activity. Obviously, rowing on an ergometer in VR is not really the same as rowing in a boat on water. This "unreality" might have impact on the athletes’ transition of the improved skills from virtual to reality. What features would intuitively be considered important for the experience of realism? What do rowers feel is important for the experience of realism? When is a certain feature considered minimally adequate regarding realism? This may consider environmental factors, the movement of the boat, the representation of the other athlete in the setup, the sound, etcetera etcetera.
- Rich opponent behaviours that are realistic and meaningful, and that potentially impact the objective or subjective rowing activity.
- Measuring the sense of stress and tension during the rowing activity through a combination of objective (biophysiological?) sensors and in-action momentary self assessment.
- Measuring power, effort, and exertion (objectively or subjectively; posthoc or in-action).
- Measuring and influencing sociality and perceived togetherness in rowing. In the future we would like to experiment with synchrony and the perception of togetherness in rowing. But how can we measure objective and subjective togetherness, e.g. in the form of social unity or flow, or objective measures of synchronisation? What features may contribute to, or detract from, these measures?
- Multiple feedback mechanisms about the joint rowing and their possible parameters. Can be in many modalities: sound, haptic, etcetera.
- And many more.
Contact:
Dees Postma (d.b.w.postma@utwente.nl)
Dennis Reidsma (d.reidsma@utwente.nl)
Armağan Karahanoğlu (a.karahanoglu@utwente.nl)
Robby van Delden (r.w.vandelden@utwente.nl)
- Beep beep boop: Semantic-Free Utterances for Social Agents @ HMI - Enschede, NL
In the field of human-robot interaction, the development of semantic-free utterances (SFU) has been gathering attention. R2D2 from Star Wars is a good example of an agent that uses SFU to communicate; another example is how the Sims communicate in the video game of the same name. Some advantages of using SFU over natural language are: expectations of the system's intelligence decrease to a more realistic level, SFU could be more widely understood (not bound to any language), and the user becomes the intelligent other in the interaction (less weight on the robot's ability to process information). There are several topics that we can address in this area, but some research directions that we are interested in are:
- The creation of SFU along with performers (improv, opera, theatre groups, DJs in UT). How does it compare to existing ones? How would the development of non-semantic speech change depending on who is designing it? How would an opera singer vocalize the emotions they need to convey? How would it be different from how an improv actor does it?
- While the usefulness of non-semantic speech in multicultural spaces is often mentioned as an advantage, few studies have tested this with participants of different cultural backgrounds. How does culture play a part in how well we understand SFU? Are there any differences in how people of different cultural backgrounds design SFU? Does exposure to Western culture influence how well we understand SFU (created in the Global North)?
- The design of gender-neutral robots has also gained traction, but gender-neutral voices that are actually perceived as such are difficult to design. Are SFU more easily perceived as gender-neutral in comparison with natural language? Which type of SFU is best suited to convey gender neutrality? Does the addition of a robot embodiment change this perception?
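To make the idea tangible, here is a minimal sketch of how R2D2-like semantic-free utterances could be prototyped: a short sequence of sine chirps with random pitch contours, written to a WAV file. All parameter ranges are arbitrary assumptions to play with, not findings.

```python
import wave
import numpy as np

def sfu(n_beeps=5, sr=22050, rng=np.random.default_rng(0)):
    """Concatenate short sine 'chirps' with random rising/falling pitch contours."""
    parts = []
    for _ in range(n_beeps):
        dur = rng.uniform(0.08, 0.25)              # beep length (s)
        f0, f1 = rng.uniform(400, 2000, size=2)    # start/end pitch (Hz)
        n = int(sr * dur)
        freq = np.linspace(f0, f1, n)              # linear pitch glide
        beep = np.sin(2 * np.pi * np.cumsum(freq) / sr)
        parts.append(beep * np.hanning(n))         # smooth on/offset
        parts.append(np.zeros(int(sr * rng.uniform(0.02, 0.1))))  # short pause
    return np.concatenate(parts)

samples = (sfu() * 32767).astype(np.int16)
with wave.open("sfu.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(22050)
    f.writeframes(samples.tobytes())
```

Varying contour direction, rhythm, and pause structure is exactly the kind of design space the research directions above would explore with performers and participants.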
If you are interested in one of these (or similar) topics feel free to contact Hideki Garcia Goo (h.garciagoo@utwente.nl) or Khiet Truong (k.p.truong@utwente.nl).
- Accessing large-scale knowledge graphs via conversations in virtual environments @ HMI - Enschede, NL
In the past decades, more and more Cultural Heritage institutions, such as libraries, museums, galleries and archives, have launched large-scale digitisation processes that result in massive digital collections. This not only ensures the long-term preservation of cultural artefacts in their digital form, but also allows instant online access to resources that would otherwise require physical presence, and fosters the development of applications like virtual exhibitions and online museums. By embracing Linked Open Data (LOD) principles and Knowledge Graph (KG) technologies, rich legacy knowledge in these CH collections has been transformed into a form that is shareable, extensible and easily re-usable. Relevant entities (e.g. people, places, events), their attributes and their relationships are formally represented using international standards, resulting in knowledge graphs that both humans and machines are able to understand and reason about.
However, exploring large-scale knowledge graphs is not trivial. Traditional keyword-based search is not ideal for exploring such graph-structured data. A museum visitor may start their exploration from a certain creative work and move to the different types of entities it is associated with, such as its creator, the place where it was created, or relevant events that happened at the time it was created. The visitor can follow the links in the knowledge graph to discover what is interesting to them. Thanks to the LOD principles, external knowledge graphs may also be accessed along the way. What is challenging is how to make this huge amount of information accessible to visitors in an appealing and intuitive manner, so that the interaction between visitors and the knowledge graphs becomes meaningful and enjoyable. Such interaction needs to take into account the visitors' cognitive and information-processing capabilities as well as their personal interests and cultural backgrounds.
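As a flavour of the link-following described above, the sketch below asks a public SPARQL endpoint for works by a given creator and where they are held. Wikidata is used here purely as a stand-in; a museum's own LOD endpoint would be queried the same way.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?work ?workLabel ?placeLabel WHERE {
  ?work wdt:P170 wd:Q5598 .              # creator = Rembrandt
  OPTIONAL { ?work wdt:P276 ?place . }   # current location, if recorded
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    place = row.get("placeLabel", {}).get("value", "unknown location")
    print(row["workLabel"]["value"], "-", place)
```

A conversational agent would generate such queries dynamically from the dialogue state instead of hard-coding them, which is exactly the challenge in the directions listed below.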
At HMI, we are investigating how to access large-scale KGs via natural language conversations in virtual environments, and we welcome students to work on different aspects of this research:
- Developing a KG-based conversational museum guide that models user's interests, introduces art objects, answers questions, provides recommendations, etc.
- User interest modelling
- Conversational KG-based question answering and recommendations
- Natural language generation with dynamic subgraph extraction
- Mixed-initiative dialogue management with KGs
- Integrating conversational agent in a virtual reality environment
- Multi-modal input for user interest detection, including the user's utterances, eye gaze, speech emotion, gestures, etc.
- Multi-modal responses in virtual reality, including text, voice, highlights, etc.
- Effective and ethical design in collaboration with Humanities researchers and/or Cultural Heritage institutions.
- Effective or inappropriate: using visitors' (cultural?) background / profile to generate personalised narratives
- Methods for visitor-agent interaction that allow for collaborative creation of narratives about cultural history → this could include research on community-driven artifact labelling / label correction, object selection, etc.
- Research on increasing the affective impact of interactive virtual environments (for cultural heritage), with the goal of increasing visitors' a) knowledge, b) sense of responsibility, c) sustained interest, or d) other positive outcomes on the topic of inclusivity
- Increasing the interest and participation of groups of people that may not typically feel attracted to or included in museum exhibitions, e.g. children
Contact: Shenghui Wang (shenghui.wang@utwente.nl)
- Can I touch you online? - Investigating the interactive touch experience of the “Touch my Touch” art installation @ HMI - Enschede, NL
We touch the screen of our cell phone more often than we touch our friends. We stroke and 'swipe' our screen in search of a loved one. Meanwhile, in public spaces, we touch each other, and watch each other being touched, less and less. Pandemic regulations have only further increased this physical isolation. "Touch my touch", or TouchMyTouch.net, designed by artist duo Lancel and Maat, is a critical new composition of face recognition, merging and streaming technologies for a poetic encounter through touching and being touched. TouchMyTouch.net is a streaming platform for online touch, together with an interactive installation built around that platform. The interactive installation will be at the UT for a number of weeks during the second semester of 2022. You can try the online platform with a partner here: https://upprojects.com/projects/touch-my-touch
This master assignment relates to the physical "Touch my touch" installation. You will define a research question in relation to the touch experience evoked by the art installation. For more information, send an email to Judith Weda (j.weda@utwente.nl).
- Coaching for breathing with haptic stimulation @ HMI - Enschede, NL
Breathing contributes fundamentally to well-being, both physiologically and psychologically. Accordingly, a number of well-being practices build on breathing techniques, such as yoga, Tai Chi, meditation, the Wim Hof method, and many more. There are also a number of technological products available that offer support for breathing, such as the Spire or smartwatch apps.
With this master project we want to explore the possibilities of supporting breathing with haptic stimulation and feedback. Stimulation can be used to teach breathing patterns; feedback can signal whether breathing is or is not in the intended range. For this work we will focus on vibration motors for haptic stimulation. Relevant questions to answer here are: where should vibration motors be positioned, and what stimulation and feedback patterns are comfortable and effective?
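One possible starting point for the stimulation side is sketched below: a vibration envelope that rises to cue the inhale and falls to cue the exhale, at a configurable target rate. The motor binding is a hypothetical placeholder, and the rate and timing values are assumptions to experiment with.

```python
import time

def set_motor_intensity(level):
    """Placeholder: map level in [0, 1] onto the motor driver (e.g. a PWM duty cycle)."""
    pass

def pace_breathing(rate_bpm=6.0, inhale_frac=0.4, duration_s=60.0):
    """Run a vibration envelope that paces breathing at rate_bpm breaths per minute."""
    period = 60.0 / rate_bpm
    t0 = time.time()
    while time.time() - t0 < duration_s:
        phase = ((time.time() - t0) % period) / period
        if phase < inhale_frac:
            level = phase / inhale_frac                                # ramp up: inhale cue
        else:
            level = 1.0 - (phase - inhale_frac) / (1.0 - inhale_frac)  # ramp down: exhale cue
        set_motor_intensity(level)
        time.sleep(0.02)  # ~50 Hz update rate

pace_breathing(rate_bpm=6.0, duration_s=10.0)
```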
Contact: Angelika Mader, a.h.mader@utwente.nl
Supervisors: Angelika Mader, Geke Ludden
- MiniSoccerbal 2.0 @ HMI - Enschede, NL
Supervisors: Robbert van Middelaar & Aswin Balasubramaniam, Jasper Reenalda, Dennis Reidsma
Student assignments: Bachelor's / master's
Educational programme: (BMT/BME), EE, CreaTe, iTech, CS
Project summary
This project focuses on extending and improving the already existing MiniSoccerbal (www.MiniSoccerbal.com). The MiniSoccerbal is a small ball (size 2) that is connected to a cord. With this cord, a player can control the ball in the space immediately around them; the end of the cord is always connected to the player, so the ball never leaves the area around the player. The MiniSoccerbal is already used by many youngsters (7-13 y/o) at (professional) football clubs, who can choose from over 100 training exercises with it. Currently, relevant parameters are not measured, so performance cannot be analysed. To extend beyond its current capability, the MiniSoccerbal needs to be improved: it should measure useful data, allow users to view the data, and let them interact with the data and with other people through an interactive application.
Project overview
A first project focused on instrumenting the ball with a sensor to count the number of touches and the velocity of the ball. This system worked quite well; however, we want to take it to the next level, using deep learning models and your own phone camera. One possibility is to use a camera to track objects: think of security cameras tracking people, but methods also exist to track, for example, a football on a pitch.
This project will focus on deep learning models to track the ball with your own phone camera. In this way, we can objectify the movements of the MiniSoccerbal, such as the number of touches, the velocity of the ball, or an evaluation of whether an exercise is performed correctly, without any sensors attached to the player or the ball.
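A minimal sketch of this idea, assuming the ultralytics package and a pretrained COCO detector ("sports ball" is COCO class 32); the video file name and confidence threshold are illustrative placeholders.

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # small pretrained COCO detector
cap = cv2.VideoCapture("training_clip.mp4")  # hypothetical phone recording

trajectory = []                              # (frame index, centre x, centre y)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box, cls, conf in zip(result.boxes.xyxy, result.boxes.cls, result.boxes.conf):
        if int(cls) == 32 and float(conf) > 0.4:  # COCO class 32 = "sports ball"
            x1, y1, x2, y2 = map(float, box)
            trajectory.append((frame_idx, (x1 + x2) / 2, (y1 + y2) / 2))
    frame_idx += 1
cap.release()

# Pixel velocity can be estimated from consecutive centres; touches show up
# as abrupt reversals in the trajectory.
```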
This project will be a collaboration between Twentsche Voetbal School/Minisoccerbal, Biomedical Signals and Systems (BSS) and Human Media Interaction (HMI), and is part of the Sports, Data & Interaction project. The methods will also be used in other projects to quantify other sports movements.
Experience with deep learning models and programming is required.
Contact: r.p.vanmiddelaar@utwente.nl and a.balasubramaniam@utwente.nl
More information can be found on: http://sports-data-interaction.com/research
- Mediated social touch through a physical avatar and a wearable @ HMI - Enschede, NL
The aim of this master project is to develop a fully functioning social haptic device, divided into two main components: a haptic coating for a humanoid robot serving as an avatar, and a wearable haptic vest/suit. The project's workload is divided into four parts: (1) the design and development of a social touch coat for the robot body, (2) the selection or development of haptic sensors to be placed on the avatar's upper body, (3) designing and developing a pneumatic haptic vest that can be used for both perception experiments and mediated touch, and (4) testing and experimenting with a TactSuit haptic vest (from bHaptics [vibration]). You will find more information on each part below. We intend to have one student working on each part. We support and encourage collaboration between students within the project.
Xprize
The ANA Avatar Xprize is an international competition in which different teams design and develop their own fully functioning avatar and test it in different scenarios, from maintenance tasks to human-human remote interactions. HMI participates in team i-Botics, which qualified for the semifinals in September 2021. During this competition, we develop our avatar's ability to allow advanced social remote interactions between the remote controller of the avatar and the recipient at the avatar location. To that end, this master project focuses on one important aspect of social interaction: touch.
You can find more information on https://avatar.xprize.org/prizes/avatar
WEAFING
The WEAFING project is an EU Horizon 2020 project that aims to develop a wearable for social touch made out of electroactive textile. Electroactive textile is a textile woven or knitted with electroactive yarn; this yarn contracts or expands when an electrical current is applied. Depending on the morphology of the textile, we can imagine different types of haptic sensations on the skin. The current interest is the pressure sensation that the garment could generate. At the UT we do perception research, which is key in order to define the specifications of the electroactive textile. Since the textile is still in development, we use substitute materials to explore and find the perception parameters of pressure applied to different parts of the body through psychophysical studies.
You can find more information on weafing.eu
Part 1 : Designing a social touch skin for a humanoid robot avatar
This part of the project concerns the design, production and testing of a humanoid robot avatar's "skin", to be used during social touch interaction between the avatar (piloted by a controller) and a recipient who is actually touching the robot. For this part, we expect the student to carry out a study of different materials and sensors that could be used for the coating, to design the required product, and to test it with a physical robot.
There are no strict limitations on the selection of the material; however, some measurement criteria will be clearly specified during the project. We will offer help and collaboration in the search for an adequate material. For the sensors, the selected method should cover the whole upper body of the robot and should be as unintrusive as possible. There may be multiple ways to approach this task. We expect the student to find a viable and efficient solution given the constraints that will be provided during the project, such as the weight, shape or size of the sensors. Some resources are also available at the HMI department, such as the CapSense vest, as a starting point for the investigation.
We are looking for master students in interaction technology and/or embedded systems. Help will be provided with sewing and designing the "skin". We will consider experience with sewing, haptic interaction and sensor data analysis a plus.
For more information, please contact Camille Sallaberry, c.sallaberry@utwente.nl
Part 2 : Developing a pneumatic haptic vest for human-human remote touch interaction and psychophysical experiments
For this graduation assignment we are looking for a student to create a haptic (touch) vest with pneumatic actuators. The goal is to use the vest for both psychophysical experiments and mediated touch. The assignment would be a collaboration between two projects namely the WEAFING (weafing.eu) project and the UT entry for the X-prize.
In the weafing project we are developing a textile wearable that can give haptic feedback. In order to do this we need to do perception experiments for pressure on the skin using psychophysical methods to find the parameters of touch perception. A pneumatic vest can help us with these experiments.
Following these experiments we can use the vest for mediated touch applications. This includes mediated social touch and other mediated touch. Touch can for example be mediated through an avatar representing you at an alternative location, the use case of the X-prize project.
There are multiple ways to approach the assignment and multiple actuator options, such as McKibben muscles or silicone pockets. For the psychophysical experiments it is key to measure the pressure in the actuator and to control it precisely. The vest should fit both men and women and a range of body types.
We are looking for master students in interaction technology or embedded systems. We will offer help with sewing and the vest design, but it is key that you have affinity with making. Experience with sewing is a plus.
There is a body of work to build on, namely a project on pneumatic actuator control, a sleeve with McKibben muscles, and silicone actuators.
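Since precise pressure control is called out as key above, here is a minimal sketch of a closed-loop (PI) controller holding one actuator at a target pressure. The read_pressure_kpa and set_valve callables are hypothetical hardware bindings (e.g. to a DAQ or a microcontroller bridge), and the gains are placeholder values to be tuned.

```python
import time

def hold_pressure(target_kpa, read_pressure_kpa, set_valve,
                  kp=0.8, ki=0.2, dt=0.01, duration_s=5.0):
    """Simple PI loop: keep actuator pressure near target_kpa for duration_s seconds.

    read_pressure_kpa: () -> float, current pressure in kPa (hypothetical sensor binding)
    set_valve: (float) -> None, valve opening command in [0, 1] (hypothetical actuator binding)
    """
    integral = 0.0
    for _ in range(int(duration_s / dt)):
        error = target_kpa - read_pressure_kpa()
        integral += error * dt
        command = kp * error + ki * integral
        set_valve(max(0.0, min(1.0, command)))  # clamp to the valve's physical range
        time.sleep(dt)
```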
For more information please contact Judith Weda, j.weda@utwente.nl
Part 3 : Vibratory Suit for human-human remote touch interaction
For this part of the project, we are looking for a student who will investigate the use and flexibility of vibration motors to reproduce different kinds of social contact/touch during social communication, in the context of an interaction between a robot avatar and a human. The student will also be expected to evaluate existing vibratory haptic suits, or to develop one, for the remote social touch experience.
The experience requires that the whole upper body of the avatar can be touched, with the exception of the hands. We therefore expect the haptic suit to consist of a top with long sleeves. Some measurement criteria will also be clearly specified during the project.
As a starting point, the student may begin by evaluating the TactSuit from bHaptics. The aim should be to test the haptic vest/suit for social touch and determine its usability compared to other possible suits.
During the project, we also encourage the student to collaborate closely with the student on the "pneumatic vest" project, as both students may have to evaluate the social touch experience of both products.
We are looking for master students in interaction technology or embedded systems. We will consider experience with vibratory actuators and experience in social haptics as a plus.
For more information, please contact Camille Sallaberry, c.sallaberry@utwente.nl
- Spoken Interaction with Conversational Agents and Robots @ HMI - Enschede, NL
Speech technology for conversational agents and robots has taken flight (e.g., Siri, Alexa), but we are not quite there yet. While there are technical challenges to address (e.g., how can an agent display listening behavior such as backchannels ("uh-uhm"), how can we recognize a user's stance/attitude/intent, how can we express intent without using words, how can an agent build rapport with a user), there are also more human-centered questions, such as: how should such a spoken conversational interaction be designed, how do people actually talk to an agent or robot, and what effect does a certain agent/robot behavior (e.g., robot voice, appearance) have on a child's perception and behavior in a specific context?
These are some examples of research questions we are interested in. Much more is possible. Are you also interested? Khiet Truong and Ella Velner can tell you more.
Contact: Khiet Truong k.p.truong@utwente.nl
- Automatic Laughter analysis in Human-Computer Interaction @ HMI - Enschede, NL
Laughter analysis is currently a hot topic in Human-Computer Interaction. Computer scientists generally study how humans communicate through laughter and how this can be implemented in automatic laughter detection and automatic laughter synthesis. Such tools would be very helpful in fields like Human-Computer/Robot Interaction, where voice assistants like Alexa and Google Assistant might understand more complex natural communication by interpreting social signals such as laughter, and might generate well-timed, appropriate, realistic laughter responses. Another application is multimedia laughter retrieval: automatically extracting laughter occurrences from large amounts of video and audio data, opening the way to large laughter datasets. As a final example, laughter detection could also be used to study group behavior or for automatic person identification.
However, there are several challenges in laughter research that need to be considered when aiming for automatic laughter analysis. For one, annotating laughter is a much-discussed challenge in the field: there are debates on how laughter should be segmented and labeled. Are there different kinds of laughs for different situations, and how do we label them? Do people have specific laughter profiles? What role does context play in laughter detection? What could a real-time implementation in a conversational agent look like, and for what purpose? Students can choose a more human-centered or a more technology-oriented direction.
This makes achieving automatic laughter analysis an interesting goal. Students are invited to explore the topic of automatic laughter analysis and come up with an interesting question or challenge they want to address. You will be supervised by assistant professor Khiet P. Truong, an expert in laughter research and social signal processing (SSP), and PhD student Michel-Pierre Jansen, whose PhD work revolves around human laughter recognition and SSP.
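For a feeling of what a first technical step could look like, here is a minimal baseline sketch: classifying short, pre-segmented audio clips as laughter or speech from MFCC statistics. File names and labels are placeholders for an annotated dataset you would still have to obtain or create.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    """Summarise one clip by the mean and std of its MFCCs over time."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder annotations: 1 = laughter, 0 = speech
clips = [("laugh_001.wav", 1), ("laugh_002.wav", 1),
         ("speech_001.wav", 0), ("speech_002.wav", 0)]

X = np.array([clip_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# clf.predict(clip_features("new_clip.wav")[None, :]) labels an unseen segment
```

Note that this sidesteps the segmentation and labelling debates described above, which is exactly where the research questions begin.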
Contact: Khiet Truong k.p.truong@utwente.nl
- Analysis of depression in speech @ HMI - Enschede, NL in cooperation with GGNet Apeldoorn, NL
The NESDO study (Nederlandse Studie naar Depressie bij Ouderen) was a large longitudinal Dutch study into depression in older adults (>60 years old). Older adults with and without depression were followed over a period of six years. Measurements included questionnaires, a medical examination and cognitive tests, and information was gathered about mental health outcomes and demographic, psychosocial and cognitive determinants. Some of these measurements were taken in face-to-face assessments. After the baseline measurement, face-to-face assessments were held after 2 and 6 years.
Currently, we have a number of audio recordings available from the 6-year measurement of depressed and non-depressed older persons. We are looking for a student (preferably with knowledge of Dutch) who is interested in performing speech analyses on these recordings, with the eventual goal of detecting depression in speech automatically.
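A common first step in this line of work (see e.g. the Cummins et al. reviews below) is extracting a standard acoustic feature set per recording. The sketch below does this with the opensmile Python package and the eGeMAPS feature set; the file name is a placeholder, and the choice of feature set and classifier are assumptions, not the project's prescribed method.

```python
import opensmile

# eGeMAPS functionals: 88 acoustic descriptors summarising one recording
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("interview_recording.wav")  # hypothetical file name
print(features.shape)  # (1, 88): one row of functionals for this recording

# Stacking one row per speaker gives a feature matrix on which a simple
# classifier (e.g. sklearn's LogisticRegression) can be trained against
# depressed / non-depressed labels.
```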
This work will be carried out in collaboration with Dr. Paul Naarding, GGNet Apeldoorn.
Contact: Khiet Truong k.p.truong@utwente.nl
Reading material:
- https://nesdo.onderzoek.io/
- https://nesdo.onderzoek.io/wp-content/uploads/2016/08/Comijs-et-al-2011_design-NESDO_incl-erratum.pdf
- Cummins, N., Scherer, S., Krajewski, J., Schnieder, S., Epps, J., & Quatieri, T. F. (2015). A review of depression and suicide risk assessment using speech analysis. Speech Communication, 71, 10-49.
- Low, D. M., Bentley, K. H., & Ghosh, S. S. (2020). Automated assessment of psychiatric disorders using speech: A systematic review. Laryngoscope Investigative Otolaryngology, 5(1), 96-116.
- Cummins, N., Matcham, F., Klapper, J., & Schuller, B. (2020). Artificial intelligence to aid the detection of mood disorders. In Artificial Intelligence in Precision Health (pp. 231-255). Academic Press.
- Master’s Assignments: Design of Smart Objects for Hand Rehabilitation after Stroke @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL
Stroke affects many people and is one of the leading causes of death and disability worldwide [1]–[3] and in the Netherlands [4], [5]. The predicted acceleration of population ageing is expected to raise the absolute number of stroke survivors who need care [7]. 80% of all stroke patients suffer from function loss, need professional caregivers [8], [9], and experience lower quality of life due to their limited ability to participate in social activities, work and daily activities [10], [11].
The hand is the highly functional endpoint of the human arm, enabling a vast variety of daily activities related to high quality of life [12]. Only 12% of stroke patients recover arm and hand function in the first 6 months [13]. For the rest, the limited ability to use their hand has a financial and psychological impact on them and their families, as it limits the execution of daily activities [14]. A treatment with substantial evidence for its effectiveness is CIMT (Constraint-Induced Movement Therapy) [15]. CIMT usually employs intensive sessions focused on task-specific exercises, combined with constraining the unaffected hand, forcing patients to use their affected hand. CIMT relies on the principle of 'use it or lose it' [16] and requires patients to use their affected hand.
So far, attempts at creating effective home training methods have focused on the direct translation of clinical exercises to home training, by designing them to be executed regardless of the patient's location [17]. Monitoring with smart objects [18]–[21] compensates for the lack of direct supervision, and gaming and virtual reality elements have been added to make training more challenging [22]. Such methods assume that patients are motivated, able and willing to clear time in their schedule to engage in training, and/or to sit down at a specific location in their house to execute it. We need a new method that applies this principle in a more flexible way, by engaging people in clinically meaningful activities in their daily routine. This way, patients will seamlessly perform functional training activities at a much higher dose than can be achieved in clinics.
Our key objective is to develop a new method using smart objects in which training exercises will be seamlessly integrated into the daily routine of a patient at home.
This method will aim to use the performance on these activities as a functional training set over the day, leading to improved hand function and therefore motivation to perform the activities again in the future [23]–[25]. Patients will not have to schedule their training; the exercise will be part of their regular daily activities. We will do this by investigating a way of transferring clinical exercises to a home setting using smart objects. Smart objects can be integrated into the daily activities of patients and trigger (by design) a certain user behaviour. The focus in our proposal is for these objects to go beyond simply monitoring [18]–[21], and to create a stimulating environment where people feel invited to train and intrinsically motivated to perform the task again in the future. Think of a smart toothbrush designed to promote the use of the affected hand, enabling operation only when used by that hand! Fundamental research into the transferability of clinical hand rehabilitation to a smart-object home-based setting is needed to theoretically underpin our method. Using smart objects and artificial intelligence, personalised health will become more accessible, and the wealth of data will give future clinicians more flexibility and overall control of the rehabilitation process.
In this assignment, the master's student is expected to:
1. Review literature on existing technologies (sensors, actuators, AI, etc.) of smart objects for rehabilitation to identify gaps/opportunities
2. Specify the requirements for design of smart daily objects that can drive seamless rehabilitation with the use of technology
3. Design and validate a product concept in a co-design manner involving clinicians, users and developers
What do we offer?
We offer an interdisciplinary network of researchers who are experienced in, among other things, hand rehabilitation and rehabilitation technology (dr. ir. Kostas Nizamis-DPM), Artificial Intelligence, smart technology and stroke rehabilitation (dr. ir. Juliet A.M. Haarman-HMI), and behaviour change and design research (dr. Armağan Karahanoğlu-IxD). Additionally, the student will collaborate closely with clinicians from Roessingh Research & Development (RRD), who aspire to be the end users of the product.
Bibliography
[1] S. S. Virani et al., “Heart Disease and Stroke Statistics—2020 Update,” Circulation, vol. 141, no. 9, Mar. 2020.
[2] S. Sennfält, B. Norrving, J. Petersson, and T. Ullberg, “Long-Term Survival and Function After Stroke,” Stroke, 2019.
[3] E. R. Coleman et al., “Early Rehabilitation After Stroke: a Narrative Review,” Current Atherosclerosis Reports. 2017.
[4] “StatLine.” [Online]. Available: https://opendata.cbs.nl/statline/#/CBS/en/. [Accessed: 21-Apr-2020].
[5] C. M. Koolhaas et al., “Physical activity and cause-specific mortality: The Rotterdam study,” Int. J. Epidemiol., 2018.
[6] R. Waziry et al., “Time Trends in Survival Following First Hemorrhagic or Ischemic Stroke Between 1991 and 2015 in the Rotterdam Study,” Stroke, 2020.
[7] A. G. Thrift et al., “Global stroke statistics,” International Journal of Stroke. 2017.
[8] W. Pont et al., “Caregiver burden after stroke: changes over time?,” Disabil. Rehabil., 2020.
[9] P. Langhorne, F. Coupar, and A. Pollock, “Motor recovery after stroke: a systematic review,” The Lancet Neurology. 2009.
[10] M. J. M. Ramos-Lima, I. de C. Brasileiro, T. L. de Lima, and P. Braga-Neto, “Quality of life after stroke: Impact of clinical and sociodemographic factors,” Clinics, 2018.
[11] Q. Chen, C. Cao, L. Gong, and Y. Zhang, “Health related quality of life in stroke patients and risk factors associated with patients for return to work,” Medicine (Baltimore)., vol. 98, no. 16, p. e15130, Apr. 2019.
[12] R. Morris and I. Q. Whishaw, “Arm and hand movement: Current knowledge and future perspective,” Frontiers in Neurology, vol. 6, no. FEB, 2015.
[13] G. Kwakkel, B. J. Kollen, J. V. Van der Grond, and A. J. H. Prevo, “Probability of regaining dexterity in the flaccid upper limb: Impact of severity of paresis and time since onset in acute stroke,” Stroke, 2003.
[14] J. E. Harris and J. J. Eng, “Paretic Upper-Limb Strength Best Explains Arm Activity in People With Stroke,” Phys. Ther., 2007.
[15] G. Kwakkel, J. M. Veerbeek, E. E. H. van Wegen, and S. L. Wolf, “Constraint-induced movement therapy after stroke,” The Lancet Neurology. 2015.
[16] Y. Hidaka, C. E. Han, S. L. Wolf, C. J. Winstein, and N. Schweighofer, “Use it and improve it or lose it: Interactions between arm function and use in humans post-stroke,” PLoS Comput. Biol., vol. 8, no. 2, 2012.
[17] Y. Levanon, “The advantages and disadvantages of using high technology in hand rehabilitation,” Journal of Hand Therapy. 2013.
[18] M. Bobin, M. Boukallel, M. Anastassova, and M. Ammi, "Smart objects for upper limb monitoring of stroke patients during rehabilitation sessions," 2017.
[19] M. Bobin, F. Bimbard, and M. Boukallel, "Smart Health SpECTRUM: Smart ECosystem for sTRoke patient's Upper limbs Monitoring," Smart Health, vol. 13, p. 100066, 2019.
[20] M. Bobin, H. Amroun, M. Boukalle, and M. Anastassova, "Smart Cup to Monitor Stroke Patients Activities during Everyday Life," 2018 IEEE Int. Conf. Internet of Things, IEEE Green Comput. Commun., IEEE Cyber, Phys. Soc. Comput., IEEE Smart Data, pp. 189–195, 2018.
[21] G. Yang, J. Deng, G. Pang, H. Zhang, and J. Li, "An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning," IEEE J. Transl. Eng. Health Med., vol. 6, pp. 1–10, 2018.
[22] L. Pesonen, L. Otieno, L. Ezema, and D. Benewaa, "Virtual Reality in rehabilitation: a user perspective," pp. 1–8, 2017.
[23] A. L. Van Ommeren et al., “The Effect of Prolonged Use of a Wearable Soft-Robotic Glove Post Stroke - A Proof-of-Principle,” in Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, 2018.
[24] G. B. Prange-Lasonder, B. Radder, A. I. R. Kottink, A. Melendez-Calderon, J. H. Buurke, and J. S. Rietman, “Applying a soft-robotic glove as assistive device and training tool with games to support hand function after stroke: Preliminary results on feasibility and potential clinical impact,” in IEEE International Conference on Rehabilitation Robotics, 2017.
[25] B. Radder, “The Wearable Hand Robot: Supporting Impaired Hand Function in Activities of Daily Living and Rehabilitation,” University of Twente, Enschede, 2018.
- Supporting healthy eating @ HMI - Enschede, NL
Contact: Juliet Haarman (HMI – j.a.m.haarman@utwente.nl), Roelof de Vries (BSS – r.a.j.devries@utwente.nl)
Project Summary:
Eating is more than the consumption of food. Eating is often a social activity: we sit together with friends, family, colleagues and fellow students to connect, share and celebrate aspects of life. Sticking to a personal diet plan can be challenging in these situations; the social discomfort associated with having a different diet than the rest of the group contributes greatly to this. Additionally, it is well known that we unconsciously influence each other while we eat: not just in the type of food we choose, but even in the quantity of food we consume, or the speed with which we consume it.
A variety of assignments focusing on this topic is available. They are specified below.
The interactive dining table
The interactive dining table was created to open up the concept of healthy eating in a social context: one where individual table members feel supported, yet still experience a positive group setting. The table is embedded with 199 load cells and 8358 LED lights, located below the table top surface. Machine learning can be applied to the sensor data from the table to detect weight shifts over the course of a meal, identify individual bite sizes, and classify interactions between table members and food items. Simultaneously, the LEDs can be used to provide real-time feedback about eating behavior, give perspective on eating choices, or alter the ambience of the eating experience as a whole. Light interactions can change over time and between settings, depending on the composition of the group at the table or the type of meal being consumed.
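To make the sensing side concrete, here is a minimal sketch of how discrete weight events (servings added, bites taken) might be pulled out of a single load-cell trace by comparing medians around candidate change points. The sampling rate and thresholds are illustrative assumptions, not the table's actual firmware.

```python
import numpy as np
from scipy.signal import medfilt

def detect_weight_events(weights, fs=80, min_step_g=3.0, win_s=0.5):
    """Return (time_s, delta_g) for abrupt weight changes on one load cell.

    weights: 1-D numpy array in grams, sampled at fs Hz.
    A negative delta means weight was removed, e.g. a bite taken from a plate.
    """
    k = max(3, int(fs * 0.1) // 2 * 2 + 1)   # odd kernel, ~0.1 s of despiking
    w = medfilt(weights, kernel_size=k)
    half = int(fs * win_s)                   # comparison window on each side
    events, t = [], half
    while t < len(w) - half:
        delta = np.median(w[t:t + half]) - np.median(w[t - half:t])
        if abs(delta) >= min_step_g:
            events.append((t / fs, float(delta)))
            t += half                        # skip past this event
        else:
            t += 1
    return events
```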
An indication of the assignments that are possible within this topic:
- Adding intelligence to the table. Are we able to track the course of the meal over time? This includes questions such as: How much has been put on the plates of all table members? At what times have they taken a bite? How much are they putting on their fork? Are they going in for seconds? Etc.
Keywords: Machine learning, sensor fusion and finding signal characteristics
- Creating LED interactions that provide the user with feedback about his/her behavior. Which type of interactions work for a variety of target groups? How should interactions be shaped, such that a single subject in a group feels supported? How can we implicitly steer people towards healthy(er) behavior, without coercion, or without putting the emphasis of the meal on it?
Keywords: HCI, user experience, co-design, Unity, sensor signals
- Togetherness around eating or `commensality' is a relatively new direction for HCI research. Recent work has distinguished `digital commensality', eating together through digital technology, and `computational commensality', physical or mediated multimodal interaction around eating. We are currently exploring how commensality mediated by technology can be used to support dietary behavior change, in a broader concept than the interactive table alone. How does commensality influence dietary habits and how can this influence of commensality be used and acknowledged in dietary behavior change technology?
Keywords: behavior change strategies, behavior change technology, commensality, technology-mediated commensality
Wearables to automatically log eating behavior
Gaining insight into the current eating behavior of a person is a first step in accomplishing better health. Professionals still use conventional methods for this, such as logbooks: they ask the user to manually report on their eating behavior throughout the day. Memory and logging bias are not uncommon with this method. Often, users simply forget to write down what and when they have been eating. The presence of unknown ingredients in the food, difficulties in estimating portion size, and social discomfort while logging the food also affect the reliability of this method.
One way to lower the chance of bias is to use technology that automatically detects events of food intake. Accelerometers on the wrist, strain gauges on the jaw, and RIP sensors that monitor the breathing signal of the subject are examples of technologies used to identify intake gestures and chewing/swallowing movements, indicating that the user is eating. Many of these technologies have not yet been tested outside a standardized laboratory environment, and therefore their practical validity is often unknown, and should be investigated.
- We are currently investigating several detection methods individually, and as a combination. Which methods work well in what type of situations? What type of data processing steps should be taken to get there? We are still measuring at lab level and want to bring this to an in-the-wild setting. (A small sketch of such a processing pipeline follows below.)
Keywords: data gathering, user testing, data processing, machine learning
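As referenced above, a typical first processing step is to cut the continuous sensor stream into fixed-length windows and classify each window as an intake gesture or not. The sketch below does this for a wrist accelerometer; window length, hop size, and feature choices are assumptions for illustration.

```python
import numpy as np

def windows(acc, fs=50, win_s=2.0, hop_s=0.5):
    """Slide a fixed-length window over (n_samples, 3) x/y/z acceleration at fs Hz."""
    w, h = int(fs * win_s), int(fs * hop_s)
    for start in range(0, len(acc) - w + 1, h):
        yield acc[start:start + w]

def window_features(win):
    """Per-axis mean/std plus magnitude statistics (orientation-independent)."""
    mag = np.linalg.norm(win, axis=1)
    return np.concatenate([win.mean(axis=0), win.std(axis=0), [mag.mean(), mag.std()]])

acc = np.random.randn(1500, 3)  # stand-in for a real wrist recording
X = np.array([window_features(w) for w in windows(acc)])

# With per-window "intake gesture" annotations from lab recordings, a
# classifier (e.g. sklearn's RandomForestClassifier) can then be trained
# on X and later evaluated in the wild.
```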
Cooking skills and kitchen habits
Eating is often the end point of preparing a meal. Eating healthy often starts with cooking healthy and picking your ingredients. But what if you do not have the advanced cooking skills needed to follow a certain recipe or prepare the ingredients correctly? What if your cooking skills hold you back from trying out new recipes? What if your perception of your eating habits differs from your actual eating habits? For instance, what if your kitchen habits are such that you always grab a bag of chips once you arrive home from work, or you consume snacks very often while you think this only happens occasionally during the week?
By tracking and processing data gathered in and around the kitchen area, we could gain better insights into the eating habits of individuals. This might be an important step in supporting the individual towards healthier behavior.
- We are currently investigating several technologies and measurement set-ups that are needed to support this type of research. What type of sensors should be placed at what locations in the house? How do they communicate with each other? What type of information should be gathered, and what could serve as a trigger for feedback towards the user? In what way can we support the user in choosing different ingredients, trying out new recipes, or breaking their unwanted eating habits?
Keywords: Design of sensor systems, prototyping, data gathering, data processing, user interactions
Operationalizing behavior change strategies
We are exposed daily to many strategies that try to support or influence us in changing our behavior. Just think of the app that wants you to set a ‘goal for the week’ (or sets it for you), or the website that informs you that ‘there is only one left!’. These features are usually based on a theoretical understanding of what influences us. For example, a goal setting feature can be based on goal setting theory. However, goal setting theory argues that for goals to motivate us, they have to be feasible as well as challenging. Another theory that is used as a theoretical underpinning of a feature is social comparison theory, which argues that people can be motivated by comparing themselves to others (upward, downward, or lateral comparison). An example of how this is implemented is a leaderboard, where you can see how you are doing with respect to a certain statistic. However, is a leaderboard really a good operationalization of social comparison theory? And is an app with a textbox where you can set a goal really a good operationalization of goal setting theory? What can we learn about the theories behind these features when the features work or do not work?
- These are questions that we would like to see investigated, both for the theories and features used as examples above and for other features and theories.
Keywords: behavior change theory, design, behavior change strategies, behavior change technology
- CHATBOTS FOR HEALTHCARE – THE eCG FAMILY CLINIC @ HMI - Enschede, NL in cooperation with UMCU - Utrecht, NL
In collaboration with Universitair Medisch Centrum Utrecht we will design and develop the eCG family clinic: the electronic Cardiovascular Genetic family clinic to facilitate genetic screening in family members. In inherited cardiovascular diseases, first-degree relatives are at 50% risk of inheriting the disease-causing mutation. For these diseases, preventive measures and treatment options are readily available and effective. Relatives may undergo predictive DNA testing to find out whether they carry the mutation. More than half of at-risk relatives do not attend genetic counselling and/or cardiac evaluation.
In order to increase the group of people that will attend the genetic counseling and/or cardiac evaluation the eCG family clinic will be developed. eCG Family Clinic is an online platform where family members are provided with general information (e.g. on the specific family disease, mode of inheritance, pros and cons of genetic testing and the testing procedure). The users of the platform will be able to interact with a chatbot.
Within this research project we have student assignments available such as:
· Designing and developing a chatbot and its functions and roles within the platform
· Translating current treatment protocols into prototypes of the chatbot (a toy illustration follows below)
· Evaluating user experience and user satisfaction
We are open to alternative assignments or perspectives on the example assignments above.
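Purely as an illustration of what 'translating a protocol into a chatbot prototype' could look like at its very simplest, here is a hypothetical scripted flow in Python; the states and wording are invented and do not come from the actual counselling protocols.

FLOW = {
    "start": ("Would you like information about the DNA test?",
              {"yes": "explain_test", "no": "closing"}),
    "explain_test": ("The test checks whether you carry the family mutation. "
                     "Shall I explain the pros and cons?",
                     {"yes": "pros_cons", "no": "closing"}),
    "pros_cons": ("Knowing your status enables preventive care; some people find "
                  "the wait for results stressful. More questions?",
                  {"yes": "start", "no": "closing"}),
    "closing": ("Thank you. You can always return to this platform.", {}),
}

state = "start"
for answer in ["yes", "yes", "no"]:          # a simulated user
    prompt, transitions = FLOW[state]
    print("BOT:", prompt)
    state = transitions.get(answer, "closing")
print("BOT:", FLOW[state][0])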
Contact person: Randy Klaassen, r.klaassen@utwente.nl
- Touch Interactions and Haptics @ HMI - Enschede, NL
In daily life, we use our sense of touch to interact with the world and everything in it. Yet, in Human-Computer Interaction, the sense of touch is somewhat underexposed; in particular when compared with the visual and auditory modalities. To advance the use of our sense of touch in HCI, we have defined three broad themes in which several assignments (Capita Selecta, Research Topics, Graduation Projects) can be defined.
Designing haptic interfaces
Many devices use basic vibration motors to provide feedback. While such motors are easy to work with and sufficient for certain applications, advances in current manufacturing technologies (e.g. 3D printing) and in electronics provide opportunities for creating new forms of haptic feedback. Innovative forms of haptic feedback may even open up completely new application domains. The challenge for the students is twofold: 1) exploring the opportunities and limitations of (combinations of) materials, textures, and (self-made) actuators, and 2) coming up with potential use cases.
Multimodal perception of touch
The experience of haptic feedback may not only be governed by what is sensed through the skin, but may also be influenced by other modalities; in particular by the visual modality. VR and AR technologies are prime candidates for studying touch perception, and haptic feedback is even considered ‘the holy grail’ for VR. Questions surrounding for instance body ownership in VR, or visuo-haptic illusions in VR (e.g. elongated arms, a third arm) can be interesting starting points for developing valuable multimodal experiences, and for studying the multimodal perception of touch.
Touch as a social cue
Research in psychology has shown that social touch (i.e. being touched by another person) can profoundly influence both the toucher and the recipient of a touch (e.g. decreasing stress, motivating, or showing affect). Current technologies for remote communication could potentially be enriched by adding haptic technology that allows for social touch interactions to take place over a distance. In addition, with social robots becoming more commonplace in both research and everyday life, the question arises how we should engage in social touch with such social robots in a beneficial, appropriate and safe manner. Applications of social touch technology can range from applications related to training and coaching, to entertainment, and to providing care and intimacy. Potential projects in this domain could focus on the development of new forms of social touch technology (interactions), and/or on the empirical investigation of the effects such artificial social touch interactions can have on people.
Contact: Dirk Heylen
- Wearables and tangibles assisting young adults with autism in independent living @ IDE - Enschede, NL
In this project we seek socially capable and technically smart students with an interest in technology and health care, to investigate how physical-digital technology may support young adults with autism (age 17-22) in developing independence in daily living. In this project we build further on insights from earlier projects such as Dynamic Balance and MyDayLight.
(see more about both projects here: http://www.jellevandijk.org/embodiedempowerment/ )
Your assignment is to engage in participatory design in order to conceptualize, prototype and evaluate a new assistive product concept, together with young adults with autism, their parents, and health professionals. You can focus more on the design of concepts, the prototyping of concepts, technological work on building an adaptive flexible platform that can be personalized by each individual user, or working on developing the ‘co-design’ methods we use with young adults with autism, their parents, and the care professionals.
As a starting point we consider opportunities of wearables with bio-sensing in combination with ambient intelligent objects (internet-of-things e.g. interactive light, ambient audio) in the home.
The project forms part of a research collaboration with Karakter, a large youth psychiatric health organization, and various related organizations, who will provide participating families. One goal is to present a proof-of-concept of a promising assistive device; another goal is to explore the most suitable participatory design methods in this use context. Depending on your interests you can focus more on the product or on the method. The ultimate goal of the overall research project is to realize a flexible, adaptive interactive platform that can be tailored to the needs of each individual user; this master project is a first step in that direction.
Contact: jelle.vandijk@utwente.nl
- Interactive Surfaces and Tangibles for Creative Storytelling @ HMI - Enschede, NL
In the research project coBOTnity, a collection of affordable robots (called surfacebots) was developed for use in collaborative creative storytelling. Surfacebots are moving tablets embodying a virtual character. Using a moving tablet allows us to show a digital representation of the character's facial expressions and intentions on screen, while also allowing it to move around in a physical play area.
The surfacebots offer diverse student assignment opportunities in the form of Capita Selecta, HMI project, BSc Research or Design project, or MSc thesis research. These assignments can deal with technology development aspects, empirical studies evaluating the effectiveness of some existing component, or a balance of both types of work (technology development + evaluation).
As a sample of what could be done in these assignments (though not limited to this), students could be interested in developing new AI for the surfacebot to become more intelligent and responsive in the interactive space, studying interactive storytelling with surfacebots, developing mechanisms to orchestrate multiple surfacebots as a means of expression (e.g. to tell a story), evaluating strategies to make the use of surfacebots more effective, developing and evaluating an application to support users' creativity/learning, etc.
You can find more information about the coBOTnity project at: https://www.utwente.nl/ewi/hmi/cobotnity/
Contact: Mariët Theune (m.theune@utwente.nl)
- Interpersonal engagement in human-robot relations @ HMI - Enschede, NL
Modern media technology enables people to have social interactions with the technology itself. Robots are a new form of media that people can communicate with as independent entities. Although robots are becoming naturalized in social roles involving companionship, customer service and education, little is known about the extent to which people can feel interpersonal closeness with robots and how social norms around close personal acts apply to robots. What behaviors do people feel comfortable to engage in with robots that have different types of social roles, like companion robot, customer service robot and teacher robot? Will robots that people can touch, talk to, lead and follow result in social acceptance of behaviors that express interpersonal closeness between a person and a robot? Are such behaviors intrinsically rewarding when done with a responsive robot?
Contact: Dirk Heylen
- Sports, Data, and Interaction: Interaction Technology for Digital-Physical Sports Training @ HMI - Enschede, NL
The proposed project focuses on new forms of (volleyball and other) sports training. Athletes perform training exercises in a “smart sports hall” that provides high-quality video display across the surface of the playing field and has unobtrusive pressure sensors embedded in the floor, or using smart sports setups such as immersive VR with a rowing machine. A digital-physical training system offers tailored, interactive exercise activities. Exercises incorporate visual feedback from the trainer as well as feedback given by the system. They can be tailored through a combination of selecting the most fitting exercises and setting the right parameters. This allows the exercises to be adapted in real time in response to the team's behaviour and performance, and to be selected and parameterized to fit the athletes' levels of competition and the demands of, e.g., youth sport.
To this end, expertise from the domains of embodied gaming and of instruction and pedagogy in sports training are combined. Computational models are developed for the automatic management of personalization and adaptation; initial validation of such models is done by repeatedly evaluating versions of the system with athletes of various levels. We collect, and automatically analyse, data from the sensors to build continuous models of the behaviour of individual athletes as well as the team. Based on this data, the trainer or system can instantly decide to change the ongoing exercises, or provide visual feedback to the team via the displays and other modalities. In extrapolation, we foresee future development towards higher competition performance for teams, by building upon the basic principles and systems developed in this project.
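As a hedged illustration of the adaptation principle, and not the project's actual computational models, the simplest possible version is a staircase rule that nudges exercise difficulty after each attempt:

def adapt_difficulty(level, success, step=0.1):
    """1-up/1-down staircase: harder after success, easier after failure."""
    return min(1.0, level + step) if success else max(0.0, level - step)

level = 0.5
for outcome in [True, True, False, True]:   # simulated rally outcomes
    level = adapt_difficulty(level, outcome)
    print(round(level, 2))                  # 0.6, 0.7, 0.6, 0.7

The research interest lies in replacing such flat rules with models that take team behaviour, competition level, and sports pedagogy into account.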
Assignments in this project can be done on user studies, automatic behaviour detection from sensors, novel interactive exercise design, and any other topic.
Contact person: Dees Postma, d.b.w.postma@utwente.nl, Dennis Reidsma, d.reidsma@utwente.nl
- Dialogue and Natural Language Understanding & Generation for Social and Creative Applications @ HMI - Enschede, NL
Applications involving the processing and generation of human language have become increasingly better and more popular in recent years; think for example of automatic translation and summarization, or of the virtual assistants that are becoming a part of everyday life. However, dealing with the social and creative aspects of human language is still challenging. We can ask our virtual assistant to check the weather, set an alarm or play some music, but we cannot have a meaningful conversation with it about what we want to do with our life. We can feed systems with big data to automatically generate texts such as business reports, but generating an interesting and suspenseful novel is another story.
At HMI we are generally open to supervising different kinds of assignments in the area of dialogue and natural language understanding & generation, but we are specifically interested in research aimed at social and creative applications. Some possible assignment topics are given below.
Conversational agents and social chatbots. The interaction with most current virtual assistants and chatbots (or 'conversational agents') is limited to giving them commands and asking questions. What we want is to develop agents you can have an actual conversation with, and that are interesting to engage with. An important question here is: how can we keep the interaction interesting over a longer period of time? Assignments in this area can include question generation for dialogue (so the agent can show some interest in what you are telling them), story generation for dialogue (so the agent can make a relevant contribution to the current conversation topic) and user modeling via dialogue (so the agent can get to know you). The overall goal is to create (virtual) agents that show verbal social behaviours. In the case of embodied agents, such as robots or virtual characters, we are also interested in the accompanying non-verbal social behaviours.
Affective language processing or generation. Emotions are part of everyday language, but detecting emotions in a text, or having the computer produce emotional language, are still challenging tasks. Assignments in this area include sentiment analysis in texts, for Dutch in particular, and generating emotional language, for example in the context of games (emotional character dialogue or 'flavor text' as explained below) or in the context of automatically generated soccer reports.
Creative language generation. Here we can think of generating creative language such as puns, jokes, and metaphors but also stories. It is already possible to generate reports from data (for example sports or game-play data) but such reports tend to be boring and factual. How can we give them a more narrative quality with a nice flow, atmosphere, emotions and maybe even some suspense? Instead of generating non-fiction based on real-world data, another area is generating fiction. An example is generating so-called 'flavor text' for use in games. This is text that is not essential to the main game narrative, but creates a feeling of immersion for the player, such as fictional newspaper articles and headlines or fake social media messages related to the game. Another example of fiction generation is the generation of novel-length stories. Here an important challenge is how to keep the story coherent, which is a lot more difficult for long texts than for short ones.
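For concreteness, the simplest conceivable baseline for flavor text is template filling; the sketch below is invented for illustration, and assignments would aim well beyond it, e.g. at data-driven generation with narrative coherence.

import random

TEMPLATES = ["{place} shaken by reports of {event}",
             "Locals of {place} demand answers after {event}",
             "BREAKING: {event} near {place}"]
PLACES = ["Ironhaven", "the Old Docks", "Mirefield"]
EVENTS = ["a dragon sighting", "mysterious lights", "a vanished caravan"]

random.seed(3)  # reproducible sampling
for _ in range(3):
    print(random.choice(TEMPLATES).format(place=random.choice(PLACES),
                                          event=random.choice(EVENTS)))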
Contact: Mariët Theune (m.theune@utwente.nl)
- Group activity detection @ HMI - Enschede, NL
Social interaction is necessary for both mental and physical health, and participating in group activities encourages social interaction. While there are opportunities to attend a variety of group activities, some people prefer solitary activities. This project aims to design an algorithm to extract patterns of group and solitary activities from GPS (Global Positioning System) and motion sensors, including accelerometer, gyroscope, and magnetometer. The extracted pattern would enable us to detect whether an individual is involved in a group or a solitary activity.
This project is defined within a larger project, namely the Schoolyard project. In the schoolyard project, we captured data from pupils in the school playground during the break via GPS and motion sensors. The collected data will be used to validate the designed algorithm. You need to be creative in designing the method to cover different types of group activities in the playground including parallel games (e.g., swings), ball games (e.g., football), tag games (e.g., catch and run), etc.
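As a hedged starting point (not the algorithm to be designed), spatial co-location alone already separates some group play from solitary play; the sketch below clusters a single timestamp of positions, with all coordinates and thresholds invented.

import numpy as np
from sklearn.cluster import DBSCAN

def group_labels(positions_m, radius_m=3.0):
    """positions_m: (n_children, 2) x/y in metres at one timestamp.
    Returns cluster labels; -1 means solitary, other values are group ids."""
    return DBSCAN(eps=radius_m, min_samples=2).fit_predict(positions_m)

# Three children playing together and one child off on their own
snapshot = np.array([[0.0, 0.0], [1.5, 0.5], [0.8, 2.0], [25.0, 30.0]])
print(group_labels(snapshot))  # [0 0 0 -1]

A real method would add the temporal dimension and the motion sensors, e.g. to tell parallel swinging apart from a shared ball game.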
The research involves steps such as:
· Literature review
· Data preparation and identifying benchmark datasets
· Designing an algorithm to identify the group activity patterns
· Validating the results via ground truth, simulated data, or benchmark datasets
We are looking for candidates that match the following profile:
· Having a creative mindset
· Strong programming skills in Python
Many recent studies have focused on detecting group activities from videos. However, using video to detect activity is computationally expensive and raises serious privacy concerns. Below is a paper related to this topic, which used motion sensors together with beacons to identify group activity.
https://www.sciencedirect.com/science/article/pii/S0360132319303348
For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: m.baratchi@liacs.leidenuniv.nl.
You will be jointly supervised by Dr. Gwenn Englebienne, Assistant Professor, University of Twente, with external supervision from Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, Ph.D. candidate at Leiden University.
- A Framework for Longitudinal Influence Measurement between Spatial Features and Social Networks @ HMI - Enschede, NL
The features of the environment may encourage or discourage social interactions among people. The question is how environmental features influence social participation, and how this influence may vary over time. To answer this question, you need to design a framework that combines features of the spatial network with the parameters of the social network, while addressing the longitudinal characteristics of such a combination.
To the best of our knowledge, no study has been conducted on analyzing the longitudinal influence between social networks and spatial features of the environment.
This project is defined within a larger project, namely the Schoolyard project. In the Schoolyard project, we observed the behavior of children in a playground using RFID tags and GPS loggers. The RFIDs are used to build a social network. The longitudinal influence between the social network and spatial features may be analyzed in three stages: 1) before the renovation of the playground, 2) after the renovation, and 3) after adaptation of the playground. The collected data can be used to validate the designed framework.
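As one deliberately simple building block for such a framework, offered here only as a hedged illustration, one could start from lagged correlations between a weekly spatial-feature series and a weekly network-density series; the data and names below are synthetic.

import numpy as np

def lagged_corr(spatial, density, max_lag=5):
    """Pearson correlation at lags 0..max_lag (positive lag means the
    spatial feature leads the network measure)."""
    n = len(spatial)
    return {lag: np.corrcoef(spatial[:n - lag], density[lag:])[0, 1]
            for lag in range(max_lag + 1)}

weeks = np.arange(20)
spatial = np.sin(weeks / 3) + np.random.normal(0, 0.1, 20)    # e.g. zone usage
density = np.roll(spatial, 2) + np.random.normal(0, 0.1, 20)  # trails by 2 weeks
print(lagged_corr(spatial, density))  # correlation should peak at lag 2

A genuine framework would replace this with multilevel time series models over the full spatial and social networks.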
We are looking for candidates that match the following profile:
· Knowledge about network analysis
· Knowledge about multilevel time series analysis
· Strong programming skills in Python
The paper below presents a general framework for measuring the dynamic bidirectional influence between communication content and social networks. The authors used a publication database to build the social network, and studied its relationship with the communication content longitudinally.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.208.4144&rep=rep1&type=pdf
For information about the Schoolyard project, you can contact Mitra Baratchi, Assistant Professor, email: m.baratchi@liacs.leidenuniv.nl
You will be jointly supervised by Dr. Shenghui Wang, Assistant Professor, University of Twente, with external supervision from Dr. Mitra Baratchi, Assistant Professor, Leiden Institute of Advanced Computer Science (LIACS), and Maedeh Nasri, Ph.D. candidate at Leiden University.
- Computer vision or VR for Teleoperated Robots
Description
Teleoperated robots enable humans to be remotely present in the world, to perform (maintenance) tasks or to be socially present. This has many applications and benefits, as operators can apply their expertise without the need to travel to a possibly remote or dangerous environment. When there are time delays in teleoperated systems, for example because of networking issues or physical distance, they become very difficult to use. There are multiple strategies that can be employed to deal with these difficulties. These approaches require interpreting how humans interact with the environment, and this is the field in which you can situate your study.
Some examples of specific assignments you can do are
- Compare image segmentation (neural network) models to identify what surroundings we are dealing with and what features are visible. This involves a theoretical comparison with (optionally) a practical component in which several models can be tested and compared; see the sketch below this list for one possible starting point.
- Set up VR user studies to examine how people orient themselves and manipulate objects in VR, with the goal of transferring this knowledge to the teleoperated robotics space. This can relate to visual orientation or to object manipulation.
- Set up a study with a teleoperated robot to investigate the effects of time delays on several aspects of the interaction. This line requires both research and technical skills.
Other assignments in this field can be discussed based on your skillset.
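As a minimal sketch of the practical component of the first example assignment, assuming PyTorch and torchvision are available: a real comparison would loop over several models and score per-class IoU on labelled frames rather than inspect a random tensor.

import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = torch.rand(3, 480, 640)  # stand-in for a camera frame (C, H, W) in [0, 1]
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]  # (1, n_classes, H, W)
pred = out.argmax(dim=1)
present = [weights.meta["categories"][i] for i in pred.unique().tolist()]
print(present)  # names of the classes detected in the frame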
Contact: Luc Schoot Uiterkamp l.schootuiterkamp@utwente.nl
Second supervisor: Gwenn Englebienne g.englebienne@utwente.nl
COMPANIES, EXTERNAL RESEARCH INSTITUTES, AND END USER ORGANISATIONS
Here you find some of the organisations that are willing to host master students from HMI. Keep in mind that you are not allowed to have both an external (non-research institute) internship and an external final assignment. If you work for a company that is interested in providing internships or final assignments please contact D.K.J.Heylen[at]utwente.nl
- Biobank catalogue using automated informational retrieval and AI solutions @ Amsterdam UMC, NL
Background
Cancer Center Amsterdam connects cancer researchers and care professionals within Amsterdam UMC. To facilitate translational cancer research (i.e. the validation of laboratory findings within a clinical context, for example using blood or urine samples collected from patients with cancer) a central biobanking organization has been established. This Liquid Biopsy Center (LBC) collects blood and urine samples through centralized logistics and harmonized protocols and makes these samples available to cancer researchers. For good research, it is of the greatest importance that samples are clinically annotated with relevant information on diagnosis, disease stage, and treatment interventions and outcome. This information is recorded in the hospital electronic health record (EHR), mostly in the form of free text. Consequently, clinical data management is still mostly done through manual data entering into a clinical database. Automated solutions are needed to improve this labor-intensive and inefficient way of data management.
Methods
LBC supports 16 biobank projects based on related tumor types (for example lung cancer, colon cancer, hematological tumors). Over all projects, more than 4500 patients donated almost 9000 samples that are stored in the freezers for future research projects. Using extractions of structured data from the EHR, a first setup of a comprehensive sample dashboard has been created for the lung cancer biobank using Power BI. Retrieval of specific samples through this dashboard can be improved considerably if information stored in the EHR as free text, for example from radiology or pathology reports, can be added. This project aims to improve the existing dashboard as well as to expand and tailor it to other biobanks.
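As a taste of the free-text side, the sketch below pulls TNM tumour-stage mentions out of a report with a regular expression; the example report is invented, and real (Dutch) clinical text is far messier, which is exactly why more advanced information extraction and AI solutions are needed.

import re

# Optional p/y/c prefix, then the T, N and M categories of the TNM staging system.
TNM = re.compile(r"\b[pyc]?T[0-4isx][a-c]?\s*N[0-3x][a-c]?\s*M[01x]\b", re.IGNORECASE)

report = "Conclusie: adenocarcinoom, stadium pT2 N0 M0, radicaal verwijderd."
# ("Conclusion: adenocarcinoma, stage pT2 N0 M0, radically removed.")
print(TNM.findall(report))  # ['pT2 N0 M0']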
End product
The envisioned end product is a user dashboard that couples all available information from different databases, including free text from the EHR. The dashboard can be used to retrieve specific samples requested by cancer researchers for use in a dedicated research project. The first setup in Power BI can be used as a basis for this project. Use of other platforms can be discussed, but restrictions involving working with sensitive health data apply. This means that data cannot leave the secured hospital environment and only platforms that work within the (remote) hospital server can be used. If successful, the end product will be widely shared with researchers to set an example of improved data management solutions.
Learning experience
This project offers the student the chance to get acquainted with the academic health care and research sector, along with its promises and pitfalls regarding the use of health care data for research, in this instance specifically cancer biobank research. Innovations in the use of data for improved health care planning and research are urgently needed, and at the same time hampered by strict laws and regulations. This project offers the opportunity to learn about all aspects of medical research while working on technical solutions that will actually improve research.
Contact person Human Media Interaction (UT):
Shenghui Wang, shenghui.wang@utwente.nl
- Leveraging free text data written by users of a mental eHealth application @ Roessingh Research and Development – Enschede, NL
Description
Roessingh Research and Development (RRD) is an internationally recognized impact lab for personalized health technology, with expertise in user-oriented health technology focused on rehabilitation, sports and health management. Together with eight international partners in Switzerland, Portugal, and The Netherlands, RRD has been developing an online service for mourning older adults that features a virtual agent. This service is called LEAVES. LEAVES has a modular structure and consists of 10 content modules about grief, including a number of therapeutic exercises to support the users' healing process. The exercises are usually answered in free text format by the user in the application.
At RRD, you have the opportunity to explore the potential of free texts written by its users for personalization and monitoring purposes. Ultimately, everyone takes a different road through the valley of grief, but nobody should walk it alone, and automatic text analysis can be a good alternative to more obtrusive questioning methods for personalizing an application such as LEAVES to its users.
The text data in this assignment stems from an evaluation study of LEAVES conducted between February and November 2022 in the Netherlands. The data set contains text data written by 21 users distributed over 43 exercises, with a total of ca. 30,000 words (excluding punctuation, but including stop words). All text data is written in Dutch.
We are looking for enthusiastic students with a background in computer science and/or human-computer interaction, an interest in (mental) eHealth, and experience with natural language processing (NLP) techniques, both theoretically and programmatically. Speaking Dutch is very desirable because all text data is in Dutch.
Exemplary assignment topics include, but are not limited to, the following (a small illustrative sketch follows the list):
- Exploring how well a pre-trained sentiment analysis model can distinguish different levels of suffering among users, as indicated by their initial grief scores.
- Predicting user characteristics at the moment they start using the application, such as how long ago the loss occurred (based on, for example, sentiments expressed in free text, or the order in which people answered the exercises).
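A minimal sketch of the first topic, assuming the transformers package and a publicly available multilingual sentiment model that covers Dutch; whether such off-the-shelf scores track grief severity at all is precisely the research question, and the example sentence is invented.

from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="nlptown/bert-base-multilingual-uncased-sentiment")
texts = ["Ik mis haar elke dag, maar het schrijven helpt een beetje."]
# ("I miss her every day, but the writing helps a little.")
print(classifier(texts))  # e.g. [{'label': '3 stars', 'score': ...}]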
Contact: Lena Brandl, l.brandl@{rrd.nl, utwente.nl}
Company website: https://www.rrd.nl/en/
LEAVES project website: https://www.leaves-project.eu/
Type of assignment: Graduation project (MSc, 30EC), Internship (MSc, 20EC)
References:
van Velsen, L., Cabrita, M., Op den Akker, H., Brandl, L., Isaac, J., Suárez, M., ... & Canhão, H. (2020). LEAVES (optimizing the mentaL health and resiliencE of older Adults that haVe lost thEir spouSe via blended, online therapy): Proposal for an Online Service Development and Evaluation. JMIR Research Protocols, 9(9), e19344.
Brodbeck, J., Jacinto, S., Gouveia, A., Mendonça, N., Madörin, S., Brandl, L., ... & van Velsen, L. (2022). A Web-Based Self-help Intervention for Coping With the Loss of a Partner: Protocol for Randomized Controlled Trials in 3 Countries. JMIR Research Protocols, 11(11), e37827.
- Virtual Reality Safety Training @ Heijmans - Rosmalen, NL
Heijmans is a listed company that combines activities related to property development, construction & technical services and infrastructure in the fields of Living, Working and Connecting. Heijmans realizes projects for home buyers, companies and government entities and together we are building on the spatial contours of tomorrow.
Heijmans applies Extended Reality (XR) in several ways, of which Virtual Reality safety training is currently the most widely adopted application of XR. Other XR applications that we are exploring are Mixed Reality Mock-ups, Augmented Reality (tablets) on construction sites and multi-stakeholder collaboration in VR Building Information Models.
Our VR safety training modules are specifically made for Heijmans workmen and are aimed at improving safety awareness and proactive safety behavior. In these modules, workmen go through a scenario in VR that can end in a virtual incident if one or more procedural steps regarding safety are performed incorrectly. In this way, workmen learn by doing, and unsafe work situations can be simulated in a safe environment.
We are currently looking for students who can enrich our understanding of topics such as, but not limited to:
- Learning effect of VR safety modules
- User experience of VR safety modules
- Multiplayer coordination / effect in VR safety training
If you are interested in exploring XR interaction in a large company with an interesting target group, please contact us. All ideas are welcome; we are sure that we can set up a great research case together!
Contact Heijmans: Thomas Smits (tsmits2@heijmans.nl) and Bert Weeda (bweeda@heijmans.nl)
Contact HMI: Dennis Reidsma (d.reidsma@utwente.nl)
Article about VR at Heijmans: Veiliger werken dankzij virtual reality ('Working more safely thanks to virtual reality') | Heijmans N.V.
- iTech for Understanding: Making Complex Industrial Production Processes Insightful through Interactive Technology @ Trumpf - Hengelo, NL
Trumpf is a global high-tech company which specialises in machine tools and lasers. Their software solutions pave the way towards ’Smart Factories’, facilitating the implementation of high-tech processes in industrial electronics.
With the development and evolution of their technology, the products and services that Trumpf offers become increasingly complex to understand. As the technology becomes more advanced, the challenge of communicating their unique offerings grows too. That is why Trumpf is looking for interns who are able to interactively communicate intricate products and production processes to an interested audience. For this internship, you will design interactive technology (potentially through the use of VR) to give Trumpf’s technology a human touch. Your work will be displayed during the TechniShow, the trade event for industrial technology.
More information about Trumpf can be found at https://www.trumpf.com/nl_NL/
An internship allowance applies.
For more information please reach out to:
Dees Postma - University of Twente (d.b.w.postma@utwente.nl)
- Personalization of a mental eHealth intervention for older adults who lost their spouse @ Roessingh Research and Development – Enschede, NL
Description
Roessingh Research and Development (RRD) is an internationally recognized scientific research institute with a focus on rehabilitation technology and eHealth, working on current and future innovations in rehabilitation and chronic care. RRD occupies a unique position between the university and healthcare practice. Together with eight international partners in Switzerland, Portugal, and The Netherlands, RRD is in the midst of developing an online service for mourning older adults that features a conversational agent and adapts its content to the user’s preferences and (clinical) needs. This service is called LEAVES.
At RRD, you have the opportunity to work on the personalization of an online self-help mental eHealth application for older adults who lost their spouse. Ultimately, everyone takes a different road through the valley of grief, but nobody should walk it alone.
In a previous study, RRD brainstormed adaptation mechanisms with an international expert panel in the field of grief and eHealth, involving experts from academia and clinical practice alike. The study yielded a conceptual adaptation model. In this project, you will apply the adaptation model to the recently deployed minimum viable product (MVP) of LEAVES. The model defines four types of adaptations on an abstract level (e.g., changing the order in which the user is exposed to intervention content). Applying the model to LEAVES involves specifying how its mechanisms can be implemented in LEAVES, (iteratively) building software and/or design prototypes (visual and/or functional) based on our current code base, and evaluating your prototypes in a qualitatively rich fashion with our target group (i.e., (mourning) older adults) and/or clinical experts, with respect to user experience and/or expected clinical efficacy.
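Purely as a toy illustration of the 'changing the order of intervention content' adaptation type, and emphatically not the actual LEAVES adaptation model, one could imagine ranking modules by how well they match a user's reported needs; the module names below are invented.

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    themes: set

MODULES = [Module("Coping with loneliness", {"loneliness"}),
           Module("Sleep and grief", {"sleep"}),
           Module("Telling your story", {"narrative"})]

def reorder(modules, user_needs):
    # Modules matching the user's reported needs float to the front; ties keep order.
    return sorted(modules, key=lambda m: -len(m.themes & user_needs))

for m in reorder(MODULES, {"sleep"}):
    print(m.name)  # "Sleep and grief" comes first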
We are looking for enthusiastic students with a background in design, computer science, psychology, or human-computer interaction, an interest in (mental) eHealth, and (software, UI) prototyping skills. Speaking Dutch is not required, but desirable. In addition, for the duration of the graduation project, you will be part of the LEAVES consortium, meaning that you will join our international project meetings and gain insight into the activities of a European research project.
Contact: Lena Brandl, l.brandl@{rrd.nl, utwente.nl}
Company website: https://www.rrd.nl/en/
LEAVES project website: https://www.leaves-project.eu/
References:
van Velsen, L., Cabrita, M., Op den Akker, H., Brandl, L., Isaac, J., Suárez, M., ... & Canhão, H. (2020). LEAVES (optimizing the mentaL health and resiliencE of older Adults that haVe lost thEir spouSe via blended, online therapy): Proposal for an Online Service Development and Evaluation. JMIR Research Protocols, 9(9), e19344.
- Internships at Soundlab, for music/sound-minded creative designers and builders @ Wilmink theatre, ArtEZ conservatory Enschede and HMI, NL
Wilmink theatre and ArtEZ conservatory Enschede collaborate in the development of Soundlab Enschede: a semi-permanent workshop for sound exploration and interactive sound experience. At Soundlab, children and adults explore sound and music through existing and new acoustic and technology-enhanced or digital musical instruments, and interactive sound installations.
Two years ago we started with the development of Soundlab. Over the course of four months, five music teachers-in-training and a UT student of the MSc Interaction Technology collaboratively created a singing app and a music game for the interactive projection floor at the entrance of the DesignLab. Last year this was continued with seven music teachers-in-training and two UT students of the MSc Interaction Technology, who collaboratively created interactive sound installations.
This year we want to continue development and expand the number of available technologies visitors of Soundlab can explore. Therefore, we have two internships vacant.
As an intern
- You will help to innovate music education
- You will work in collaboration with ArtEZ students (music teachers) and supervisors of Soundlab (both ArtEZ and Wilmink Theatre)
- You will work partly at the Wilmink theatre and ArtEZ Conservatory Enschede (Muziekkwartier), and partly at the UT (e.g. DesignLab)
- You will participate in (further) developing and building the interaction technology of previous years
- You will design and build new interactive technology, including ideation, tests, and evaluation together with students of ArtEZ and children of primary schools
Partner of Soundlab Enschede is the already established Soundlab in Amsterdam (see: https://www.muziekgebouw.nl/pQoxDSw/jeugd/soundlab).
Contact
Benno Spieker (ArtEZ Conservatory and PhD-student at UT) for more info: b.p.a.spieker@utwente.nl
- Digital solutions for livestock management @ Nedap - Groenlo, NL
Nedap Livestock Management develops digital solutions for, among others, dairy farming and pig farming. They are open to internships and thesis projects; some examples of possible project topics can be found below. If you are interested, feel free to contact Robin Aly for more information.
Nedap contact: Robin Aly, robin.aly@nedap.com
HMI contact: Dennis Reidsma, d.reidsma@utwente.nl
Virtual Fencing
The goal of this project is to define a product concept for virtual fencing. Cows on pasture need enough feed to graze. Farmers face the challenge of managing the available land to ensure the herd constantly has sufficient feed available. Traditional approaches to this problem move physical fences to direct a herd to new pastures. However, this process is labor-intensive and slow in reacting to changing environments. Virtual fencing [1-4] has recently been proposed as a means of interacting with cows based on their location, using a reward and punishment system to give them incentives to move to more suitable pastures. This project will investigate solutions for a virtual fencing product.
The project will start with an ideation process with farmers that will define potentially feasible cow-locating solutions, ways to define a virtual fence, and ways to interact with cows to steer them. In a second step, at least one of these ideas will be extended to a high-fidelity prototype and evaluated for its performance.
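The toy sketch below illustrates the reward-and-punishment loop in its barest form, assuming the shapely geometry package; the escalation rules, distances, and coordinates are invented.

from shapely.geometry import Point, Polygon

pasture = Polygon([(0, 0), (0, 100), (100, 100), (100, 0)])  # virtual fence (metres)

def collar_response(x, y, warn_m=5.0):
    """Escalate from nothing to an audio cue to a pulse as a fix nears/crosses the fence."""
    pos = Point(x, y)
    if not pasture.contains(pos):
        return "pulse"                          # crossed the virtual fence
    if pasture.exterior.distance(pos) < warn_m:
        return "audio_cue"                      # approaching the boundary
    return "none"

print(collar_response(50, 50), collar_response(98, 50), collar_response(105, 50))
# -> none audio_cue pulse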
[1] https://www.wur.nl/en/article/Virtual-fencing-grazing-without-visible-borders.htm
[2] https://www.smartcompany.com.au/startupsmart/news/halter-29-million-agtech-dairy-cows/
[3] Anderson, D. M. (2007). Virtual fencing: past, present and future. The Rangeland Journal, 29(1), 65-78.
[4] Campbell, D. L., Lea, J. M., Haynes, S. J., Farrer, W. J., Leigh-Lancaster, C. J., & Lee, C. (2018). Virtual fencing of cattle using an automated collar in a feed attractant trial. Applied Animal Behaviour Science, 200, 71-77.
Locating Pigs
The goal of this project is to provide farmers with means to locate their pigs. Nowadays, professional pig farms can have thousands of pigs. In group housing concepts, farmers are often faced with the task of locating individual pigs within these groups, for example to diagnose or treat illnesses. Locating pigs is currently a cumbersome and time-consuming task, as groups can be large and an individual pig can currently only be identified up close. This project requires end-to-end development of a product concept following the design thinking process, including ideation with stakeholders and the creation of a prototype.
Potential solutions for locating pigs include centralized positioning systems, collaborative positioning systems, systems that detect crossings between demarcated areas, and systems sensing the coarse area where a pig resides. These solutions are constrained by the investment they ask from the farmer and differ in how well they satisfy the farmer's need to locate pigs. Therefore, the project will start with an ideation session with farmers and other stakeholders to define a suitable solution space. The ideation will be supported by low-fidelity prototypes that facilitate the discussion about the concepts. An important output of the ideation is also the identification of key performance measures that can be used to judge the quality of a system.
Based on the output of the ideation step, at least one high-fidelity prototype will have to be developed and evaluated.
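For the centralized-positioning family of candidate solutions, here is a hedged sketch of the core estimation step, assuming range measurements to fixed anchors (e.g. UWB ranging as in [1]); the barn geometry and noise level are invented.

import numpy as np

anchors = np.array([[0, 0], [30, 0], [0, 20], [30, 20]], dtype=float)  # barn corners (m)
true_pos = np.array([12.0, 7.0])                                       # unknown tag position
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 4)

def trilaterate(anchors, ranges):
    # Linearize |x - a_i|^2 = r_i^2 against anchor 0 and solve A x = b by least squares.
    A = 2 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

print(trilaterate(anchors, ranges))  # should be close to [12. 7.]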
[1] Zhuang, S., Maselyne, J., Van Nuffel, A., Vangeyte, J., & Sonck, B. (2020). Tracking group housed sows with an ultra-wideband indoor positioning system: A feasibility study. Biosystems Engineering, 200, 176-187.
[2] Koyuncu, H., & Yang, S. H. (2010). A survey of indoor positioning and object locating systems. IJCSNS International Journal of Computer Science and Network Security, 10(5), 121-128.
[3] Fukuju, Y., Minami, M., Morikawa, H., & Aoyama, T. (2003, May). DOLPHIN: An autonomous indoor positioning system in ubiquitous computing environment. In Proceedings IEEE Workshop on Software Technologies for Future Embedded Systems. WSTFES 2003 (pp. 53-56). IEEE.
[4] Mainetti, L., Patrono, L., & Sergi, I. (2014, September). A survey on indoor positioning systems. In 2014 22nd International Conference on Software, Telecommunications and Computer Networks (SoftCOM) (pp. 111-120). IEEE.
- Adding Speech to Multi-Agent Dialogues with a Council of Coaches @ HMI - Enschede, NL in collaboration with Roessingh Research & Development (RRD) - Enschede, NL
Context
In the EU project Council of Coaches (COUCH) we are developing a team of virtual coaches that can help older adults achieve their health goals. Each coach offers insight and advice based on their expertise. For example, the activity coach may talk about the importance of physical exercise, while the social coach may ask the user about their friends and family. Our system enables fluent multi-party interaction between multiple coaches and our users; in addition to talking directly with the user, the coaches may also have dialogues amongst themselves. Integrating full spoken interaction with the platform developed in COUCH will make a major leap possible for our embodied agent projects.
More information: https://council-of-coaches.eu/project/overview/
Challenge
Currently in COUCH the user interacts with the coaches by selecting one of several predefined multiple-choice responses on a tablet or computer interface. Although this is a reliable way to capture input from the user, it may not be ideal for our target user group of older adults. Perhaps spoken dialogue can offer a better user experience?
In the past, researchers found it quite difficult to sustain dialogues that relied on automatic speech recognition (ASR) (see e.g. Bickmore & Picard, 2005 [1]). However, recent commercial systems like Apple’s Siri and Amazon’s Alexa offer considerable improvements in recognising users’ speech. Such state-of-the-art systems might now be sufficiently reliable for supporting high-quality spoken dialogues between our coaches and the user.
Assignment
In your project you will adapt the COUCH system to support spoken interactions. In addition to incorporating ASR, you will investigate smart ways to organise the dialogue to facilitate adequate recognition in noisy and uncertain settings while keeping the conversation going. Finally, you will also evaluate the user experience and the quality of dialogue progress in various settings, and thereby the suitability of state-of-the-art speech recognition for live, open-setting spoken conversation.
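One simple mitigation idea, sketched below purely as an illustration (not the COUCH implementation), is to map a noisy ASR hypothesis onto the dialogue's current response options and fall back to a clarification question when confidence is low; the options and threshold are invented.

import difflib

def match_response(asr_text, options, threshold=0.55):
    """Return the predefined option closest to the ASR hypothesis, or None."""
    scored = [(difflib.SequenceMatcher(None, asr_text.lower(), o.lower()).ratio(), o)
              for o in options]
    score, best = max(scored)
    return best if score >= threshold else None  # None -> coach asks to rephrase

options = ["Tell me more about exercise",
           "I would rather talk about my diet",
           "Goodbye"]
print(match_response("uh tell me more about exercising", options))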
You will carry out the work in a collaboration between Roessingh Research and Development (http://www.rrd.nl/) and researchers at the Human Media Interaction group of the University of Twente.
Contact
Dennis Reidsma (d.reidsma@utwente.nl)
[1] Timothy W. Bickmore and Rosalind W. Picard. 2005. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact. 12, 2 (June 2005), 293–327. DOI: https://doi.org/10.1145/1067860.1067867
- Large-scale data mining & NLP @ OCLC - Leiden, NL
OCLC is a global library cooperative that provides shared technology services, original research and community programs for its membership and the library community at large. Collectively with member libraries, OCLC maintains WorldCat, the world’s most comprehensive database of information about library collections. WorldCat now hosts more than 460 million bibliographic records in 483 languages, aggregated from 18,000 libraries in 123 countries.
As WorldCat continues to grow, OCLC is actively exploring data science, advanced machine learning, linked data and visualisation technologies to improve data quality, transform bibliographic descriptions into actionable knowledge, provide more functionality for professional cataloguers, and develop more services for end users of the libraries.
OCLC is constantly looking for students who are enthusiastic to advance AI technologies for library and other cultural heritage data. Examples of student assignments are:
- Fast and scalable semantic embedding for Information Retrieval (a small sketch follows this list)
- eXtreme Multi-label Text Classification (XMTC) for automatic subject prediction
- Automatic image captioning for Cultural Heritage collections
- Entity extraction and disambiguation
- Entity matching across different media (e.g. books, articles, cultural heritage objects, etc.) or across languages
- Hierarchical clustering of bibliographic records
- Constructing knowledge graphs around books, authors, subjects, publishers, etc.
- Interactive visualisation of library data on geographic maps and/or along a time dimension
- Concept drift (i.e., how meaning changes over time) and its effects on Information Retrieval
- Scientometrics-related topics based on co-authoring networks and/or citation networks
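As a flavour of the semantic-embedding topic, here is a minimal sketch assuming the sentence-transformers package; the titles and query are invented, and making this fast and scalable over hundreds of millions of records is precisely the assignment.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
titles = ["A survey of indoor positioning systems",
          "Emotion detection in Dutch social media",
          "Knowledge graphs for cultural heritage collections"]
query = "linking heritage objects in a knowledge graph"

hits = util.semantic_search(model.encode(query, convert_to_tensor=True),
                            model.encode(titles, convert_to_tensor=True), top_k=2)
print([(titles[h["corpus_id"]], round(h["score"], 2)) for h in hits[0]])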
More details are available on request.
Contact: Shenghui Wang
Email: shenghui.wang@utwente.nl
- Robotics and mechatronics @ Heemskerk Innovative Technology (Delft)
Company Information:
Heemskerk Innovative Technology provides advice and support to innovative high-tech projects in the field of robotics and mechatronics. Our mission: convert basic research into innovative business concepts and real-world applications by creating solutions for performing actions in places people themselves cannot reach, making the world smaller, better integrated, and more intuitive.
Focus areas:
Haptics
Dexterous manipulation
Master-slave control
Dynamic contact
Augmented Reality
https://heemskerk-innovative.nl
Example assignments (to be carried out in the first half of 2021):
Current assignments focus on user-robot interaction, object detection and autonomous object manipulation in real-life settings, and human detection and tracking for navigation in human-populated environments, as part of developing the ROSE healthcare robot. A background in C++/Python and ROS is an advantage for students working on these assignments.
Contact:
Mariët Theune (EEMCS) <m.theune@utwente.nl>
- Tasty Bits 'n' Bytes: Food Technology @ het Lansink - Enschede, NL
The current popularity of ICTs that offer augmented or virtual reality experiences, such as Oculus Rift, Google Glass, and Microsoft Hololens, suggests that these technologies will become increasingly commonplace in our daily lives. With this, the question arises how these mixed reality technologies can benefit us in our day-to-day activities. One such activity that could take advantage of mixed reality technologies is the consumption of food and beverages. Since the perception of food is highly multisensory, governed not only by taste and smell but to a strong degree also by our visual, auditory and tactile senses, mixed reality technologies could be used to enhance our dining experiences. In the Tasty Bits and Bytes project we will explore the use of mixed reality technology to digitally (using visual, auditory, olfactory, and tactile stimuli) enhance the experience of consuming food and beverages.
The setting for these challenges and projects is a mixed reality restaurant table at Het Lansink that hosts a variety of technologies to provide a novel food and beverage experience.
Assignments that can be carried out in collaboration with Het Lansink concern, for example: actuated plates; projection mapping on the table; force feedback; and multimodal taste sensations.
Website: http://www.tastybitsandbytes.com/
Contact: Dirk Heylen, Juliet Haarman
- Addiction, Coaching and Games @ Tactus - Enschede, NL
Tactus is specialized in the care and treatment of addiction. They offer help to people who suffer from problems as a result of their addiction to alcohol, drugs, medication, gambling or eating. They help by identifying addiction problems as well as preventing and breaking patterns of addiction. They also provide information and advice to parents, teachers and other groups on how to deal with addiction.
Assignment possibilities include developing game-like support and coaching apps.
Website: https://www.tactus.nl/enschede
Contact: Randy Klaassen
- Enhancing Music Therapy with Technology @ ArtEZ - Enschede, NL
ArtEZ School of Music has a strong department in Neurologic Music Therapy, which not only trains new therapists but also engages in fundamental research towards evaluating and improving the impact of Music Therapy.
Music is a powerful tool for influencing people; not only because music is pleasant, but also because music has actual neurological effects on the motor system. That is why music therapy is also used as a rehabilitation instrument for people with various conditions. In this assignment you will work with professional music therapists on developing interactive products to enrich music therapy sessions for various purposes.
Possibilities include design, development and research assignments on systems such as home practice applications, sound spaces for embodied training, sensing to provide insights to the therapist and/or feedback to the client, etcetera.
Contact: Dennis Reidsma
- Stories and Language @ Meertens Institute - Amsterdam, NL
The Meertens Institute, established in 1926, has been a research institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 1952. They study the diversity in language and culture in the Netherlands, with a focus on contemporary research into factors that play a role in determining social identities in the Dutch society. Their main fields are:
- ethnological study of the function, meaning and coherence of cultural expressions
- structural, dialectological and sociolinguistic study of language variation within Dutch in the Netherlands, with the emphasis on grammatical and onomastic variation.
Apart from research, the institute also concerns itself with documentation and providing information to third parties in the field of Dutch language and culture. They possess a large library with numerous collections and an extensive documentation system, of which databases form a substantial part.
Assignments include text mining and classification and language technology, but also usability and interaction design.
Website of the institute: http://www.meertens.knaw.nl/cms/
Contact: Mariët Theune
- Language and Retrieval @ Elsevier - Amsterdam, NL
Elsevier is the world's biggest scientific publisher, established in 1880. Elsevier publishes over 2,500 impactful journals including Tetrahedron, Cell and The Lancet. Flagship products include ScienceDirect, Scopus and Reaxys. Increasingly, Elsevier is becoming a major scientific information provider. For specific domains, structured scientific knowledge is extracted for querying and searching from millions of Elsevier and third-party scientific publications (journals, patents and books). In this way, Elsevier is positioning itself as the leading information provider for the scientific and corporate research community.
Assignment possibilities include text mining, information retrieval, language technology, and other topics.
Contact: Mariët Theune
- Interactive Technology for Music Education @ ArtEZ - Enschede, NL
The bachelor Music in Education of ArtEZ Academy of Music in Enschede increasingly profiles itself with a focus on technology in service of music education. Students and teachers apply digital learning methods for teaching music, and they experiment with all kinds of digital instruments and music apps. Applying technology in music education goes beyond the application of these tools, however: interactive music systems have potential for supporting (pre-service) teachers in teaching music in primary education. Still, much research needs to be done.
Current questions include: What is an optimal medium for presenting direct feedback on the quality of rhythmic music making? What should this feedback look like?
HMI students are warmly invited to contribute to this research by creating applications for feedback and visualisation in rhythmic music making in primary education. Design playful, interactive musical instruments that engage children in playing rhythms together, or come up with interactive (augmented) solutions that support teachers in guiding children making music.
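As a hedged illustration of the kind of signal such feedback could be driven by (the medium and visual design remain the research question), the sketch below scores how tightly tapped onsets align with a target pulse; tempo and taps are invented.

import numpy as np

def timing_error_ms(onsets_s, bpm=100.0):
    """Signed deviation (ms) of each tap from the nearest beat of a bpm pulse."""
    beat = 60.0 / bpm
    nearest = np.round(onsets_s / beat) * beat
    return (onsets_s - nearest) * 1000.0

taps = np.array([0.02, 0.61, 1.18, 1.83])        # taps around a 100 bpm pulse
print(np.round(timing_error_ms(taps), 1))        # [ 20.  10. -20.  30.]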
You work in collaboration with one of the main teachers in the bachelor Music in Education who is doing his PhD project on this topic.
Contact: Benno Spieker, Dennis Reidsma
- Using (neuro)physiological signals @ TNO -- Soesterberg, NL
At TNO Soesterberg (department of Perceptual and Cognitive Systems) we investigate how we can exploit physiological signals such as EEG brain signals, heart rate, skin conductance, pupil size and eye gaze in order to improve (human-machine) performance and evaluation. An example of a currently running project is predicting individual head rotations from EEG in order to reduce delays in streaming images in head mounted displays. Other running projects deal with whether and how different physiological measures reflect food experience. Part of the research is done for international customers.
More examples of projects, as reflected in papers, can be found on Google Scholar.
We welcome students with skills in machine learning and signal processing, and/or students who would like to set up experiments, work with human participants, and use advanced measurement technology.
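A generic, minimal example of the signal-processing side, extracting alpha-band power from one EEG channel with Welch's method; the data is a random stand-in, and actual projects go far beyond such a single feature.

import numpy as np
from scipy.signal import welch

fs = 256.0
eeg = np.random.normal(size=int(10 * fs))   # stand-in for 10 s of one EEG channel
f, psd = welch(eeg, fs=fs, nperseg=512)
band = (f >= 8) & (f <= 12)                 # alpha band
alpha_power = psd[band].sum() * (f[1] - f[0])
print(f"alpha-band power: {alpha_power:.3e}")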
Contact: Jan van Erp <j.b.f.vanerp@utwente.nl>
- Social VR User Experiences @ TNO -- Den Haag, NL
In the TNO MediaLab (The Hague), we create innovative media technologies aimed at providing people with new and rich media experiences, which they can enjoy wherever, whenever and with whomever they want. To enable this, we mainly develop video streaming solutions, working from the capture side to the rendering side and looking at many of the aspects involved: coding, transport, synchronisation, orchestration, digital rights management, service delivery, etc. In many cases, we do this work directly for customers such as broadcasters and content distributors.
As part of this, we currently work on what we call Social VR, or VR conferencing. Virtual Reality applications excel at immersing the user in another reality. But past the wow-effect, users may feel the lack of the social interactions that would happen in real life. TNO is exploring ways to bring the experience of sharing moments of life with friends and family into the virtual world. We do this using advanced video-based solutions.
We are actively looking for students to contribute to:
- evaluating, analysing and improving the user experience of the services developed, e.g. working on user embodiment, presence, HCI-aspects, etc.
- the technical development of the platform (i.e. prototyping), e.g. working on spatial audio, 3D video, spatial orchestration, etc.
Focus of assignments can be on one aspect or the other, or a combination of both.
More info of what we do at TNO Medialab can be found here: https://tnomedialab.github.io/
Contact: Jan van Erp <j.b.f.vanerp@utwente.nl>
- AR for Movement and Health @ Holomoves - Utrecht, NL
Holomoves is a company in Utrecht that combines Hololens Augmented Reality with expertise in health and physiotherapy, to offer new interventions for rehabilitation and healthy movement in a medical setting. Students can work with them on a variety of assignments including design, user studies, and/or technology development.
More information on the company: https://holomoves.nl/
Contact person: Robby van Delden, Dennis Reidsma
- Artificial Intelligence & NLP @ Info Support - Veenendaal, NL
Info Support is a software company that makes high-end custom technology solutions for companies in the financial technology, health, energy, public transport, and agricultural technology sectors. Info Support is located in Veenendaal/Utrecht, NL with research locations in Amsterdam, Den Bosch, and Mechelen, Belgium.
Info Support has extensive experience in supervising graduation students, with assignments that not only have scientific value but also have impact for the clients of Info Support and their clients’ clients. As a university-level graduating student, you will become part of the Research Center within Info Support. This is a group of colleagues who, on top of their job as a consultant, have a strong affinity with scientific research. The Research Center facilitates and stimulates scientific research, with the objective of staying ahead in Artificial Intelligence, Software Architecture, and Software Methodologies that will most likely affect our future.
Various research assignments in Artificial Intelligence, Machine Learning and Natural Language Processing can be carried out at Info Support.
Examples of assignments include:
- finding a way to anonymize streaming data in such a way that it will not affect the utility of AI and Machine Learning models
- improving the usability of Machine Learning model explanations to make them accessible for people without statistical knowledge
- generating new scenarios for software testing, based on requirements written in a natural language and definitions of logical steps within the application
More details are available on request.
Contact: Mariët Theune