If you as a Bachelor's or Master's student are looking for a final thesis assignment, capita selecta, research topic, or internship, you can choose from a large number of internal (at HMI) or external (at a company, research institute, or user organisation) assignments. You can also choose to create one yourself. Finding a suitable location or a suitable topic often starts here: below you will find a number of topics you can work on.
Note that in preparation for a final MSc assignment you must carry out a Research Topics project, resulting in a final project proposal. For Research Topics, you need to hand in this form to the Bureau of Educational Affairs (BOZ).
In daily life, we use our sense of touch to interact with the world and everything in it. Yet, in Human-Computer Interaction, the sense of touch is somewhat underexposed; in particular when compared with the visual and auditory modalities. To advance the use of our sense of touch in HCI, we have defined three broad themes in which several assignments (Capita Selecta, Research Topics, Graduation Projects) can be defined.
Designing haptic interfaces
Many devices use basic vibration motors to provide feedback. While such motors are easy to work with and sufficient for certain applications, advances in current manufacturing technologies (e.g. 3D printing) and in electronics provide opportunities for creating new forms of haptic feedback. Innovative forms of haptic feedback may even open up completely new application domains. The challenge for the students is twofold: (1) exploring the opportunities and limitations of (combinations of) materials, textures, and (self-made) actuators, and (2) coming up with potential use cases.
Multimodal perception of touch
The experience of haptic feedback may not only be governed by what is sensed through the skin, but may also be influenced by other modalities; in particular by the visual modality. VR and AR technologies are prime candidates for studying touch perception, and haptic feedback is even considered ‘the holy grail’ for VR. Questions surrounding for instance body ownership in VR, or visuo-haptic illusions in VR (e.g. elongated arms, a third arm) can be interesting starting points for developing valuable multimodal experiences, and for studying the multimodal perception of touch.
Touch as a social cue
Research in psychology has shown that social touch (i.e. being touched by another person) can profoundly influence both the toucher and the recipient of a touch (e.g. decreasing stress, motivating, or showing affect). Current technologies for remote communication could potentially be enriched by adding haptic technology that allows for social touch interactions to take place over a distance. In addition, with social robots becoming more commonplace in both research and everyday life, the question arises how we should engage in social touch with such social robots in a beneficial, appropriate and safe manner. Applications of social touch technology can range from applications related to training and coaching, to entertainment, and to providing care and intimacy. Potential projects in this domain could focus on the development of new forms of social touch technology (interactions), and/or on the empirical investigation of the effects such artificial social touch interactions can have on people.
Contact: Dirk Heylen
In this project we seek socially capable and technically smart students with an interest in technology and health care, to investigate how physical-digital technology may support young adults with autism (age 17-22) in developing independence in daily living. In this project we build further on insights from earlier projects such as Dynamic Balance and MyDayLight.
(see more about both projects here: http://www.jellevandijk.org/embodiedempowerment/ )
Your assignment is to engage in participatory design in order to conceptualize, prototype and evaluate a new assistive product concept, together with young adults with autism, their parents, and health professionals. You can focus on the design of concepts, the prototyping of concepts, technological work on building an adaptive, flexible platform that can be personalized by each individual user, or on developing the 'co-design' methods we use with young adults with autism, their parents, and the care professionals.
As a starting point we consider opportunities of wearables with bio-sensing in combination with ambient intelligent objects (internet-of-things e.g. interactive light, ambient audio) in the home.
The project forms part of a research collaboration with Karakter, a large youth psychiatric health organization, and various related organizations, who will provide participating families. One goal is to present a proof-of-concept of a promising assistive device; another goal is to explore the most suitable participatory design methods in this use context. Depending on your interests you can focus more on the product or on the method. The ultimate goal of the overall research project is to realize a flexible, adaptive interactive platform that can be tailored to the needs of each individual user; this master project is a first step in that direction.
What You See Is How It’s Done -- Mixed VR Streaming & Recording for On-Site Support and Training
An invitation to look over the shoulder of an expert (or: having an expert look over your shoulder) is a useful and effective method for receiving assistance or tutoring. One gets to take the expert's perspective while he or she demonstrates how to solve a problem or perform a task, with the subject matter in front of both the expert and the learner. When roles are reversed, the expert can give feedback on the learner's approach while he or she is actively engaged with the problem.
However, there are situations where this form of tutoring cannot be provided. There may not be enough experts for all learners, or the expert may not be able to be at the same location as a person requiring assistance.
One possible solution here is technology that allows one to be present at one or more remote locations at the same time: telepresence. With our system, we explore 'looking over someone's shoulder' as a design pattern for developing a new generation of such telepresence systems, specifically for remote assistance and training of "hands-on" tasks. Think of bomb defusal, archeological digging, doing repairs on an oil rig, or simply learning how to cook pancakes.
The setup we envisage, and for which a prototype is already being developed, consists of two main components. First, the “on-site kit”, consisting of a number of depth sensors and an Augmented Reality Wearable (Microsoft HoloLens). The sensors record and stream (in 3D) the environment of the user, who performs a task in that environment, to a remote viewer (or someone who reviews the recordings at a later time). The remote viewer uses the “remote kit”, consisting only of a common VR system such as the HTC Vive. With it, the recorded or streamed environment can be freely navigated. There is a voice connection between the two and there is the capability to perform (live) annotation in the environment.
With such a system, we can record and stream an expert performing a task using the on-site kit, with multiple learners looking over his or her shoulder at the same time using the remote kit. Conversely, an expert could use the remote kit to look over the shoulder of a novice who needs assistance and has an on-site kit (think of the bomb-defusal scenario).
There are several research challenges for students to work on within this context. On the one hand, there are computer vision & graphics oriented challenges to improve the performance of the streamed data and the visual quality of reconstructed scenes. On the other hand, there are questions on the user interface and experience of such a system, as well as on the evaluation given a scenario.
If you’re interested, get in contact with Dennis Reidsma <firstname.lastname@example.org>
In the research project coBOTnity, a collection of affordable robots (called surface-bots) was developed for use in collaborative creative storytelling. Surface-bots are moving tablets embodying a virtual character. Using a moving tablet allows us to show a digital representation of the character's facial expressions and intentions on screen, while also allowing it to move around in a physical play area.
The surface-bots offer diverse student assignment opportunities in the form of Capita Selecta, HMI project, BSc Research or Design project, or MSc thesis research. These assignments can deal with technology development, with empirical studies evaluating the effectiveness of an existing component, or with a balance of both types of work (technology development + evaluation).
As a sample of what could be done in these assignments (but not limited to this), students could be interested in: developing new AI to make the surface-bot more intelligent and responsive in the interactive space; studying interactive storytelling with surface-bots; developing mechanisms to orchestrate multiple surface-bots as a means of expression (e.g. to tell a story); evaluating strategies to make the use of surface-bots more effective; or developing and evaluating an application to support users' creativity and learning.
You can find more information about the coBOTnity project at: https://www.utwente.nl/ewi/hmi/cobotnity/
Contact: Mariët Theune (email@example.com)
Modern media technology enables people to have social interactions with the technology itself. Robots are a new form of media that people can communicate with as independent entities. Although robots are becoming naturalized in social roles involving companionship, customer service and education, little is known about the extent to which people can feel interpersonal closeness with robots and how social norms around close personal acts apply to robots. What behaviors do people feel comfortable engaging in with robots that have different social roles, such as companion robot, customer service robot and teacher robot? Will robots that people can touch, talk to, lead and follow result in social acceptance of behaviors that express interpersonal closeness between a person and a robot? Are such behaviors intrinsically rewarding when done with a responsive robot?
Contact: Jamy Li
Using a robot to teach autistic children social skills can make learning fun. Robots are, after all, novel and exciting, and give the child certain bragging rights afterwards. However, there might be unique reasons why robots are particularly useful to autistic children that go beyond this superficial novelty.
Children on the autism spectrum are characterized by difficulties in social communication and interaction, and by repetitive, stereotyped behaviour. They may talk to Siri about the weather day in and day out, fascinated by Siri's knowledge and ability to answer the same question over and over again. The difficulties in social communication and interaction may mean that these children have trouble understanding another's emotions and how their own behaviour affects others. Some autistic children avoid eye contact or are unable to communicate verbally. These are just some examples of the kind of behaviour to expect; the user group is notoriously heterogeneous.
While autistic children can find social interaction very difficult, and often try to avoid it, they do interact socially with robots. One explanation for why autistic children respond well to robots is that robots can be very predictable in their behaviour, unlike humans. A robot can answer the same question over and over again, and always respond in the same manner. It doesn't have all the, sometimes frightening, complexities of interacting with another human; yet autistic children can interact with a robot much as we interact with another human being. Could it be that a robot can provide these children with a safe, more understandable, and less stressful environment in which to learn social skills? If so, how can we design a robot-based intervention through which the children can learn social skills? And how do we improve the robot's technical systems so that it can respond to the special needs of autistic children?
Assignments in this area include, but are not limited to (!):
- designing interactive games with the robot.
- interface design for therapist and/or child.
- studying why these children perceive and respond to robots differently.
- developing technical systems for the robot.
For more information, look at http://de-enigma.eu/
Contact: Bob Schadenberg
The proposed project focuses on new forms of (volleyball) sports training. Athletes perform training exercises in a "smart sports hall" that provides high-quality video display across the surface of the playing field and has unobtrusive pressure sensors embedded in the floor. A digital-physical training system offers tailored, interactive exercise activities. Exercises incorporate visual feedback from the trainer as well as feedback given by the system. They can be tailored by selecting the most fitting exercises and setting the right parameters. This allows the exercises to be adapted in real time in response to the team's behaviour and performance, and to be selected and parameterized to fit the team's level of competition and the demands of, e.g., youth sport.

To this end, expertise from the domains of embodied gaming and of instruction and pedagogy in sports training is combined. Computational models are developed for the automatic management of personalization and adaptation; these models are initially validated by repeatedly evaluating versions of the system with volleyball teams of various levels. We collect, and automatically analyse, data from the sensors to build continuous models of the behaviour of individual athletes as well as of the team. Based on this data, the trainer or the system can instantly decide to change the ongoing exercises, or provide visual feedback to the team via the floor display.

Looking further ahead, we foresee future development towards higher competition performance for teams, building on the basic principles and systems developed in this project.
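To make the idea of automatic adaptation concrete, here is a minimal sketch of one possible adaptation rule. The staircase-style update, the target success rate, and all names and thresholds are illustrative assumptions, not part of the actual smart-sports-hall system.

```python
# Hypothetical sketch: adapt an exercise difficulty parameter to the
# team's measured performance. All values are illustrative assumptions.

def adapt_difficulty(level, success_rate, target=0.7, step=0.1,
                     min_level=0.0, max_level=1.0):
    """Nudge exercise difficulty towards a target success rate.

    If the team succeeds more often than the target, the exercise is
    made harder; if less often, it is made easier. The result is
    clamped to the allowed difficulty range.
    """
    if success_rate > target:
        level += step
    elif success_rate < target:
        level -= step
    return max(min_level, min(max_level, level))

# Example: a team scoring 9 out of 10 attempts gets a harder exercise.
new_level = adapt_difficulty(0.5, success_rate=0.9)
```

In a real system such a rule would be one small component; the sensor data would first have to be aggregated into a performance estimate, and the difficulty parameter mapped onto concrete exercise settings.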
Assignments in this project can address user studies, automatic behaviour detection from sensor data, novel interactive exercise design, and related topics.
Contact person: Dees Postma, firstname.lastname@example.org
Your eating behavior affects your health; we all know it by now. It is not only the type of food you consume that is important, but certainly also the quantity, the timing of food intake throughout the day, and the speed with which a meal is eaten. The Global Burden of Disease study showed that a suboptimal diet is the second-leading risk factor for disability-adjusted life years and deaths worldwide. This burden could decrease drastically when attention is paid to a balanced diet.
However, following a balanced diet is not without challenges: choosing balanced ingredients and meal sizes is a difficult task in itself. Moreover, we live in a social society where food is often consumed together with others; where food is available on every street corner; and where food is intertwined with celebrations and events. Making healthy diet choices therefore requires more than knowledge about choosing the right ingredients; it also requires motivation to make lifestyle changes and support from your social group (family, friends, colleagues) in order to maintain these lifestyle changes.
The broadness of the healthy-eating theme is reflected in the number of assignments that can be formulated around it. Below are some examples of possible assignments, but many more can be thought of. Assignments can be formulated at both BSc and MSc level.
Gaining insight into a subject's current eating behavior is often a first step towards better health. Professionals still use conventional methods for this, such as logbooks: they ask the user to manually report on their eating behavior throughout the day. Memory and logging bias are common problems with this method; users simply forget to write down what and when they have been eating. In addition, the presence of unknown ingredients in the food, difficulties in estimating portion size, and social discomfort while logging the food affect the reliability of this method.
A different approach to lowering the logging bias is to use technology that is less tedious to use than the conventional manual logging method. Designing new interventions that engage the user in their reporting task and increase compliance might be a suitable approach. Combining different sensors or devices (placed on the body, in a room, or in cooking gear) might be one way to do this; adding a gaming element to the intervention might be another.
Other elements to consider adjusting are the context of the communication: who (what is the relationship with the sender) – says what (what is the content of the message) – in which channel (how is the message communicated) – to whom (who is the receiver of the message) – when (what is the timing of the message) – where (what is the location of the receiver) – how (what is the style of the message) – and with what effect (how does the message come across)?
Lastly, interventions can also be designed to try to influence eating behavior directly: behavior change interventions. Think of interventions and interactive technology that directly inform the user of their unhealthy diet choices (feedback on shopping list), or that indirectly influence the user (smaller portions). To increase engagement and compliance with the interventions, similar approaches can also be taken as explained above, such as adding gaming elements or changing the context of the communication.
Assignments can focus on one area in particular (technology development, behavior change interventions, etc.), or on a combination of domains, and can be tailored to your specific background and interests. Please feel free to contact us to hear about the possibilities.
Usability guidelines have been developed for agricultural robot sprayers (Adamides et al., 2017), but not for other advanced robots such as weed-handling robots, high-wire cultivation robots, or data collection robots. Designing usability guidelines for these types of systems is pertinent given the growing importance of such systems (cf. Shamshiri et al., 2018). Past usability studies on agricultural robots have focused on evaluating different types of hardware rather than on data presentation, reference data, and how the system fits into the overall farming practice, the latter being important issues when designing new technology interfaces for farmers (Monferrer and Bonyuet, 2002; Nurkka et al., 2007). Intuitive and trustworthy design is particularly important for more advanced agriculture systems. In addition, the psychological wellbeing of farmers in the Netherlands is an area of attention (Schouen, 2018). Linking the data interfaces of high-tech systems to mass media outlets, such as social media, has the potential to increase farmers' welfare and wellbeing by creating greater connectedness to peers and the larger community (Ahn and Shin, 2013).
The overall goal of this research line is to provide design principles for multifunctional data interfaces in agricultural work, which should allow the farmer to control media messages in an intuitive way and should attenuate the mixed feelings about using social media to communicate about food risks (Rutsaert et al., 2014). This may include identifying current practices and creating new interaction prototypes for agri-tech that integrate social media and robotic/wearable/ambient technology for farming communities.
Currently, this project does not have access to a participant pool of farmers. The project could therefore take several forms: a literature review of social media technologies that use peripherals and have been customized to other user groups, to gain insight into how social media technologies can be customized to a particular user group; literature and other analyses of agricultural workers, to gain insights that could be applied to the design of systems for them; or a directed design session and/or user study with non-farmers, to find out what they would be interested to learn from farmers through social media.
Contact person: Jamy Li
Adamides, G., Katsanos, C., Parmet, Y., Christou, G., Xenos, M., Hadzilacos, T., & Edan, Y. (2017). HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer. Applied Ergonomics, 62, 237-246.
Monferrer, A., & Bonyuet, D. (2002). Cooperative robot teleoperation through virtual reality interfaces. In Proceedings Sixth International Conference on Information Visualisation (pp. 243-248). IEEE.
Nurkka, P., Norros, L., & Pesonen, L. (2007). Improving usability and user acceptance of information systems in farming. In EFITA/WCCA Joint Congress in IT in Agriculture.
Rutsaert, P., Pieniak, Z., Regan, Á., McConnon, Á., Kuttschreuter, M., Lores, M., ... & Verbeke, W. (2014). Social media as a useful tool in food risk and benefit communication? A strategic orientation approach. Food Policy, 46, 84-93.
Schouen, C. J. (2018). Terugkoppeling van de bijeenkomst inzake hulpverlening aan agrarische ondernemers in crisissituaties [Feedback from the meeting on assistance to agricultural entrepreneurs in crisis situations]. Tweede Kamer der Staten-Generaal (Landbouw, Natuur en Voedselkwaliteit), Brief regering 30252-24. Available: https://www.tweedekamer.nl/kamerstukken/brieven_regering/detail?id=2018Z21383&did=2018D55086
Shamshiri, R. R., Weltzien, C., Hameed, I. A., Yule, I. J., Grift, T. E., Balasundram, S. K., et al. (2018). Research and development in agricultural robotics: A perspective of digital farming. International Journal of Agricultural and Biological Engineering, 11(4), 1-14.
Applications involving the processing and generation of human language have become increasingly better and more popular in recent years; think for example of automatic translation and summarization, or of the virtual assistants that are becoming a part of everyday life. However, dealing with the social and creative aspects of human language is still challenging. We can ask our virtual assistant to check the weather, set an alarm or play some music, but we cannot have a meaningful conversation with it about what we want to do with our life. We can feed systems with big data to automatically generate texts such as business reports, but generating an interesting and suspenseful novel is another story.
At HMI we are generally open to supervising different kinds of assignments in the area of dialogue and natural language understanding & generation, but we are specifically interested in research aimed at social and creative applications. Some possible assignment topics are given below.
Conversational agents and social chatbots. The interaction with most current virtual assistants and chatbots (or 'conversational agents') is limited to giving them commands and asking questions. What we want is to develop agents you can have an actual conversation with, and that are interesting to engage with. An important question here is: how can we keep the interaction interesting over a longer period of time? Assignments in this area can include question generation for dialogue (so the agent can show some interest in what you are telling it), story generation for dialogue (so the agent can make a relevant contribution to the current conversation topic), and user modeling via dialogue (so the agent can get to know you). The overall goal is to create (virtual) agents that show verbal social behaviours. In the case of embodied agents, such as robots or virtual characters, we are also interested in the accompanying non-verbal social behaviours.
Affective language processing or generation. Emotions are part of everyday language, but detecting emotions in a text, or having the computer produce emotional language, are still challenging tasks. Assignments in this area include sentiment analysis in texts (for Dutch in particular), and generating emotional language, for example in the context of games (emotional character dialogue, or the 'flavor text' explained below) or in the context of automatically generated soccer reports.
Creative language generation. Here we can think of generating creative language such as puns, jokes, and metaphors but also stories. It is already possible to generate reports from data (for example sports or game-play data) but such reports tend to be boring and factual. How can we give them a more narrative quality with a nice flow, atmosphere, emotions and maybe even some suspense? Instead of generating non-fiction based on real-world data, another area is generating fiction. An example is generating so-called 'flavor text' for use in games. This is text that is not essential to the main game narrative, but creates a feeling of immersion for the player, such as fictional newspaper articles and headlines or fake social media messages related to the game. Another example of fiction generation is the generation of novel-length stories. Here an important challenge is how to keep the story coherent, which is a lot more difficult for long texts than for short ones.
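To give a feel for one baseline approach to generating such fictional flavor text, here is a hedged sketch of a template-based headline generator. The templates, slot fillers, and function names are all invented for illustration; actual assignments would explore much more sophisticated (e.g. neural or grammar-based) generation.

```python
import random

# Toy template-based generator for fictional in-game newspaper
# headlines ('flavor text'). Templates and fillers are invented
# examples, not part of any existing system.

TEMPLATES = [
    "{team} stuns {rival} in last-minute {event}",
    "Local hero of {team} speaks out after {event}",
]
SLOTS = {
    "team": ["FC Twente", "The Night Owls"],
    "rival": ["United", "The Rockets"],
    "event": ["derby", "cup final"],
}

def generate_headline(rng=random):
    """Pick a template and fill each slot with a random value."""
    template = rng.choice(TEMPLATES)
    fillers = {slot: rng.choice(values) for slot, values in SLOTS.items()}
    return template.format(**fillers)

headline = generate_headline(random.Random(42))
```

Template approaches are easy to control but quickly become repetitive, which illustrates the core challenge mentioned above: keeping generated text varied, coherent, and atmospheric.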
Contact: Mariët Theune (email@example.com)
Here you find some of the organisations that are willing to host master students from HMI. Keep in mind that you are not allowed to have both an external (non-research institute) internship and an external final assignment. If you work for a company that is interested in providing internships or final assignments please contact D.K.J.Heylen[at]utwente.nl
The current popularity of ICTs that offer augmented or virtual reality experiences, such as the Oculus Rift, Google Glass, and Microsoft HoloLens, suggests that these technologies will become increasingly commonplace in our daily lives. With this, the question arises of how these mixed reality technologies will benefit us in our day-to-day activities. One such activity that could take advantage of mixed reality technologies is the consumption of food and beverages. Since the perception of food is highly multisensory, governed not only by taste and smell but also, to a strong degree, by our visual, auditory and tactile senses, mixed reality technologies could be used to enhance our dining experiences. In the Tasty Bits and Bytes project we explore the use of mixed reality technology to digitally enhance the experience of consuming food and beverages, using visual, auditory, olfactory, and tactile stimuli.
The setting for these challenges and projects is a mixed reality restaurant table at Het Lansink that hosts a variety of technologies to provide a novel food and beverage experience.
Assignments that can be carried out in collaboration with Het Lansink concern, for example: actuated plates; projection mapping on the table; force feedback; and multimodal taste sensations.
Contact: Dirk Heylen, Juliet Haarman
Tactus is specialized in the care and treatment of addiction. They offer help to people who suffer from problems as a result of their addiction to alcohol, drugs, medication, gambling or eating. They help by identifying addiction problems as well as preventing and breaking patterns of addiction. They also provide information and advice to parents, teachers and other groups on how to deal with addiction.
Assignment possibilities include developing game-like support and coaching apps.
Contact: Randy Klaassen
100%FAT creatively merges the latest technological innovations to create new applications and experiences. The result is a unique mix of sensors and actuators that transform interactivity into a sensory experience. They transform corporate, educational and/or cultural messages in eye-catching, innovative presentations and installations. They have expertise in the design and realization of interactive media concepts including augmented- and virtual reality, projection mapping, positional audio, computer vision, motion graphics, 3D art, light, 2D animation and video.
Assignments concern outdoor play, interactive museum experiences, and virtual and augmented reality.
Contact: Dennis Reidsma
YALP (a daughter company of Lappset) is a world leader in interactive outdoor play. Over 400 of their interactive systems (Sona dance arch, Sutu soccer wall, Fono DJ booth, Toro sports court, and Memo activity zone) have been placed in over 20 countries.
YALP has its own R&D department with three academically schooled designers. They have collaborated in multiple research projects with several institutes, including research on using playgrounds with the elderly, measuring movement intensity on a Sona, and testing sports equipment (TNO, 2008-2009); field research on a Memo playground in a field lab setting (TU Delft / Profit, since 2014); research on play intensity on a Toro at the Marc Lammers Plaza (Hogeschool InHolland, 2012); and research on the Fono DJ booth (Hogeschool Saxion, 2014). These projects also created opportunities for student projects, and several of the participating students ended up in the creative industry.
Assignments include topics such as the design and implementation of new game concepts, systematic evaluation of play, and exploration of smartphone enhanced outdoor play.
Contact: Dennis Reidsma, Robby van Delden
ArtEZ School of Music has a strong department in Neurologic Music Therapy, which not only trains new therapists but also engages in fundamental research on evaluating and improving the impact of music therapy.
Music is a powerful tool for influencing people, not only because music is pleasant, but also because it has actual neurological effects on the motor system. That is why music therapy is also used as a rehabilitation instrument for people with various conditions. In this assignment you will work with professional music therapists on developing interactive products to enrich music therapy sessions for various purposes.
Possibilities include design, development, and research assignments on systems such as home practice applications, sound spaces for embodied training, sensing to provide insights to the therapist and/or feedback to the client, etcetera.
Contact: Dennis Reidsma
The Meertens Institute, established in 1926, has been a research institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 1952. They study the diversity in language and culture in the Netherlands, with a focus on contemporary research into factors that play a role in determining social identities in the Dutch society. Their main fields are:
- ethnological study of the function, meaning and coherence of cultural expressions
- structural, dialectological and sociolinguistic study of language variation within Dutch in the Netherlands, with the emphasis on grammatical and onomastic variation.
Apart from research, the institute also concerns itself with documentation and providing information to third parties in the field of Dutch language and culture. It possesses a large library with numerous collections and a substantial documentation system, of which databases form a substantial part.
Assignments include text mining and classification and language technology, but also usability and interaction design.
Website of the institute: http://www.meertens.knaw.nl/cms/
Contact: Mariët Theune
Elsevier is the world's biggest scientific publisher, established in 1880. Elsevier publishes over 2,500 impactful journals including Tetrahedron, Cell and The Lancet. Flagship products include ScienceDirect, Scopus and Reaxys. Increasingly, Elsevier is becoming a major scientific information provider. For specific domains, structured scientific knowledge is extracted for querying and searching from millions of Elsevier and third-party scientific publications (journals, patents and books). In this way, Elsevier is positioning itself as the leading information provider for the scientific and corporate research community.
Assignment possibilities include text mining, information retrieval, language technology, and other topics.
Contact: Mariët Theune
The bachelor Music in Education of ArtEZ Academy of Music in Enschede increasingly profiles itself with a focus on technology in the service of music education. Students and teachers apply digital learning methods for teaching music, and they experiment with all kinds of digital instruments and music apps. Applying technology in music education goes beyond the use of these tools, however: interactive music systems have potential in supporting (pre-service) teachers in teaching music in primary education. Still, much research needs to be done.
Current questions include: What is an optimal medium for presenting direct feedback on the quality of rhythmic music making? What should this feedback look like?
HMI students are warmly invited to contribute to this research by creating applications concerning feedback and visualisations for rhythmic music making in primary education. Design playful, interactive musical instruments to engage children to play rhythms together. Or come up with interactive (augmented) solutions that support teachers in guiding children making music.
You work in collaboration with one of the main teachers in the bachelor Music in Education who is doing his PhD project on this topic.
Contact: Benno Spieker, Dennis Reidsma
At TNO Soesterberg (department of Perceptual and Cognitive Systems) we investigate how we can exploit physiological signals such as EEG brain signals, heart rate, skin conductance, pupil size and eye gaze in order to improve (human-machine) performance and evaluation. An example of a currently running project is predicting individual head rotations from EEG in order to reduce delays in streaming images in head mounted displays. Other running projects deal with whether and how different physiological measures reflect food experience. Part of the research is done for international customers.
More examples of projects as reflected in papers are on Google Scholar
We welcome students with skills in machine learning and signal processing, and/or students who would like to set up experiments, work with human participants, and use advanced measurement technology.
Contact: Jan van Erp <firstname.lastname@example.org>
In the TNO MediaLab (The Hague), we create innovative media technologies aimed at providing people with new and rich media experiences, which they can enjoy wherever, whenever and with whomever they want. To enable this, we mainly develop video streaming solutions, working from the capture side to the rendering side, looking at many of the aspects involved: coding, transport, synchronisation, orchestration, digital rights management, service delivery, etc. In many cases, we do this work directly for customers such as broadcasters and content distributors.
As part of this, we currently work on what we call Social VR, or VR conferencing. Virtual Reality applications excel in immersing the user in another reality. But past the wow-effect, users may feel the lack of the social interactions that would happen in real life. TNO is exploring ways to bring into the virtual world the experience of sharing moments of life with friends and family. We do this using advanced video-based solutions.
We are actively looking for students to contribute to:
- evaluating, analysing and improving the user experience of the services developed, e.g. working on user embodiment, presence, HCI-aspects, etc.
- the technical development of the platform (i.e. prototyping), e.g. working on spatial audio, 3D video, spatial orchestration, etc.
Focus of assignments can be on one aspect or the other, or a combination of both.
More info on what we do at the TNO MediaLab can be found here: https://tnomedialab.github.io/
Contact: Jan van Erp <email@example.com>
Philips Eindhoven deals with the subject of “Connected Care”. Connected care is about allowing medical professionals to maintain an effective and efficient connection to their patients. Providing clear information about the patient is key in this process.
At this R&D group of Philips there is a keen interest in applying VR or AR in such a health care setting. In the longer run this might be combined with several of Philips' existing and future appliances. As part of the world-wide research and development department of Philips, the group is interested in exploring the possibilities of AR & VR for settings such as physical rehabilitation and MRI & CT visualisation.
Your assignment is to research and eventually design a stimulating or informative VR application, starting of course by familiarising yourself with the current state of the art and then specifying a more clearly framed project. It has to include a well-designed experiment to investigate fitting research questions.
In addition to conducting research, you will be expected to design a virtual environment with Unity. Therefore, prior knowledge and skills in Unity and/or modelling are essential. Help can be provided by the host institute and tutors for the VR-related specifics and (serious) game design mechanics.
Contact: Robby van Delden <firstname.lastname@example.org>
Because of the ever-increasing number of documents that information systems deal with, automatic subject indexing, i.e., identifying and describing the subject(s) of documents to increase their findability, is one of the most desirable features for many such systems. Subject index terms are normally taken from knowledge organization systems (e.g., thesauri, subject heading systems) and classification systems (e.g., the Dewey Decimal Classification), which easily contain thousands or tens of thousands of terms or codes, making this a massively multi-class classification problem. Automatically assigning the correct labels to a text is therefore very challenging.
The goal of this project is to explore deep learning methods to train a classifier which can assign a document a small subset of relevant subject labels from tens of thousands of target labels. Different from traditional binary or multi-class classification problems, this problem of extreme multi-label text classification does not assume that the target labels are independent or mutually exclusive. The training process suffers from data sparsity, i.e., there is a long tail of labels with only a small number of positive training instances. Scalability is another challenge.
The dataset used in this project is a subset of WorldCat which contains the metadata (title, authors, abstract, etc.) of 27 million Medline articles, mostly indexed with the Medical Subject Headings (MeSH), which contains over 90,000 entry terms. The student is expected to apply or adapt state-of-the-art methods to automatically assign MeSH terms to Medline articles, as well as design evaluation experiments to measure the performance of different methods.
Reference: Liu, Jingzhou, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. "Deep Learning for Extreme Multi-label Text Classification." SIGIR (2017).
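To make the multi-label setting concrete: unlike multi-class classification, each document can receive several labels at once, which is usually modelled with one independent sigmoid output per label rather than a single softmax. The sketch below is a minimal one-vs-rest logistic-regression baseline over a toy bag-of-words matrix; the data, dimensions, and function names are invented for illustration and have nothing to do with the actual WorldCat/MeSH dataset or the deep learning methods of the paper above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_multilabel(X, Y, lr=0.5, epochs=300):
    """One weight vector per label; because each label has its own
    sigmoid, a document can score high on several labels at once."""
    n_feats, n_labels = X.shape[1], Y.shape[1]
    W = np.zeros((n_feats, n_labels))
    b = np.zeros(n_labels)
    for _ in range(epochs):
        P = sigmoid(X @ W + b)            # (docs, labels) probabilities
        grad = X.T @ (P - Y) / len(X)     # binary cross-entropy gradient
        W -= lr * grad
        b -= lr * (P - Y).mean(axis=0)
    return W, b

def predict_top_k(X, W, b, k=2):
    """Return the indices of the k highest-scoring labels per document."""
    scores = sigmoid(X @ W + b)
    return np.argsort(-scores, axis=1)[:, :k]

# Toy bag-of-words: 4 documents over 4 terms, 3 subject labels.
X = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 1, 0, 0],   # mentions the terms of labels 0 AND 1
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],      # two labels on one document
              [0, 0, 1]], dtype=float)

W, b = train_multilabel(X, Y)
top2 = predict_top_k(X, W, b, k=2)
```

In the real task the label matrix has tens of thousands of columns and is extremely sparse, which is exactly why the project calls for deep learning methods and careful evaluation rather than a dense baseline like this one.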
Contact: Gwenn Englebienne
Vierstroom is a care organisation offering care at people's homes (to help people live at home, independently, for a longer time) and in a care home. As part of that, Vierstroom also wants to explore how novel technology such as social robots, but also other kinds of technology, can improve the services they offer to their clients.
As part of this they are looking for creative and independent thesis students who can help them explore novel interventions with social robots for elderly people, for example in the setting of a "buurthuis" (community centre). For this they have at least an iPal robot available, and possibly other robot platforms. Assignments will include the design, implementation, and evaluation of the intervention.
Contact person: Dennis Reidsma
Holomoves is a company in Utrecht that combines Hololens Augmented Reality with expertise in health and physiotherapy, to offer new interventions for rehabilitation and healthy movement in a medical setting. Students can work with them on a variety of assignments including design, user studies, and/or technology development.
More information on the company: https://holomoves.nl/
Contact person: Robby van Delden, Dennis Reidsma
Info Support is a software company that makes high-end custom technology solutions for companies in the financial technology, health, energy, public transport, and agricultural technology sectors. Info Support is located in Veenendaal/Utrecht, NL with research locations in Amsterdam, Den Bosch, and Mechelen, Belgium.
Info Support has extensive experience in supervising graduation students, with assignments that not only have added scientific value, but also impact the clients of Info Support and their clients' clients. As a university-level graduating student, you will become part of the Research Center within Info Support. This is a group of colleagues who, on top of their job as a consultant, have a strong affinity with scientific research. The Research Center facilitates and stimulates scientific research, with the objective of staying ahead in Artificial Intelligence, Software Architecture, and Software Methodologies that will most likely affect our future.
Various research assignments in Artificial Intelligence, Machine Learning and Natural Language Processing can be carried out at Info Support.
Examples of assignments include:
- finding a way to anonymize streaming data in such a way that it will not affect the utility of AI and Machine Learning models
- improving the usability of Machine Learning model explanations to make them accessible for people without statistical knowledge
- generating new scenarios for software testing, based on requirements written in a natural language and definitions of logical steps within the application
More details are available on request.
Contact: Mariët Theune
New technology makes a difference in medicine. Not just by transforming the education of professionals and by inventing new ways to treat patients, but also by changing the communication between doctor and patient. Patients know more than ever thanks to "Doctor Google", and the way doctors communicate is already changing to account for such developments. At the same time, it remains challenging to explain to patients the intricacies of their condition and their treatment. In this assignment, you will explore how novel technology such as 3D printing or augmented reality can be used to transform the nature of doctor-patient conversations in the context of surgery.
This assignment can be done as a thesis or an internship, and will be carried out in the context of the 3D print lab of the regional hospital MST.
Contact person: Dennis Reidsma