Hands-Free Multimodal Interaction in XR Using Eye, Head, and Foot Tracking

Problem Statement
Current XR systems primarily rely on handheld controllers for interacting with virtual objects. While effective, these controllers limit usability in scenarios where users need to keep their hands free, are multitasking, or have limited motor abilities. This reduces accessibility and restricts more natural forms of interaction.
Recent XR devices provide built-in eye and head tracking, and can be extended with foot tracking. Combining these modalities offers a promising hands-free alternative, where gaze and head orientation define interaction targets, and foot gestures trigger actions. However, designing a seamless, accurate, and intuitive integration of these inputs remains a challenge, particularly in terms of synchronization, usability, and user comfort.
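As a rough illustration of the "gaze points, foot confirms" pattern described above, the sketch below shows one possible selection loop. All names (`GazeSample`, `FootEvent`, `HandsFreeSelector`, the 0.3 s dwell threshold) are hypothetical placeholders, not part of the project; a real prototype would read these streams from the headset and foot-tracker SDKs.

```python
# Minimal sketch: gaze (refined by head orientation) hovers a target,
# a foot tap commits it. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    timestamp: float          # seconds, device clock
    target_id: Optional[str]  # object hit by the gaze/head ray, if any

@dataclass
class FootEvent:
    timestamp: float
    gesture: str              # e.g. "tap", "heel_lift"

class HandsFreeSelector:
    def __init__(self, dwell_s: float = 0.3):
        self.dwell_s = dwell_s            # minimum hover time before a tap counts
        self.hovered: Optional[str] = None
        self.hover_since: float = 0.0

    def on_gaze(self, sample: GazeSample) -> None:
        # Track how long the same target has been under the gaze ray.
        if sample.target_id != self.hovered:
            self.hovered = sample.target_id
            self.hover_since = sample.timestamp

    def on_foot(self, event: FootEvent) -> Optional[str]:
        # A foot tap selects the hovered target, but only after a short
        # dwell, filtering out accidental taps while the gaze is in transit.
        if (event.gesture == "tap" and self.hovered is not None
                and event.timestamp - self.hover_since >= self.dwell_s):
            return self.hovered
        return None

# Example: the user looks at "cube_3" for 0.5 s, then taps a foot.
sel = HandsFreeSelector()
sel.on_gaze(GazeSample(timestamp=1.0, target_id="cube_3"))
print(sel.on_foot(FootEvent(timestamp=1.5, gesture="tap")))  # -> "cube_3"
```

The dwell threshold here stands in for exactly the comfort and accuracy trade-offs the project would need to study: too short and transit glances get selected, too long and the interaction feels sluggish.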
Tasks
You will be responsible for:
- Designing interaction techniques combining eye and foot input
- Implementing a prototype system for object selection and manipulation
- Ensuring synchronization and calibration between input modalities (see the alignment sketch after this list)
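On the synchronization task: eye tracking typically samples far faster than foot gestures are detected, and the two sensors run on different clocks, so a foot tap should be matched against the gaze sample from the moment the foot actually moved, not the moment the event arrived. The sketch below shows one way to do that with a timestamped gaze buffer; the buffer size, 120 Hz rate, and 40 ms clock offset are illustrative assumptions, with the real offset coming from a calibration routine.

```python
# Minimal sketch of cross-modality timestamp alignment.
# Buffer size and clock offset are assumed values, not measured ones.
from bisect import bisect_left
from collections import deque

class GazeBuffer:
    """Ring buffer of (timestamp, target_id) gaze samples on a common clock."""

    def __init__(self, maxlen: int = 600):   # ~5 s of history at 120 Hz
        self.samples = deque(maxlen=maxlen)

    def push(self, timestamp: float, target_id: str) -> None:
        self.samples.append((timestamp, target_id))

    def target_at(self, timestamp: float):
        # Return the gaze target closest in time to the (offset-corrected)
        # foot event, so a late-arriving tap still selects what the user
        # was looking at when the foot actually moved.
        if not self.samples:
            return None
        times = [t for t, _ in self.samples]
        i = bisect_left(times, timestamp)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - timestamp))
        return self.samples[best][1]

FOOT_CLOCK_OFFSET = 0.040  # placeholder: would be measured during calibration

buf = GazeBuffer()
buf.push(1.000, "sphere_1")
buf.push(1.008, "sphere_1")
buf.push(1.016, "cube_2")
tap_time_on_foot_clock = 1.050
print(buf.target_at(tap_time_on_foot_clock - FOOT_CLOCK_OFFSET))  # -> "sphere_1"
```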
Work
30% Theory, 50% Programming, 20% Writing
Contact
Gwen Qin (gwen.qin@utwente.nl)