Investigating Gaze-Contingent LLM Systems for Context-Aware Assistance in XR
Problem Statement:
In XR environments, retrieving relevant information quickly and intuitively is crucial, especially when users are multitasking or operating hands-free. Traditional interaction methods such as voice commands or manual selection can be slow, cognitively taxing, or impractical. A promising alternative is gaze-contingent interaction, where the system reacts intelligently to where the user is looking. However, current solutions offer limited adaptiveness and rely on predefined responses. This project explores how integrating eye-tracking data with Large Language Models (LLMs) can enable seamless, context-aware assistance based on gaze behavior.
Task:
You will explore the design and implementation of an AI-enhanced gaze interaction system in XR. This includes developing prototypes, experimenting with gaze data integration, and evaluating how effectively the system supports complex tasks. Whether you are interested in interaction design, machine learning, programming, or cognitive UX research, this project offers a valuable opportunity to contribute to cutting-edge XR research.
Research Scope:
1. Development of a Gaze-Contingent AI System:
- Use eye-tracking data from XR headsets.
- Integrate gaze information with an LLM (e.g., GPT) to dynamically provide contextual feedback based on user attention (a minimal pipeline sketch follows this list).
2. User Interaction Design:
- Create intuitive interaction paradigms that minimize explicit input.
- Shift from manual or voice triggers to implicit, gaze-driven activations.
3. Evaluation of User Performance:
- Conduct user studies to measure how this system affects task efficiency, cognitive load, and satisfaction.
- Compare the system with traditional input methods (e.g., voice, touch, controllers); a brief analysis sketch follows below.
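
To make the intended gaze-to-LLM pipeline concrete, here is a minimal sketch. It assumes gaze samples arrive from the headset as (timestamp, fixated-object) pairs, that a dwell of roughly 800 ms on the same object counts as implicit activation, and that query_llm stands in for whatever LLM client the prototype ends up using; none of these choices are fixed by the project.

```python
# Minimal sketch of a gaze-contingent LLM query loop (illustrative assumptions only).
import time
from dataclasses import dataclass

DWELL_THRESHOLD_S = 0.8  # hypothetical dwell time that triggers assistance


@dataclass
class GazeSample:
    timestamp: float   # seconds, headset clock
    object_id: str     # identifier of the fixated scene object, or "" if none


class DwellTrigger:
    """Fires once when the user has fixated the same object long enough."""

    def __init__(self, threshold_s: float = DWELL_THRESHOLD_S):
        self.threshold_s = threshold_s
        self._current_object = ""
        self._dwell_start = 0.0
        self._fired = False

    def update(self, sample: GazeSample) -> str | None:
        """Return the object id when a dwell completes, else None."""
        if sample.object_id != self._current_object:
            # Gaze moved to a new object: restart the dwell timer.
            self._current_object = sample.object_id
            self._dwell_start = sample.timestamp
            self._fired = False
            return None
        if (not self._fired and sample.object_id
                and sample.timestamp - self._dwell_start >= self.threshold_s):
            self._fired = True
            return sample.object_id
        return None


def build_prompt(object_id: str, task_context: str) -> str:
    """Turn the fixated object plus task context into an LLM prompt."""
    return (f"The user is performing the task: {task_context}. "
            f"They have been looking at '{object_id}' for a while. "
            f"Give one short, relevant hint about this object.")


def query_llm(prompt: str) -> str:
    # Placeholder: swap in the actual LLM client (e.g., an OpenAI or local model call).
    return f"[LLM response to: {prompt}]"


if __name__ == "__main__":
    trigger = DwellTrigger()
    # Simulated gaze stream: the user settles on a hypothetical 'pressure_valve' object.
    stream = [GazeSample(time.time() + 0.1 * i, "pressure_valve") for i in range(12)]
    for sample in stream:
        fixated = trigger.update(sample)
        if fixated:
            print(query_llm(build_prompt(fixated, "machine maintenance")))
```

In an actual prototype, the gaze stream would come from the headset's eye-tracking SDK and the prompt would carry richer scene and task context than a single object identifier.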
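
For the evaluation, one possible analysis path is a within-subjects comparison of logged task metrics across input conditions. The sketch below uses synthetic stand-in numbers (not study data) purely to show the mechanics of a paired comparison; the actual measures, conditions, and statistical tests are open design decisions.

```python
# Sketch of a paired comparison between two input conditions using synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in values only; real values would come from logged task
# completion times per participant and condition in the user study.
gaze_times = rng.normal(40, 4, size=12)    # gaze + LLM condition, seconds
voice_times = rng.normal(47, 4, size=12)   # baseline voice condition, seconds

t_stat, p_value = stats.ttest_rel(gaze_times, voice_times)
print(f"paired t({len(gaze_times) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```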
Work:
- Theory: 20%
- Programming: 60%
- Writing: 20%
Contact:
Gwen Qin (gwen.qin@utwente.nl)