CONTINUAL LEARNING FOR EMBODIED AGENTS - A SIMULATION

Introduction

Embodied AI systems must continually adapt to changing tasks, environments, or sensor conditions. This makes them a natural testbed for continual learning. By focusing on simulation, students can study incremental perception or policy learning problems in a controlled setting before considering real robots.

Objectives

·  Develop a continual learning benchmark for an embodied agent in a simulated environment.

·  Study forgetting and adaptation across incremental scenarios.

·  Evaluate whether replay, regularization, or parameter-efficient updates improve robustness.
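As a starting point for the third objective, a rehearsal baseline can be built on a small replay buffer. The sketch below is a minimal, hypothetical illustration (no specific simulator or model is assumed): a fixed-capacity buffer filled by reservoir sampling, so every transition seen over the task sequence has an equal chance of being retained for rehearsal.

```python
import random


class ReservoirReplayBuffer:
    """Fixed-capacity replay buffer using reservoir sampling.

    Each item ever added has probability capacity / n_seen of being
    in the buffer, regardless of which task it came from.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0

    def add(self, item):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            # Buffer not yet full: always keep the item.
            self.buffer.append(item)
        else:
            # Overwrite a random slot with probability capacity / n_seen.
            idx = random.randrange(self.n_seen)
            if idx < self.capacity:
                self.buffer[idx] = item

    def sample(self, batch_size):
        """Draw a rehearsal batch (without replacement)."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

During continual training, rehearsal batches drawn from this buffer would be mixed with batches from the current task; comparing that against naive fine-tuning isolates the effect of replay.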

Tasks

1.     Literature Review: Continual learning for embodied AI and robotics.

2.     Environment Setup: Select a simulator and define a sequence of incremental tasks or environments.

3.     Baseline Models: Train a static and a naively updated baseline.

4.     Continual Learning Methods: Implement and compare two or three state-of-the-art continual learning (CL) approaches.

5.     Evaluation: Measure task retention (forgetting), transfer, adaptation speed, sample efficiency, and related metrics.

6.     Discussion: Identify lessons for future deployment in real embodied systems.
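For the evaluation step, task retention is commonly summarized from an accuracy matrix, where entry (i, j) is the accuracy on task j after training on task i. The helpers below are a sketch of two standard summaries (average final accuracy and average forgetting); the matrix layout and function names are illustrative, not prescribed by the project.

```python
def average_accuracy(acc):
    """Mean accuracy over all tasks after training on the final task.

    acc[i][j] = accuracy on task j measured after training on task i.
    """
    num_tasks = len(acc)
    return sum(acc[num_tasks - 1]) / num_tasks


def average_forgetting(acc):
    """Mean drop from each task's best earlier accuracy to its final accuracy.

    Computed over tasks 0..T-2; the last task has no later training
    stage, so it cannot have been forgotten yet.
    """
    num_tasks = len(acc)
    drops = []
    for j in range(num_tasks - 1):
        best_earlier = max(acc[i][j] for i in range(j, num_tasks - 1))
        drops.append(best_earlier - acc[num_tasks - 1][j])
    return sum(drops) / len(drops)
```

Tracking this matrix after each task also yields adaptation speed (how fast the diagonal entries rise) and transfer (off-diagonal entries before a task is trained).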

Pre-requisites

Solid Python and ML/DL skills; an interest in simulation, robotics, or embodied AI.

Work

20% Theory, 60% Programming/Simulations, 20% Writing

Contact

Ali Sabzi Khoshraftar (a.sabzikhoshraftar@utwente.nl)