
CONTINUAL REINFORCEMENT LEARNING FOR ROBOTIC NAVIGATION


Introduction

Conventional reinforcement learning often struggles in dynamic settings because agents are usually trained for fixed tasks and distributions. Recent surveys describe continual reinforcement learning (CRL) as a promising direction for enabling agents to adapt to new tasks while preserving prior capabilities. This is especially relevant for robotic navigation, where environments, goals, and sensory conditions evolve over time.


Objectives

· Study continual reinforcement learning for incremental navigation tasks.

· Compare standard RL retraining against CRL approaches.

· Evaluate retention, transfer, and sample efficiency.
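As an illustration of the naive sequential-retraining baseline that the objectives contrast with CRL, the sketch below trains a tabular Q-learning agent on a sequence of toy gridworld navigation tasks whose goal moves between tasks; because one Q-table is overwritten in place, later tasks can erase earlier skills. The environment, hyperparameters, and names (`GridNav`, `q_learning`) are illustrative assumptions, not part of the project description.

```python
import numpy as np

class GridNav:
    """Illustrative deterministic 4x4 gridworld (not the project's environment):
    the agent starts at (0, 0) and must reach a goal cell.
    Actions: 0=up, 1=down, 2=left, 3=right."""
    SIZE = 4
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def __init__(self, goal):
        self.goal = goal

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), self.SIZE - 1)
        c = min(max(self.pos[1] + dc, 0), self.SIZE - 1)
        self.pos = (r, c)
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

def q_learning(env, Q, rng, episodes=500, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning; Q is shared across tasks and updated in place,
    so training on a later task overwrites earlier knowledge (the source
    of catastrophic forgetting that CRL methods try to mitigate)."""
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 50:
            # Epsilon-greedy action selection.
            a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s][a])
            s, steps = s2, steps + 1
    return Q

def greedy_rollout(env, Q, max_steps=20):
    """Follow the greedy policy; return True if the goal is reached."""
    s, done, steps = env.reset(), False, 0
    while not done and steps < max_steps:
        s, _, done = env.step(int(np.argmax(Q[s])))
        steps += 1
    return done

rng = np.random.default_rng(0)
Q = {(r, c): np.zeros(4) for r in range(4) for c in range(4)}
tasks = [GridNav(goal=(3, 3)), GridNav(goal=(0, 3))]  # the goal moves between tasks
for env in tasks:
    q_learning(env, Q, rng)  # naive sequential training: no CRL mechanism
```

Evaluating `greedy_rollout` on each task after the full sequence exposes the trade-off the project studies: performance on the last task versus retention of the first.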

Tasks

1. Literature Review: RL, continual RL, and robotic navigation.

2. Simulation Setup: Create a sequence of navigation tasks with increasing variation in maps, obstacles, or goals.

3. Method Development: Implement a baseline RL agent and two or more state-of-the-art CRL methods.

4. Evaluation: Measure success rate, forgetting, adaptation speed, and data efficiency.

5. Optional Extension: Add safety constraints or knowledge-guided components.
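The quantities in the Evaluation task are often read off a task-by-task performance matrix. A minimal sketch, assuming `R[i, j]` holds the success rate on task `j` measured after training on task `i` (the matrix convention and metric definitions here are common choices in the CRL literature, not prescribed by the project):

```python
import numpy as np

def continual_metrics(R):
    """Compute simple continual-learning metrics from a performance matrix.

    R[i, j] = success rate on task j, evaluated after training on task i;
    tasks are trained in order 0..T-1.
    """
    T = R.shape[0]
    # Average final performance: mean over all tasks after the last one is trained.
    avg_performance = R[T - 1].mean()
    # Forgetting: for each earlier task, the drop from its best performance
    # at any point in the sequence to its performance at the end.
    forgetting = float(np.mean([R[:T - 1, j].max() - R[T - 1, j]
                                for j in range(T - 1)]))
    return avg_performance, forgetting

# Hypothetical 3-task run: row i = evaluated after training task i.
R = np.array([[0.9, 0.1, 0.1],
              [0.7, 0.8, 0.2],
              [0.5, 0.6, 0.9]])
avg, fgt = continual_metrics(R)
```

In this hypothetical matrix, performance on task 0 decays from 0.9 to 0.5 as later tasks are learned, so the forgetting score directly quantifies the retention objective above; adaptation speed and data efficiency would additionally require logging per-task learning curves.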

Pre-requisites

Python and reinforcement learning; simulation experience is a plus.

Work

20% Theory, 60% Programming/Simulations, 20% Writing

Contact

Ali Sabzi Khoshraftar (a.sabzikhoshraftar@utwente.nl)