Evaluating and Improving Multimodal Fusion Robustness in Drone-Based Object Detection

Problem statement:
Autonomous drones are increasingly deployed in critical areas such as infrastructure inspection, search and rescue, and autonomous delivery. The reliability of their perception systems is essential to safe and effective operation. These systems rely on multimodal object detection, fusing data from sensors such as RGB cameras, LiDAR, and thermal cameras to build a comprehensive understanding of the environment. However, in real-world scenarios, drone sensors are susceptible to degradation from environmental factors (e.g., rain, fog, dust), hardware malfunctions, or even malicious attacks. This degradation can introduce noise, blur, or the complete loss of a sensor modality, potentially leading to catastrophic failures in object detection and, consequently, mission failure or safety hazards.
Task:
The goal of this project is to investigate, analyze, and enhance the robustness of multimodal object detection systems for drones operating under sensor degradation conditions. This involves systematically evaluating the performance of state-of-the-art fusion models when individual or multiple sensor inputs are corrupted. The core research questions to be addressed are:
· How does the performance of a multimodal object detection model degrade under different types of sensor corruption, e.g., noise, blur, or a missing modality (see the simulation sketch after this list)?
· Which modality contributes most to resilience under specific degradation types?
· Can certain fusion mechanisms compensate better for particular sensor failures?
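To make these degradation types concrete, the minimal Python sketch below shows one way they might be simulated on a multimodal input. It is illustrative only: the modality names, the 0.1 noise scale, and the corrupt_sample helper are assumptions, not prescribed choices for the project.

import numpy as np
from scipy.ndimage import gaussian_filter

def corrupt_sample(sample, corruption, severity=1.0, rng=None):
    """Apply one corruption to a single modality of a multimodal sample.

    sample: dict mapping a modality name (e.g., 'rgb', 'thermal') to a
            float array in [0, 1] with the two spatial axes first.
    corruption: (kind, modality) with kind in {'noise', 'blur', 'drop'}.
    severity: scales corruption strength; 0 leaves the input unchanged.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    kind, modality = corruption
    out = {k: v.copy() for k, v in sample.items()}
    x = out[modality]
    if kind == "noise":   # additive Gaussian noise, clipped to the valid range
        x = np.clip(x + rng.normal(0.0, 0.1 * severity, x.shape), 0.0, 1.0)
    elif kind == "blur":  # Gaussian blur over the two spatial axes only
        sigma = [2.0 * severity, 2.0 * severity] + [0] * (x.ndim - 2)
        x = gaussian_filter(x, sigma=sigma)
    elif kind == "drop":  # missing modality: replace the input with zeros
        x = np.zeros_like(x)
    out[modality] = x
    return out

# Example: blur the thermal channel of a dummy RGB + thermal sample.
sample = {"rgb": np.random.rand(256, 256, 3).astype(np.float32),
          "thermal": np.random.rand(256, 256, 1).astype(np.float32)}
degraded = corrupt_sample(sample, ("blur", "thermal"), severity=2.0)

Sweeping the severity parameter per modality then yields the degradation profiles needed to address the first two questions.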
Work:
· 20% Theory: Review state-of-the-art multimodal object detection architectures and information fusion techniques.
· 60% Simulation: Implement and evaluate models against simulated sensor degradation scenarios to test the robustness of fusion techniques (see the evaluation sketch after this list).
· 20% Writing: Document the findings and provide recommendations for designing robust drone perception systems.
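As a concrete starting point for the simulation workstream, the sketch below outlines the intended robustness sweep, reusing corrupt_sample from the earlier sketch. The detector object, its predict method, and the evaluate_map scorer are hypothetical placeholders for whatever fusion model and mAP implementation the project ultimately adopts.

def robustness_sweep(detector, dataset, evaluate_map, corruptions,
                     severities=(0.5, 1.0, 2.0)):
    """Relative mAP retained for each (corruption, severity) pair.

    dataset: iterable of (sample, ground_truth) pairs, where each sample
             is a modality dict as accepted by corrupt_sample above.
    detector.predict and evaluate_map are hypothetical placeholders.
    """
    pairs = list(dataset)
    clean_map = evaluate_map([(detector.predict(s), gt) for s, gt in pairs])
    results = {}
    for corruption in corruptions:
        for severity in severities:
            preds = [(detector.predict(corrupt_sample(s, corruption, severity)), gt)
                     for s, gt in pairs]
            # 1.0 means fully robust (no drop); values near 0 mean failure.
            results[(corruption, severity)] = evaluate_map(preds) / max(clean_map, 1e-9)
    return results

Reporting the retained fraction of clean mAP normalizes results across corruption types and severities, which makes it straightforward to compare fusion mechanisms on the third research question.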
Contact:
Adarsh Nanjaiya Latha (a.nanjaiyalatha@utwente.nl)