Predicting GNSS Reliability for Autonomous Navigation Using Camera and LiDAR Cues

The trajectory of our robot in the CHARISMA project. Current GNSS-based localization methods fail in some outdoor environments, and sometimes this failure is not properly reported by the sensor itself.
Problem Statement
In autonomous navigation, reliable localization is essential for the safe and efficient operation of robots and vehicles, particularly when transitioning between open outdoor areas, dense urban areas, and partially covered environments (e.g., tunnels, underpasses, parking garages). Satellite-based positioning systems (GNSS/GPS) are commonly used for outdoor localization, but their accuracy can degrade drastically in these constrained environments due to occlusions, multipath effects, and signal blockage. There is currently limited research on predicting GPS/GNSS reliability directly from onboard sensor data such as camera images and LiDAR point clouds, and no standardized benchmark exists for evaluating models that anticipate GPS degradation before failures occur. Understanding the environmental cues that indicate impending GPS failure can enable proactive sensor-fusion strategies and improve the robustness of autonomous navigation.
Tasks
- Literature Study
- Review literature on GNSS/GPS degradation, localization failure in autonomous systems, and multimodal sensor fusion approaches. Study machine learning methods for visual and LiDAR-based scene understanding, with an emphasis on environmental context that affects GNSS performance.
- Dataset Exploration and Preparation
- Explore the nuScenes dataset (https://www.nuscenes.org/nuscenes#overview), focusing on camera, LiDAR, and ego pose data. Design a labeling scheme to approximate GPS/GNSS reliability (see the sketches after this list):
- Use map metadata (e.g., tunnels, parking areas) to identify regions where GNSS is likely to degrade.
- Simulate GPS errors or signal dropouts for these regions if necessary (the GPS poses in this dataset have been post-processed to produce a smooth signal, so real degradation is not directly observable).
- Create per-frame labels: Good, Degrading, Failing.
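A possible starting point for the map-based labeling is the map expansion of the nuscenes-devkit. The following is a minimal sketch, not a prescribed pipeline: the carpark_area layer serves as a proxy for covered regions (nuScenes maps do not explicitly tag tunnels), and the map name, data root, and 30 m "Degrading" buffer are placeholder assumptions to be tuned.

```python
# Minimal sketch: per-frame GNSS-reliability labels from nuScenes map metadata.
# Assumes the nuscenes-devkit with map expansion; 'carpark_area' is used as a
# proxy for covered regions, and BUFFER_M is an arbitrary transition radius.
from nuscenes.nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap

nusc = NuScenes(version='v1.0-mini', dataroot='/data/nuscenes')
nusc_map = NuScenesMap(dataroot='/data/nuscenes', map_name='singapore-onenorth')

BUFFER_M = 30.0  # assumed width of the "Degrading" transition zone

def label_frame(sample_token: str) -> str:
    """Label one keyframe as 'Good', 'Degrading', or 'Failing'."""
    sample = nusc.get('sample', sample_token)
    lidar_sd = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
    ego = nusc.get('ego_pose', lidar_sd['ego_pose_token'])
    x, y = ego['translation'][:2]

    # Ego pose inside a covered map region -> assume GNSS fails there.
    if nusc_map.layers_on_point(x, y, layer_names=['carpark_area'])['carpark_area']:
        return 'Failing'

    # Covered region nearby -> assume GNSS is already degrading.
    patch = (x - BUFFER_M, y - BUFFER_M, x + BUFFER_M, y + BUFFER_M)
    if nusc_map.get_records_in_patch(patch, layer_names=['carpark_area'])['carpark_area']:
        return 'Degrading'
    return 'Good'

labels = {s['token']: label_frame(s['token']) for s in nusc.sample}
```

Note that each scene belongs to one of four maps, so a full pipeline would look up the correct map per scene via its log record rather than hard-coding a single map name as above.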
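Since the released poses are smoothed, degraded GPS measurements would have to be synthesized on top of them. A minimal sketch, with illustrative (uncalibrated) noise scales and dropout probability:

```python
# Minimal sketch: synthesize degraded 2-D GPS fixes from smooth ego poses.
# The noise scales and the 0.8 dropout probability are illustrative
# assumptions, not calibrated receiver characteristics.
from typing import Optional
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_gps(xy: np.ndarray, label: str) -> Optional[np.ndarray]:
    """Return a noisy GPS fix for an ego position, or None on dropout."""
    if label == 'Good':
        return xy + rng.normal(0.0, 1.0, size=2)    # ~1 m open-sky noise
    if label == 'Degrading':
        return xy + rng.normal(0.0, 10.0, size=2)   # multipath-like error
    # 'Failing': usually no fix at all, occasionally a large outlier.
    return None if rng.random() < 0.8 else xy + rng.normal(0.0, 50.0, size=2)
```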
- Feature Engineering and Sensor Processing
- Extract visual cues from front camera images (e.g., sky visibility, brightness, occlusions). Process LiDAR point clouds to detect environmental constraints (e.g., vertical structures, narrow corridors). A minimal feature sketch follows below.
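Before moving to learned features, these cues can be approximated with simple heuristics. The sketch below assumes OpenCV and NumPy, an ego-frame point cloud (x forward, y left, z up), and untuned thresholds (HSV sky range, 2 m overhead height, 15 m neighborhood radius):

```python
# Minimal sketch of hand-crafted camera and LiDAR cues; all thresholds are
# assumptions to be tuned on the data.
import cv2
import numpy as np

def camera_features(bgr: np.ndarray) -> dict:
    """Sky visibility and brightness from a front-camera BGR image."""
    top = bgr[: bgr.shape[0] // 3]  # sky, if visible, sits in the upper third
    hsv = cv2.cvtColor(top, cv2.COLOR_BGR2HSV)
    # Crude sky mask: bluish hue, moderate saturation, high value.
    sky = cv2.inRange(hsv, (90, 30, 120), (140, 255, 255))
    return {
        'sky_ratio': float(np.count_nonzero(sky)) / sky.size,
        'brightness': float(bgr.mean()),
    }

def lidar_features(points: np.ndarray) -> dict:
    """Environmental-constraint cues from an (N, 3) ego-frame point cloud."""
    near = points[np.linalg.norm(points[:, :2], axis=1) < 15.0]
    overhead = near[near[:, 2] > 2.0]  # structure above the sensor
    return {
        'overhead_ratio': len(overhead) / max(len(near), 1),
        # 90th percentile of lateral distance as a corridor-width proxy.
        'corridor_halfwidth': float(np.percentile(np.abs(near[:, 1]), 90))
                              if len(near) else 0.0,
    }
```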
- Model Development
- Implement baseline models:
- Camera-only classifier.
- LiDAR-only classifier.
- Design a multimodal fusion model combining camera and LiDAR features (a sketch follows below).
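One plausible fusion design is late fusion: encode each modality separately and classify the concatenated embeddings, so the unimodal baselines can reuse a single branch with the same classification head. The sketch below assumes PyTorch and placeholder input dimensions (512-D camera features, 64-D LiDAR features):

```python
# Minimal sketch of a late-fusion classifier for the three reliability
# classes; feature dimensions are placeholder assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, cam_dim: int = 512, lidar_dim: int = 64, n_classes: int = 3):
        super().__init__()
        self.cam_head = nn.Sequential(nn.Linear(cam_dim, 128), nn.ReLU())
        self.lidar_head = nn.Sequential(nn.Linear(lidar_dim, 128), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, cam: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        # Per-modality embeddings are concatenated before classification.
        z = torch.cat([self.cam_head(cam), self.lidar_head(lidar)], dim=-1)
        return self.classifier(z)  # logits over Good / Degrading / Failing
```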
- Evaluation and Analysis
- Evaluate models using standard metrics (accuracy, confusion matrices, F1-score); a sketch follows below.
- Analyze feature importance: which visual or LiDAR cues are most predictive of GPS reliability?
- Perform ablation studies: camera vs. LiDAR vs. fusion.
- Visualize performance over trajectories, highlighting regions with degraded GPS signals.
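The metrics themselves are standard; the sketch below uses scikit-learn and assumes y_true and y_pred are per-frame label sequences produced by a trained model:

```python
# Minimal sketch of the evaluation step; y_true / y_pred are assumed to be
# per-frame class labels.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

LABELS = ['Good', 'Degrading', 'Failing']

def evaluate(y_true, y_pred) -> dict:
    return {
        'accuracy': accuracy_score(y_true, y_pred),
        'macro_f1': f1_score(y_true, y_pred, labels=LABELS, average='macro'),
        'confusion': confusion_matrix(y_true, y_pred, labels=LABELS),
    }
```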
- Reporting
- Document methodology, experiments, results, and insights.
- Discuss limitations, potential improvements, and implications for autonomous navigation systems.
Work Distribution
- 40% Theory: Literature review, understanding GNSS/GPS degradation, sensor fusion methods.
- 40% Simulations & Modeling: Dataset preparation, feature extraction, model training, evaluation, and visualization.
- 20% Writing: Thesis report, figures, and discussion of results.
Contact
Le Viet Duc – Pervasive Systems, EEMCS, University of Twente
Maya Aghaei – SMART Mechatronics and Robotics Research Group, Saxion University of Applied Sciences