Learning-Based Reliability Estimation for Position Measurements in Sensor Fusion
Problem Statement
Accurate localization is a key requirement for autonomous robots, and a common way to achieve it is to fuse information from multiple sensors. Robots often rely on sources such as GNSS/GPS, wheel odometry, visual odometry, LiDAR odometry, and inertial measurement units (IMUs). Each source performs well only under certain conditions and can degrade or fail in others. Sensor fusion is therefore used to combine their strengths and reduce the impact of individual failure modes.
Sensor data in robot localization can be fused in either a loosely coupled or a tightly coupled fashion. In the tightly coupled approach, raw data from the different sensors is fed directly into a single fusion pipeline, which outputs the position estimate. In loosely coupled approaches, independent modules first convert raw sensor data into position estimates, and these estimates are then fused. The tight formulation is often more accurate, while the loose formulation is more modular and easier to extend with new sensor modalities.
For loosely coupled fusion approaches, it is not enough to receive pose updates. The fusion algorithm also needs a real-time reliability (or uncertainty) estimate for each source at every time step. In practice, such metrics are often missing, incomplete, or misleading. For example, GNSS receivers provide Horizontal and Vertical Dilution of Precision indicators (HDOP and VDOP), but these do not fully capture important error sources such as multipath. Some visual and LiDAR-based odometry methods that optimize over a time window of measurements cannot output the covariance of the estimated pose in real time, since doing so requires a time-consuming inversion of a large Hessian matrix. Wheel odometry is similarly challenging: encoder-based motion estimates can drift due to wheel slip, uneven surfaces, or actuator effects, yet many systems do not expose a quantitative confidence measure for the estimate.
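To illustrate why real-time covariance reporting is expensive for window-based odometry, the following sketch (with a synthetic, randomly generated Hessian standing in for the optimizer's actual normal equations; the window size and dimensions are illustrative, not from any specific VIO system) recovers a pose's marginal covariance as a block of the inverted information matrix:

```python
import numpy as np

# Hypothetical sliding-window back-end: 50 poses x 6 DoF each.
n_poses, dof = 50, 6
dim = n_poses * dof

# Synthetic symmetric positive-definite Hessian (information matrix),
# standing in for J^T J from the optimizer's normal equations.
rng = np.random.default_rng(0)
J = rng.standard_normal((2 * dim, dim))
H = J.T @ J + np.eye(dim)  # regularized so it is invertible

# The marginal covariance of the latest pose is the corresponding 6x6
# block of H^{-1}. The full inversion costs O(dim^3), which is what makes
# per-update covariance reporting expensive as the window grows.
cov_full = np.linalg.inv(H)
latest_pose_cov = cov_full[-dof:, -dof:]
print(latest_pose_cov.shape)  # (6, 6)
```

In practice, back-ends that do report covariance use sparsity-aware marginalization rather than a dense inverse, but the cubic scaling is the core reason many systems simply skip this step.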
These limitations motivate a more general reliability estimation approach: a method that takes a pose estimate (and optionally lightweight indicators or raw sensor data such as images or point clouds) and outputs a reliability score reflecting how much the estimate should be trusted. Such a score can be used by downstream fusion methods (like Kalman filter-based fusion) to adapt measurement weighting, reject outliers, and detect failure cases. Beyond improving fusion quality, it can also support recovery behaviors when localization becomes unreliable.
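A minimal sketch of how such a score could plug into a Kalman-style update, assuming a scalar reliability in (0, 1] that inflates the nominal measurement covariance (the function name, the scaling rule, and the toy 2D example are all illustrative choices, not part of any specific fusion library):

```python
import numpy as np

def reliability_weighted_update(x, P, z, H, R_nominal, reliability):
    """One Kalman measurement update where a learned reliability score
    in (0, 1] inflates the nominal measurement covariance: a score near
    0 effectively rejects the measurement, a score of 1 trusts it fully."""
    R = R_nominal / max(reliability, 1e-6)   # low reliability -> large R
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 2D position example: the same measurement moves the state less
# when the source reports low reliability.
x0, P0 = np.zeros(2), np.eye(2)
z, H, R = np.array([1.0, 1.0]), np.eye(2), 0.1 * np.eye(2)
x_hi, _ = reliability_weighted_update(x0, P0, z, H, R, reliability=1.0)
x_lo, _ = reliability_weighted_update(x0, P0, z, H, R, reliability=0.05)
```

With full reliability the state moves most of the way toward the measurement; with a score of 0.05 the same measurement is heavily discounted. Outlier rejection falls out of the same mechanism by thresholding the score before the update.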
At the SMART research group, we have implemented a range of localization approaches for autonomous robots operating in diverse environments. Since our work is primarily applied and carried out in close collaboration with industrial partners, we focus not only on exploring state-of-the-art methods, but also on deploying them robustly in real-world conditions. Through this process, we have encountered several practical reliability challenges in robot localization, one of which is the focus of this assignment.

Fig. 1. - Various robots used within our projects, for which we study reliable onboard localization.
One example of localization degeneracy and the expected reliability report from our previous projects is presented in Fig. 2, which shows GNSS-based localization for an autonomous legged robot intended for monitoring gas pipelines. In this example, the robot traversed a trajectory that briefly passes under a tree, and that brief obstruction of satellite signals caused inaccurate position estimates. Here, the reliability of the signal is correctly reported and visualized by the purple ellipses: when the robot is under the tree canopy and the position measurement is less accurate, the ellipse grows larger, indicating less reliable data. Despite this successful example of reliability reporting by GNSS, experience shows this is not always the case, and for some other odometry sources such a reliability measure is not available at all.

Fig. 2. - GNSS-based localization for gas leak detector robot (CHARISMA project).
Another such project was the WareDrone project, in which we aimed to develop an autonomous aerial robot for monitoring tasks in warehouses. The current assignment is part of a follow-up project, named AgainstGPS, which aims to enable GPS-denied localization for small drones operating inside warehouses. Our primary localization source is Visual-Inertial Odometry (VIO). While VIO can perform well in many indoor settings, it can also produce incorrect pose updates under difficult conditions (low texture, motion blur, rapid rotations, or lighting changes) without clearly reporting a corresponding increase in uncertainty. Developing a learned reliability metric for pose estimates is therefore a promising step toward more robust localization in warehouse environments, and it should be applicable to other types of robots as well.

Fig. 3. - Initial tests of a drone flight relying on vision-based localization in the WareDrone project.
Research Questions
The main research question is "How can we train a model that evaluates the reliability of a generic position estimate relying on raw sensor information or environment observation?" This can be broken down into the following sub-questions:
· What approaches exist in the academic literature for estimating the reliability of localization methods? What are the conventional approaches?
· What machine learning approaches and what model structures can solve the reliability estimation problem?
· Is it necessary to use deep learning models to tackle the problem?
· How can such a model be trained, and how can a training dataset for this task be obtained?
· How advantageous can the deep learning method be compared to conventional reliability estimates (for instance, covariance estimation for optimization-based VIO, or HDOP/VDOP for GNSS)?
Qualifications
We expect a master’s student with the following technical skills to participate in our project:
· Machine Learning and Deep Learning
· SLAM
· Python and C++ programming languages
· Robot Operating System (ROS)
Contact
Le Viet Duc – Pervasive Systems, EEMCS, University of Twente
Hojat Mirtajadini, SMART Mechatronics and Robotics Research Group, Saxion University of Applied Sciences