
PhD Defence Meike Nauta | Explainable AI and Interpretable Computer Vision - From Oversight to Insight

Meike Nauta is a PhD candidate in the Data Management & Biometrics department. Her supervisors are dr.ir. M. van Keulen from the Faculty of Electrical Engineering, Mathematics and Computer Science and prof.dr. C. Seifert from the University of Marburg / Hessian.AI.

The increasing availability of big data and computational power has facilitated unprecedented progress in Artificial Intelligence (AI) and Machine Learning (ML). However, complex model architectures have resulted in high-performing yet uninterpretable ‘black boxes’. This prevents users from verifying that the reasoning process aligns with expectations and intentions. This thesis posits that the sole focus on predictive performance is an unsustainable trajectory, since a model can make the right predictions for the wrong reasons. The research field of Explainable AI (XAI) addresses the black-box nature of AI by generating explanations that present (aspects of) a model's behaviour in human-understandable terms. This thesis supports the transition from oversight to insight, and shows that explainability can give users more insight into every part of the machine learning pipeline: from the training data to the prediction model and the resulting explanations.

When relying on explanations for judging a model's reasoning process, it is important that the explanations are truthful, relevant and understandable. Part I of this thesis reflects upon explanation quality and identifies 12 desirable properties, including compactness, completeness and correctness. Additionally, it provides an extensive collection of quantitative XAI evaluation methods and analyses their availability in open-source toolkits.
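
To make this concrete, the sketch below shows one common style of quantitative evaluation: a deletion check for correctness, in which the features an explanation marks as important are removed and the drop in the model's output is measured. This is a hedged illustration; the deletion_score function, the toy linear model and the attribution used here are assumptions, not code from the thesis or from any particular toolkit.

# Minimal sketch of one quantitative XAI evaluation: a deletion check for
# correctness. Features flagged as important by the explanation are removed
# (here: replaced by a baseline value); a faithful explanation should cause a
# clear drop in the model's output. `model` and the attribution are
# hypothetical placeholders.
import numpy as np

def deletion_score(model, x, importance, baseline=0.0, steps=10):
    """Average model output as the most important features are removed."""
    order = np.argsort(importance.ravel())[::-1]          # most important first
    x_flat = x.ravel().copy()
    outputs = [model(x_flat.reshape(x.shape))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_flat[order[i:i + chunk]] = baseline             # delete a chunk of features
        outputs.append(model(x_flat.reshape(x.shape)))
    return float(np.mean(outputs))                        # lower = more faithful

# Toy usage with a linear "model" and its exact feature attributions:
weights = np.array([0.5, -0.2, 0.8, 0.1])
model = lambda x: float(weights @ x.ravel())
x = np.array([1.0, 1.0, 1.0, 1.0])
attribution = weights * x                                 # ground-truth importance
print(deletion_score(model, x, attribution))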

As an alternative to common post-model explainability, which reverse-engineers an already trained prediction model, Part II of this thesis presents in-model explainability for interpretable computer vision. These image classifiers learn prototypical parts, which are used in an interpretable decision tree or scoring sheet. The models are explainable by design, since their reasoning depends on the extent to which an image patch “looks like” a learned part-prototype.
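
The following is a hedged sketch of this reasoning style, not the thesis implementation: patch embeddings produced by a backbone network are compared with learned prototype vectors, the best-matching patch per prototype yields a “looks like” score, and a simple scoring sheet turns those scores into class evidence. The function name, the cosine similarity measure and all shapes below are illustrative assumptions.

# Sketch of part-prototype reasoning (illustrative, not the thesis code):
# compare every patch embedding to each learned prototype, keep the best match
# per prototype, and combine the resulting presence scores with a non-negative
# scoring sheet into class scores.
import torch

def prototype_scores(feature_map, prototypes):
    """feature_map: (C, H, W) patch embeddings; prototypes: (P, C) learned parts.
    Returns per-prototype presence scores via max cosine similarity."""
    C, H, W = feature_map.shape
    patches = feature_map.reshape(C, H * W).T             # (H*W, C) patch vectors
    sims = torch.nn.functional.cosine_similarity(
        patches.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)  # (H*W, P)
    return sims.max(dim=0).values.clamp(min=0)            # how present is each part?

# Toy usage: 4 prototypes, 3 classes, random backbone features.
torch.manual_seed(0)
feature_map = torch.randn(64, 7, 7)                       # backbone output for one image
prototypes = torch.randn(4, 64)                           # learned part-prototypes
scoring_sheet = torch.relu(torch.randn(3, 4))             # non-negative class weights
presence = prototype_scores(feature_map, prototypes)      # (4,) "looks like" scores
class_scores = scoring_sheet @ presence                   # interpretable sum of evidence
print(presence, class_scores)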

Part III of this thesis shows that ML can also explain characteristics of a dataset. Because of a model's ability to analyse large amounts of data in little time, extracting hidden patterns can contribute to the validation and potential discovery of domain knowledge, and makes it possible to detect sources of bias and shortcut learning early on.
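
A minimal sketch of this idea, under assumptions (hypothetical metadata features and a simulated bias; this is not the method used in the thesis): a shallow, interpretable model is trained on nuisance attributes alone; if it predicts the label well above chance, the dataset likely contains a shortcut that a classifier could exploit.

# Illustrative shortcut probe: check whether nuisance metadata alone predicts
# the label. Feature names and the simulated bias are hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
brightness = rng.normal(0.5, 0.1, n)                      # hypothetical metadata
camera_id = rng.integers(0, 3, n)
label = (camera_id == 2).astype(int)                      # simulated bias: class 1 mostly from camera 2
label ^= rng.random(n) < 0.1                              # 10% label noise

X = np.column_stack([brightness, camera_id])
probe = DecisionTreeClassifier(max_depth=2, random_state=0)
acc = cross_val_score(probe, X, label, cv=5).mean()
print(f"metadata-only accuracy: {acc:.2f}")               # well above chance -> shortcut present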

In conclusion, neither the prediction model, nor the data, nor the explanation method should be treated as a black box. The way forward? AI with a human touch: developing powerful models that learn interpretable features, and using these meaningful features in a decision process that users can understand, validate and adapt. This in-model explainability, such as the part-prototype models from Part II, opens up the opportunity to ‘re-educate’ models with our desired norms, values and reasoning. Enabling human decision-makers to detect and correct undesired model behaviour will contribute towards an effective but also reliable and responsible use of AI.


The PhD thesis can be found here.