[M] Explainable Machine Learning

MASTER Assignment

Explainable Machine Learning

Type: Internal Master CS

Duration: TBD

Student: Unassigned

If you are interested, please contact:

Background:

Machine learning is becoming increasingly ubiquitous, though often invisible, in everyday life. Understandability has been described as a desired feature of such models and as one of the key challenges in data mining as early as 2000 (e.g. [4]). More recently, the need for a scientific approach to interpretable machine learning was identified [2], and the EU General Data Protection Regulation 2016/679 includes a "right to explanation" [1]. For general users, enhanced comprehension would allow them to build trust in the model (e.g. [6]) and therefore lead to higher usage and turnover rates. Moreover, model comprehension would counteract the application of biased and unfair models, especially in domains where humans are the subjects of predictions [3], and support the development of models that are robust against attacks [5].
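To make the notion of an explanation more concrete, the sketch below illustrates a local surrogate explanation in the spirit of [6]: perturb a single instance, query a black-box classifier, and fit a proximity-weighted linear model whose coefficients indicate local feature influence. The dataset, model, and hyperparameters are placeholder assumptions for illustration only and are not part of the assignment.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Illustrative setup: any black-box classifier on any tabular dataset would do.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def explain_locally(instance, predict_proba, n_samples=5000, kernel_width=None):
    """Fit a proximity-weighted linear surrogate around a single instance."""
    rng = np.random.default_rng(0)
    scale = X.std(axis=0)
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(instance.size)
    # Perturb the instance with per-feature Gaussian noise.
    perturbed = instance + rng.normal(0.0, scale, size=(n_samples, instance.size))
    # Query the black-box model for the probability of the positive class.
    target = predict_proba(perturbed)[:, 1]
    # Weight perturbed samples by proximity to the original instance (RBF kernel).
    distances = np.linalg.norm((perturbed - instance) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # The surrogate's coefficients approximate local feature influence.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit((perturbed - instance) / scale, target, sample_weight=weights)
    return surrogate.coef_

coefficients = explain_locally(X[0], black_box.predict_proba)
for i in np.argsort(np.abs(coefficients))[::-1][:5]:
    print(f"{data.feature_names[i]:<25} {coefficients[i]:+.3f}")

The printed coefficients are only valid in a neighbourhood of the explained instance; that locality is exactly what distinguishes this family of methods from global feature-importance measures.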

The assignment

Within the frame of explainable machine learning there are many interesting subtopics, and the research field is very active at the moment. Please contact us for concrete, current topics or with your own idea.

About you

You are a graduate student with an inquisitive mind (why should one take anything for granted, after all?) and with a passion for programming. Failing only means you have to try another path, and that is quite exciting. You get to work in a rapidly evolving research field with high societal impact. And with nice people, too.

References

[1] The European Commission. “Regulation (EU) 2016/679”. 2016.

[2] F. Doshi-Velez and B. Kim. “Towards A Rigorous Science of Interpretable Machine Learning”. In: ArXiv e-prints (Feb. 2017). arXiv: 1702.08608 [stat.ML].

[3] Julia Dressel and Hany Farid. “The accuracy, fairness, and limits of predicting recidivism”. In: Science Advances 4.1 (2018), eaao5580.

[4] Ron Kohavi. Data Mining and Visualization. Invited talk at the National Academy of Engineering (NAE) US Frontiers of Engineering. Sept. 2000.

[5] Nicolas Papernot et al. “Practical Black-Box Attacks Against Machine Learning”. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ASIA CCS ’17. Abu Dhabi, United Arab Emirates: ACM, 2017, pp. 506–519.

[6] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “"Why Should I Trust You?": Explaining the Predictions of Any Classifier”. In: Proc. Intl. Conference on Knowledge Discovery and Data Mining. KDD. San Francisco, California, USA: ACM, 2016, pp. 1135–1144.