Date: 11 September 2019
Time: 12:45 - 13:30 (Lunch available from 12:35)
Room: RA 1501 (Ravelijn)
Speaker: Dirk Kroese (University of Queensland, Brisbane, Australia)
Authors: Dirk Kroese (*), Zdravko Botev, Thomas Taimre, Slava Vaisman
The recent interest in artificial intelligence, automation, algorithms, and data analytics has brought many new researchers to the area of data science and machine learning. To someone starting to learn these topics, the multitude of computational techniques and mathematical ideas might seem overwhelming, and many novices are satisfied with only understanding how to use off-the-shelf software. But what if the assumptions of the black-box recipe are violated? Can we still trust the results?
Three years ago, we embarked on a journey to write a linear and self-contained story on the rich variety of mathematical ideas from linear algebra, functional analysis, multivariate differentiation, optimisation, probability, and statistics that underpin the algorithms in data science and machine learning.
In this talk, I would like to share some of what I have learned during this journey, such as the crucial role of notation and the need to start with a unifying mathematical framework for statistical learning. Once these are in place, topics such as (un)supervised learning, kernel methods, regularisation, support vector machines, decision trees, and neural networks can be described in a natural and consistent manner.