I am working in the Empirical Inference group as a Ph.D. student associated with the International Max Planck Research School for Intelligent Systems (IMPRS-IS).
My research focuses on theoretical questions in machine learning, with a particular interest in deep learning. I am investigating how recent developments in probability theory can be deployed to provide a better understanding of areas of machine learning that are currently driven by a largely empirical approach.
Specifically, I use techniques from topics such as "concentration inequalities", "optimal transport", and "empirical processes" to build a better theoretical understanding of so-called "adversarial examples". Adversarial examples are specifically crafted modifications of real inputs (images, sounds, etc.) that humans find indistinguishable from the originals, but that are consistently misclassified by our programs. For example, an image of a panda will be correctly classified, but adding an imperceptible noise to it will lead the algorithm to classify it as a potato (a quick survey can be found here). Adversarial examples were discovered around 2013 and present a formidable challenge to the reliable deployment of deep learning models in the real world. Despite their importance, we still lack a good theoretical understanding of why adversarial examples emerge and why neural networks are so sensitive to them. It is therefore likely that shedding light on the mysteries of adversarial examples will lead to a much deeper understanding of neural networks. For these reasons and many others, this is an important and exciting area of research.
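The panda example above can be sketched in a few lines. The following is a minimal illustration, not taken from my work: it uses a hypothetical toy linear classifier and a gradient-sign perturbation (in the spirit of the FGSM attack), where the classifier, the rival-class choice, and the step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier (illustrative assumption): class scores are W @ x,
# and the prediction is the argmax over classes.
n_classes, dim = 3, 10
W = rng.normal(size=(n_classes, dim))

def predict(x):
    return int(np.argmax(W @ x))

# A clean input and the class the model assigns to it.
x = rng.normal(size=dim)
y = predict(x)

# Gradient-sign perturbation: push x against the margin between the
# predicted class and its strongest rival.
rival = int(np.argsort(W @ x)[-2])   # runner-up class
grad = W[y] - W[rival]               # gradient of the margin w.r.t. x
margin = float(grad @ x)             # distance of x from the boundary

# Smallest sign-step (in sup-norm) that crosses the decision boundary.
eps = margin / np.abs(grad).sum() + 1e-6
x_adv = x - eps * np.sign(grad)

print(predict(x), predict(x_adv))        # the predicted label flips
print(np.max(np.abs(x_adv - x)) <= eps)  # while the change stays small
```

Each coordinate of the input moves by at most `eps`, yet by construction the rival class now scores higher than the original prediction; in deep networks the same sign-of-gradient idea produces perturbations that are imperceptible to humans.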
Machine Learning, Deep Learning, Mathematics, Statistical Learning Theory
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.