Machine learning (ML) approaches often operate under the assumption of independent and identically distributed (i.i.d.) random variables, and many of the impressive recent achievements can be phrased as supervised learning problems in such an i.i.d. setting. In practice, however, this assumption is often violated due to changes in the environment, different measurement devices, varying experimental conditions, or sample selection bias. This becomes particularly relevant when we move beyond a single dataset or task and instead aim to fuse the many different data sources available in the age of big data (Bareinboim & Pearl, 2016).
Causal modelling offers a principled, mathematical way of reasoning about similarities and differences between distributions arising, e.g., from the aforementioned i.i.d. violations. In particular, it takes the perspective that systems (of variables) are comprised of independent modules, or mechanisms, which are robust, or invariant, across different conditions, even if other parts of the system change (Peters et al., 2017). This view suggests independent causal mechanisms as the objects to study for learning to perform a variety of tasks under different conditions without forgetting them over time.
In my PhD studies, I explore whether and how switching from the more traditional prediction-based paradigm to learning independent causal mechanisms can be beneficial for non-i.i.d. ML tasks such as transfer, meta-, or continual learning. Another focus is causal representation learning, i.e., learning causal generative models over a small number of meaningful causal variables from high-dimensional observations. I am also interested in using counterfactual reasoning to better understand and interpret ML models (explainable AI), and in learning causal relations from heterogeneous data (causal discovery).
Bareinboim, E., & Pearl, J. (2016). Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences, 113(27), 7345-7352.
Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.
Originally from beautiful Hamburg in northern Germany, I studied Mathematics (BSc + MSci) at Imperial College London (2012-2016) and Artificial Intelligence (MSc) at UPC Barcelona (2016-2018). During an Erasmus exchange at TU Delft, I completed my second master's thesis under the supervision of Prof. Marco Loog.
In October 2018, I joined the Cambridge-Tübingen programme, where I am jointly supervised by Adrian Weller in Cambridge and Bernhard Schölkopf in Tübingen. I am fortunate to be funded by a Cambridge-Tübingen PhD fellowship with generous support from Amazon. After spending my first year in Cambridge (2018-2019), I completed a six-month research internship at Amazon in Tübingen and then officially joined the Empirical Inference Department at the Max Planck Institute for Intelligent Systems. My contract here ends in spring 2023.
Advances in Neural Information Processing Systems 32 (NeurIPS 2019), December 2019 (conference)
NeurIPS 2019 Workshop: Do the Right Thing: Machine Learning and Causal Inference for Improved Decision Making, December 2019 (workshop)
In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 89, pages 1361-1369 (Editors: Kamalika Chaudhuri and Masashi Sugiyama), April 2019 (inproceedings)