I am a PhD student in the Autonomous Vision Group of Andreas Geiger and a scholar as well as student representative of the International Max-Planck Research School for Intelligent Systems (IMPRS-IS). My current research focuses on reflectance estimation, at the intersection of Computer Vision and Computer Graphics.
Reflections reveal a lot about the shape of an object and the material it consists of. This knowledge enables better scene understanding, supports the refinement of 3D reconstructions, and is required for rendering virtual scenes. I am working on inferring the Bidirectional Reflectance Distribution Function (BRDF) from RGB video input and on using this information for photo-realistic indoor scene reconstruction.
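As a small illustration of what a BRDF describes: it maps an incoming light direction and an outgoing view direction at a surface point to a reflectance value per color channel. The sketch below uses a toy analytic model (Lambertian diffuse plus a Blinn-Phong specular lobe); this is a didactic example only, not the reflectance model used in my research.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf(n, l, v, albedo, specular, shininess):
    """Toy BRDF: Lambertian diffuse term plus a Blinn-Phong specular lobe.

    n, l, v: unit surface normal, light direction, and view direction.
    Returns a per-channel reflectance value.
    """
    h = normalize(l + v)                       # half vector between light and view
    diffuse = albedo / np.pi                   # energy-normalized Lambertian term
    spec = specular * np.maximum(n @ h, 0.0) ** shininess
    return diffuse + spec

# Reflected radiance for a single point light (intensity folded into the cosine term):
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([1.0, 0.0, 1.0]))
v = normalize(np.array([-1.0, 0.0, 1.0]))
albedo = np.array([0.8, 0.2, 0.2])
radiance = brdf(n, l, v, albedo, specular=0.5, shininess=32) * max(n @ l, 0.0)
```

Estimating the BRDF from video amounts to inverting this forward model: given observed radiance under known or estimated lighting and viewpoints, recover the parameters (here, albedo, specular weight, and shininess).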
2017-ongoing PhD at the MPI-IS in Tübingen, Autonomous Vision Group
Supervisor: Prof. Dr. Andreas Geiger
Scholar of the International Max-Planck Research School for Intelligent Systems
2015-2017 M.Sc. in Mathematics
Goethe-University Frankfurt am Main
Thesis: Biquadratic Forms in Semi-Definite Optimization
2012-2015 B.Sc. in Mathematics
Goethe-University Frankfurt am Main
2017 Pre-doc Summer School on Learning Systems
Max-Planck ETH Center for Learning Systems, Zurich
2015-2016 Stockholm University, Sweden
Exchange student for one year
2013-2015 Teaching Assistant for Programming, Optimization and Discrete Mathematics
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.
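The core idea of casting joint recovery as a single differentiable objective can be illustrated on a toy photometric problem. The sketch below jointly recovers a diffuse albedo and a specular weight from synthetic intensity observations by plain gradient descent; the shading basis values, parametrization, and solver are simplified stand-ins, not the objective from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known per-observation shading terms, as if precomputed from known
# geometry, light, and view directions (hypothetical synthetic data).
d = rng.uniform(0.1, 1.0, 50)        # diffuse term n·l per observation
p = rng.uniform(0.0, 1.0, 50) ** 8   # specular term (n·h)^k per observation

a_true, s_true = 0.7, 0.3
obs = a_true * d + s_true * p        # synthetic observed intensities

# Jointly recover albedo a and specular weight s by minimizing one
# photometric objective E(a, s) = sum_i (a*d_i + s*p_i - obs_i)^2
# with hand-derived gradients and gradient descent.
a, s = 0.0, 0.0
lr = 0.01
for _ in range(3000):
    r = a * d + s * p - obs          # residuals
    a -= lr * 2 * (r @ d)            # dE/da
    s -= lr * 2 * (r @ p)            # dE/ds
```

In the paper, the objective additionally couples camera poses, geometry, and a differentiable material clustering, but the principle is the same: express everything in one loss and hand it to an off-the-shelf gradient-based solver.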
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.
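The occlusion reasoning that ray potentials encode can be sketched in a few lines: given per-voxel occupancy probabilities along a ray (front to back), the probability that the ray terminates at a voxel is its occupancy times the probability that all voxels in front of it are free. This is the standard occlusion model behind ray potentials; the function below is an illustrative sketch, not RayNet's implementation.

```python
import numpy as np

def ray_termination(occ):
    """Per-voxel probability that a ray terminates at each voxel.

    occ: independent occupancy probabilities, ordered front to back.
    P(terminate at i) = occ[i] * prod_{j<i} (1 - occ[j]).
    """
    occ = np.asarray(occ, dtype=float)
    # Probability that the ray is still unblocked when it reaches voxel i:
    free = np.concatenate(([1.0], np.cumprod(1.0 - occ)[:-1]))
    return occ * free

probs = ray_termination([0.1, 0.5, 0.9, 0.2])
# The probabilities sum to at most 1; the remainder is the probability
# that the ray escapes the volume without hitting any occupied voxel.
```

In RayNet, this physics stays explicit in the MRF, while the CNN supplies view-invariant features, so the learned part never has to rediscover perspective projection and occlusion from data.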
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.