My research interests include probabilistic and approximate algorithms, game AI, graph theory, computational photography, computer vision, and machine learning with its many applications. During my PhD, I am focusing on efficient, intelligent algorithms for image and video processing, along with perceptual metrics for their evaluation. More generally, I work on deep generative models.
Our work with convolutional generative adversarial networks (GANs) has reached state-of-the-art results for single image super-resolution in both quantitative and qualitative benchmarks, and we have further reached state-of-the-art results in video super-resolution. A further line of work entails evaluating generative models such as GANs and improving their performance.
Please see the Projects tab for more information.
As I am close to graduation, I am open to discussing opportunities in academia and research positions in industry. Please contact me via email.
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, an effect linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise into the input of a deterministic decoder; in practice, this merely enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without forcing it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders and to improve the sample quality of existing VAEs. In a rigorous empirical study, we show that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10, and CelebA datasets.
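The ex-post density estimation step described above can be sketched with plain NumPy. This is a minimal illustration under assumed inputs, not the paper's implementation: the `latents` array stands in for latent codes z = E(x) produced by an already-trained deterministic encoder, and a single full-covariance Gaussian plays the role of the fitted density (a richer estimator such as a Gaussian mixture could be substituted).

```python
import numpy as np

# Hypothetical stand-in for latent codes z = E(x) from a trained
# deterministic encoder: 1000 training examples, 8-dimensional latent space.
rng = np.random.default_rng(0)
latents = rng.normal(loc=2.0, scale=0.5, size=(1000, 8))

# Ex-post density estimation (sketch): after training, fit a density to the
# observed latent codes instead of forcing a fixed prior during training.
# Here: a single full-covariance multivariate Gaussian.
mu = latents.mean(axis=0)
cov = np.cov(latents, rowvar=False)

# Sample new latent codes from the fitted density; passing them through the
# decoder D(z_new) would then yield new data points (decoder omitted here).
z_new = rng.multivariate_normal(mu, cov, size=16)
print(z_new.shape)  # (16, 8)
```

Because the density is fit after training, the same recipe applies unchanged to an existing VAE: simply fit the estimator to the encoder's latent codes and sample from it instead of the prior.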
Advances in Neural Information Processing Systems 31, pages: 5234-5243, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)
Sajjadi, M. S. M., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.
Assessing Generative Models via Precision and Recall
15th European Conference on Computer Vision (ECCV), Part III, 11207, pages: 111-127, Lecture Notes in Computer Science, (Editors: Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu and Yair Weiss), Springer, September 2018 (conference)
Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 4448-4456, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)
Pattern Recognition - 38th German Conference (GCPR), 9796, pages: 426-438, Lecture Notes in Computer Science, (Editors: Rosenhahn, B. and Andres, B.), Springer International Publishing, September 2016 (conference)
Proceedings of the 3rd ACM conference on Learning @ Scale, pages: 369-378, (Editors: Haywood, J. and Aleven, V. and Kay, J. and Roll, I.), ACM, L@S, April 2016, (An earlier version of this paper had been presented at the ICML 2015 workshop for Machine Learning for Education.) (conference)
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.