Institute Talks

Robust Discrimination and Generation of Faces using Compact, Disentangled Embeddings

Talk
  • 22 August 2019 • 15:00–16:00
  • Björn Browatzki
  • PS Aquarium

Current solutions to discriminative and generative tasks in computer vision exist separately and often lack interpretability and explainability. Using faces as our application domain, here we present an architecture built around two core ideas that address these issues: first, our framework learns an unsupervised, low-dimensional embedding of faces using an adversarial autoencoder that is able to synthesize high-quality face images. Second, a supervised disentanglement splits the low-dimensional embedding vector into four sub-vectors, each of which contains separated information about one of four major face attributes (pose, identity, expression, and style) that can be used both for discriminative tasks and for manipulating all four attributes in an explicit manner. The resulting architecture achieves state-of-the-art image quality, good discrimination and face retrieval results on each of the four attributes, and supports various face editing tasks using a face representation of only 99 dimensions. Finally, we apply the architecture's robust image synthesis capabilities to visually debug label-quality issues in an existing face dataset.
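
As a rough illustration of the attribute split described above, the sketch below partitions a 99-dimensional embedding into four named sub-vectors and copies one sub-vector from a donor embedding. The per-attribute sizes are illustrative assumptions, not figures from the talk.

```python
import numpy as np

# Hypothetical partition of a 99-dimensional embedding into the four
# attribute sub-vectors; the per-attribute sizes are illustrative only.
SPLITS = {"pose": 9, "identity": 40, "expression": 20, "style": 30}

def split_embedding(z):
    """Slice a flat embedding into named attribute sub-vectors."""
    assert z.shape[-1] == sum(SPLITS.values())
    parts, start = {}, 0
    for name, size in SPLITS.items():
        parts[name] = z[..., start:start + size]
        start += size
    return parts

def edit_attribute(z, donor, name):
    """Explicit attribute editing: copy one sub-vector from a donor embedding."""
    edited, start = z.copy(), 0
    for key, size in SPLITS.items():
        if key == name:
            edited[..., start:start + size] = donor[..., start:start + size]
        start += size
    return edited
```

Swapping, say, the identity sub-vector while keeping pose, expression, and style fixed would then correspond to one of the explicit editing operations the abstract mentions.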

Organizers: Timo Bolkart

Improving the Gaussian Mechanism for Differential Privacy

IS Colloquium
  • 27 June 2018 • 14:15–15:15
  • Borja de Balle Pigem
  • MPI IS Lecture Hall (N0.002)

The Gaussian mechanism is an essential building block used in a multitude of differentially private data analysis algorithms. In this talk I will revisit the classical analysis of the Gaussian mechanism and show that it has several important limitations. For example, our analysis reveals that the variance formula for the original mechanism is far from tight in the high privacy regime and that it cannot be extended to the low privacy regime. We address these limitations by developing a new Gaussian mechanism whose variance is optimally calibrated by solving an equation involving the Gaussian cumulative distribution function. Our analysis side-steps tail-bound approximations and relies on a novel characterisation of differential privacy that might be of independent interest. We numerically show that analytical calibration removes at least a third of the variance of the noise compared to the classical Gaussian mechanism. We also propose to equip the Gaussian mechanism with a post-processing step based on adaptive denoising estimators, leveraging the fact that the variance of the perturbation is known. Experiments with synthetic and real data show that this denoising step yields dramatic accuracy improvements in the high-dimensional regime. Based on joint work with Y.-X. Wang to appear at ICML 2018. Pre-print: https://arxiv.org/abs/1805.06530
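
For context, a minimal sketch of the calibration problem: `classical_sigma` uses the well-known σ = Δ·sqrt(2 ln(1.25/δ))/ε formula (valid only for ε < 1), while `analytic_sigma` bisects over σ using a Gaussian-CDF expression for the mechanism's privacy curve in the spirit of the pre-print above. The exact form of `analytic_delta` is my reconstruction and should be checked against the paper.

```python
import math

def gauss_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def classical_sigma(eps, delta, sensitivity=1.0):
    """Classical calibration; the formula is only valid for eps < 1."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def analytic_delta(sigma, eps, sensitivity=1.0):
    """Privacy curve of the Gaussian mechanism (reconstruction, see lead-in)."""
    a = sensitivity / (2.0 * sigma)
    b = eps * sigma / sensitivity
    return gauss_cdf(a - b) - math.exp(eps) * gauss_cdf(-a - b)

def analytic_sigma(eps, delta, sensitivity=1.0, tol=1e-12):
    """Bisect for the smallest sigma whose privacy curve is below delta."""
    # The classical sigma (clamped to eps < 1) upper-bounds the answer.
    lo, hi = 1e-6, classical_sigma(min(eps, 0.99), delta, sensitivity) + 1.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if analytic_delta(mid, eps, sensitivity) <= delta:
            hi = mid
        else:
            lo = mid
    return hi
```

The bisection works because the privacy curve is decreasing in σ: more noise means a smaller achievable δ at fixed ε.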

Organizers: Michel Besserve Isabel Valera


  • Daniel Häufle
  • 3P2, MPI IS Stuttgart

Organizers: Alexander Badri-Sprowitz


Haptic Engineering and Science at Multiple Scales

IS Colloquium
  • 20 June 2018 • 11:00–12:00
  • Yon Visell, PhD
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

I will describe recent research in my lab on haptics and robotics. It has been a longstanding challenge to realize engineering systems that can match the amazing perceptual and motor feats of biological systems for touch, including the human hand. Some of the difficulties of meeting this objective can be traced to our limited understanding of the mechanics, and to the high dimensionality of the signals, and to the multiple length and time scales - physical regimes - involved. An additional source of richness and complication arises from the sensitive dependence of what we feel on what we do, i.e. on the tight coupling between touch-elicited mechanical signals, object contacts, and actions. I will describe research in my lab that has aimed at addressing these challenges, and will explain how the results are guiding the development of new technologies for haptics, wearable computing, and robotics.

Organizers: Katherine J. Kuchenbecker


Less-artificial intelligence

Talk
  • 18 June 2018 • 15:00–16:00
  • Prof. Dr. Matthias Bethge
  • MPI-IS Stuttgart - 2R04


  • Karl Rohe
  • MPI IS Lecture Hall (N0.002)

This paper uses the relationship between graph conductance and spectral clustering to study (i) the failures of spectral clustering and (ii) the benefits of regularization. The explanation is simple. Sparse and stochastic graphs create a lot of small trees that are connected to the core of the graph by only one edge. Graph conductance is sensitive to these noisy "dangling sets." Spectral clustering inherits this sensitivity. The second part of the paper starts from a previously proposed form of regularized spectral clustering and shows that it is related to the graph conductance on a "regularized graph." We call the conductance on the regularized graph CoreCut. Based upon previous arguments that relate graph conductance to spectral clustering (e.g. Cheeger inequality), minimizing CoreCut relaxes to regularized spectral clustering. Simple inspection of CoreCut reveals why it is less sensitive to small cuts in the graph. Together, these results show that unbalanced partitions from spectral clustering can be understood as overfitting to noise in the periphery of a sparse and stochastic graph. Regularization fixes this overfitting. In addition to this statistical benefit, these results also demonstrate how regularization can improve the computational speed of spectral clustering. We provide simulations and data examples to illustrate these results.
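
A common form of the regularization discussed above adds a small constant τ/n to every adjacency entry before forming the normalized operator, which damps the influence of dangling sets. A minimal numpy sketch; taking τ equal to the mean degree is a common heuristic, not necessarily the paper's choice.

```python
import numpy as np

def regularized_spectral_embedding(A, k, tau=None):
    """Top-k eigenvectors of the regularized normalized adjacency.

    Regularization replaces A with A + (tau/n) * J, where J is the
    all-ones matrix, so every node gains a little weight toward the
    core of the graph.
    """
    n = A.shape[0]
    if tau is None:
        tau = A.sum() / n  # mean degree: a common default heuristic
    A_tau = A + tau / n
    d = A_tau.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ A_tau @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
    return vecs[:, -k:]
```

Clustering would proceed by running k-means on the rows of the returned embedding; with τ > 0 the small dangling trees no longer dominate the leading eigenvectors.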

Organizers: Damien Garreau


  • Adrián Javaloy
  • S2 seminar room

The problem of text normalization is simple to understand: transform a given arbitrary text into its spoken form. In the context of text-to-speech systems – which we will focus on – this can be exemplified by turning the text “$200” into “two hundred dollars”. Lately, interest in solving this problem with deep learning techniques has grown, since it is a highly context-dependent problem that is still being solved by ad-hoc solutions. So much so that Google even launched a contest on Kaggle to solve this problem. In this talk we will see how this problem has been approached as part of a Master's thesis. Namely, the problem is tackled as if it were a machine translation problem from English to normalized English, and so the proposed architecture is a neural machine translation architecture with the addition of traditional attention mechanisms. This network is typically composed of an encoder and a decoder, where both of them are multi-layer LSTM networks. As part of this work, and with the aim of proving the feasibility of convolutional neural networks in natural-language processing problems, we propose and compare different architectures for the encoder based on convolutional networks. In particular, we propose a new architecture called the Causal Feature Extractor, which proves to be a strong encoder as well as an attention-friendly architecture.
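
The abstract does not define the Causal Feature Extractor, but causal convolutional encoders generally rest on one trick: left-pad the input so that output position t depends only on inputs at positions ≤ t. A minimal numpy sketch of that generic idea (an assumption about the building block, not the thesis's actual implementation):

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D convolution where output[t] depends only on x[:t+1].

    Left-padding with k-1 zeros is the standard trick used by causal
    convolutional encoders: no future positions leak into the output.
    """
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    # Convolution: y[t] = sum_i kernel[i] * x[t - i], with x[<0] = 0.
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])
```

With kernel [0, 1] this computes a one-step delay, which makes the causality easy to verify by hand.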

Organizers: Philipp Hennig


  • Prof. Andrew Blake
  • Ground Floor Seminar Room N0.002

Organizers: Ahmed Osman


Learning Representations for Hyperparameter Transfer Learning

IS Colloquium
  • 11 June 2018 • 11:15–12:15
  • Cédric Archambeau
  • MPI IS Lecture Hall (N0.002)

Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization. Typically, BO relies on conventional Gaussian process regression, whose algorithmic complexity is cubic in the number of evaluations. As a result, Gaussian process-based BO cannot leverage large numbers of past function evaluations, for example, to warm-start related BO runs. After a brief intro to BO and an overview of several use cases at Amazon, I will discuss a multi-task adaptive Bayesian linear regression model whose computational complexity is linear in the number of function evaluations and which can leverage information from related black-box functions through a shared deep neural net. Experimental results show that the neural net learns a representation suitable for warm-starting related BO runs, and that these runs can be accelerated when the target black-box function (e.g., validation loss) is learned together with other related signals (e.g., training loss). The proposed method was found to be at least one order of magnitude faster than competing neural network-based methods recently published in the literature. This is joint work with Valerio Perrone, Rodolphe Jenatton, and Matthias Seeger.
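
To see why Bayesian linear regression scales linearly in the number of evaluations, note that the posterior only requires solving a D × D system, where D is the fixed feature dimension; the n-dependent work is a single Gram product. A minimal numpy sketch, with `Phi` standing in for the shared neural-net features (a hypothetical stand-in, not the talk's actual model):

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, noise_var=1.0):
    """Posterior over weights for Bayesian linear regression.

    Cost is O(n d^2) for the Gram product plus O(d^3) for the solve:
    linear in the number of evaluations n, unlike GP regression.
    """
    n, d = Phi.shape
    A = (Phi.T @ Phi) / noise_var + alpha * np.eye(d)  # d x d precision
    cov = np.linalg.inv(A)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def blr_predict(Phi_star, mean, cov, noise_var=1.0):
    """Predictive mean and variance at new feature rows."""
    mu = Phi_star @ mean
    var = np.einsum("ij,jk,ik->i", Phi_star, cov, Phi_star) + noise_var
    return mu, var
```

In the multi-task setting described above, many such heads would share one feature map, so past evaluations of related functions all contribute to learning `Phi`.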

Organizers: Isabel Valera


  • Prof. Martin Spindler
  • MPI IS Lecture Hall (N0.002)

In this talk, I will first give an introduction to the double machine learning framework, which allows inference on parameters in high-dimensional settings. Then, two applications are presented: transformation models and Gaussian graphical models in high-dimensional settings. Both kinds of models are widely used by practitioners. As high-dimensional data sets become more and more available, it is important to handle situations where the number of parameters is large compared to the sample size.
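
The partialling-out recipe behind double machine learning can be sketched in a few lines: residualize both the outcome and the treatment on the controls using cross-fitted ML predictions, then regress residual on residual. A hedged numpy sketch for the partially linear model, using ridge regression as a stand-in nuisance learner (the framework allows any ML method here):

```python
import numpy as np

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1e-3):
    """Ridge regression as an illustrative nuisance learner."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
    return X_te @ w

def dml_plm(Y, D, X, n_folds=2, seed=0):
    """Cross-fitted double ML for Y = theta*D + g(X) + u.

    Out-of-fold predictions residualize Y and D on X; the target
    parameter is then the slope of residual-on-residual regression.
    """
    n = len(Y)
    idx = np.random.default_rng(seed).permutation(n)
    rY, rD = np.zeros(n), np.zeros(n)
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        rY[test] = Y[test] - ridge_fit_predict(X[train], Y[train], X[test])
        rD[test] = D[test] - ridge_fit_predict(X[train], D[train], X[test])
    return (rD @ rY) / (rD @ rD)
```

Cross-fitting is what keeps the nuisance estimation error from biasing the final estimate, which is the key point of the framework.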

Organizers: Philipp Geiger