In an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings, much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures. We treat the neural network input and output as random variables, and consider group invariance from the perspective of probabilistic symmetry. Drawing on tools from probability and statistics, we establish a link between functional and probabilistic symmetry, and obtain functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Those representations characterize the structure of neural networks that can be used to represent such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We develop the details of the general program for exchangeable sequences and arrays, recovering a number of recent examples as special cases. This is work in collaboration with Yee Whye Teh. https://arxiv.org/abs/1901.06082
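A well-known special case recovered by this program is the sum-decomposition of permutation-invariant functions of exchangeable sequences, as in DeepSets. A minimal NumPy sketch, with hypothetical random weights standing in for the learned maps phi and rho (which a real model would train):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random weights; a real model would learn phi and rho.
W_phi = rng.normal(size=(2, 8))     # element-wise embedding phi
W_rho = rng.normal(size=(8, 1))     # readout rho on the pooled embedding

def deep_set(x):
    """f(x) = rho(sum_i phi(x_i)), invariant to permuting the rows of x."""
    h = np.tanh(x @ W_phi)          # phi applied to each element independently
    pooled = h.sum(axis=0)          # symmetric pooling => permutation invariance
    return (pooled @ W_rho).item()  # rho

x = rng.normal(size=(5, 2))         # an exchangeable sequence of 5 elements
perm = rng.permutation(5)
assert np.isclose(deep_set(x), deep_set(x[perm]))  # invariance check
```

The symmetric pooling step is what makes the function invariant; replacing the sum with per-element outputs would instead give an equivariant map.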
Organizers: Isabel Valera
Gaussian Processes are a principled, practical, probabilistic approach to learning in flexible non-parametric models and have found numerous applications in regression, classification, unsupervised learning and reinforcement learning. Inference, learning and prediction can be done exactly on small data sets with a Gaussian likelihood. In more realistic applications, with large-scale data and more complicated likelihoods, approximations are necessary. The variational framework for approximate inference in Gaussian processes has recently emerged as a highly effective and practical tool. I will review this framework and demonstrate its capabilities when applied to non-linear state-space models.
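For intuition, exact inference in the small-data, Gaussian-likelihood case reduces to linear algebra with the kernel matrix. A minimal sketch, assuming a squared-exponential kernel and made-up hyperparameters (not tied to the talk's models):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(a, b) = sf^2 exp(-(a-b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x, y, xs, noise=0.1):
    """Exact GP posterior mean and variance under a Gaussian likelihood."""
    K = rbf(x, x) + noise**2 * np.eye(len(x))   # noisy training covariance
    Ks = rbf(x, xs)                             # train/test cross-covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha                         # posterior mean at xs
    v = np.linalg.solve(K, Ks)
    var = np.diag(rbf(xs, xs)) - np.sum(Ks * v, axis=0)  # posterior variance
    return mean, var

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
mean, var = gp_posterior(x, y, np.array([0.0]))
```

With larger data sets the O(n^3) solve above becomes the bottleneck, which is what the variational approximations in the talk address.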
Organizers: Philipp Hennig
Taking advantage of state-of-the-art micro/nanotechnologies, fascinating functional biomaterials and integrated biosystems, we can address numerous important problems in fundamental biology as well as clinical applications in cancer diagnosis and treatment.
Organizers: Peer Fischer
An exciting talk on modeling anguilliform swimming and on robotic testing.
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Artificial Intelligence, namely: (1) how we can generalize image classification models to cases where no visual training data is available, (2) how to generate images and image features using detailed visual descriptions, and (3) how our models focus on the discriminating properties of the visible object, jointly predict a class label, and explain why the predicted label is appropriate for the image whereas another label is not.
Organizers: Andreas Geiger
Complex shapes can be summarized by a coarsely defined structure that is consistent and robust across a variety of observations. However, existing synthesis techniques do not consider structural decomposition during synthesis, leading to the generation of implausible or structurally unrealistic shapes. We explore how structure-aware reasoning can benefit existing generative techniques for complex 2D and 3D shapes. We evaluate our methodology on a 3D dataset of chairs and a 2D dataset of typefaces.
Organizers: Sergi Pujades
Touch requires mechanical contact and is governed by the physics of friction. Frictional movements may convert the continuous 3D profile of textured objects into discrete and probabilistic movement events of the viscoelastic integument (skin/hair) called stick-slip movements (slips). This complex transformation may further be determined by the microanatomy and the active movements of the sensing organ. Thus, the integument may realize a computation, transforming the tactile world in a context-dependent way - long before it even activates neurons. The possibility that the tactile world is perceived through these ‘fractured goggles’ of friction has been largely ignored by classical perceptual and neuroscientific work. I will present biomechanical, neuroscientific, and behavioral work supporting the slip hypothesis.
Organizers: Katherine J. Kuchenbecker
Optimal control problems are often too complex to solve analytically. Computational methods usually replace the continuous, infinite-dimensional problem by a finite-dimensional discrete approximation. The talk will survey classical discretization techniques based on a Runge-Kutta approximation to the differential equations (an h-method) and then introduce recent approximations based on collocation at the roots of orthogonal polynomials (a p-method). The best approximations are often achieved using an hp-framework that combines the best features of both approaches. Numerical results using the GPOPS-II software package (General Pseudospectral Optimal Control Software) will be presented.
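To illustrate the h-method ingredient: each mesh interval of a Runge-Kutta transcription propagates the dynamics x' = f(x, u, t) by one step, and a direct-transcription solver then imposes these steps as algebraic constraints. A minimal sketch with the classical fourth-order scheme on a hypothetical double-integrator example (not GPOPS-II):

```python
import numpy as np

def rk4_step(f, x, u, t, h):
    """One classical fourth-order Runge-Kutta step for x' = f(x, u, t),
    with the control u held constant over the step (one h-method mesh cell)."""
    k1 = f(x, u, t)
    k2 = f(x + 0.5 * h * k1, u, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, u, t + 0.5 * h)
    k4 = f(x + h * k3, u, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Double integrator: state (position, velocity), control u = acceleration.
f = lambda x, u, t: np.array([x[1], u])
x = np.array([0.0, 0.0])
N = 100                               # mesh intervals on [0, 1]
for k in range(N):
    x = rk4_step(f, x, 1.0, k / N, 1.0 / N)
# With u = 1 throughout, the exact solution gives position 0.5, velocity 1.
```

In an hp-framework, refining accuracy means adding mesh intervals like these (h) and/or raising the degree of the collocation polynomial within each interval (p).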
Organizers: Jia-Jie Zhu
The Gaussian mechanism is an essential building block used in a multitude of differentially private data analysis algorithms. In this talk I will revisit the classical analysis of the Gaussian mechanism and show that it has several important limitations. For example, our analysis reveals that the variance formula for the original mechanism is far from tight in the high-privacy regime and that it cannot be extended to the low-privacy regime. We address these limitations by developing a new Gaussian mechanism whose variance is optimally calibrated by solving an equation involving the Gaussian cumulative distribution function. Our analysis side-steps the use of tail-bound approximations and relies on a novel characterisation of differential privacy that may be of independent interest. We show numerically that analytic calibration removes at least a third of the variance of the noise compared to the classical Gaussian mechanism. We also propose to equip the Gaussian mechanism with a post-processing step based on adaptive denoising estimators, leveraging the fact that the variance of the perturbation is known. Experiments with synthetic and real data show that this denoising step yields dramatic accuracy improvements in the high-dimensional regime. Based on joint work with Y.-X. Wang, to appear at ICML 2018. Pre-print: https://arxiv.org/abs/1805.06530
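For reference, the classical calibration uses sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon (valid only for epsilon < 1), while the analytic mechanism solves an equation involving the Gaussian CDF for the smallest admissible sigma. A sketch of the idea, assuming the exact-delta expression from the pre-print and using simple bisection (function names are mine, not the paper's):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard Gaussian CDF

def classical_sigma(eps, delta, sens=1.0):
    """Classical calibration: valid only for eps < 1, and not tight."""
    return sens * math.sqrt(2 * math.log(1.25 / delta)) / eps

def analytic_delta(sigma, eps, sens=1.0):
    """Exact delta achieved by Gaussian noise sigma at privacy level eps
    (the CDF-based characterisation; my transcription of the pre-print)."""
    a = sens / (2 * sigma)
    b = eps * sigma / sens
    return Phi(a - b) - math.exp(eps) * Phi(-a - b)

def analytic_sigma(eps, delta, sens=1.0):
    """Smallest sigma meeting (eps, delta)-DP, by bisection on the exact delta."""
    lo = 1e-6
    hi = classical_sigma(min(eps, 0.99), delta, sens) + 1.0  # known-valid start
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if analytic_delta(mid, eps, sens) > delta:
            lo = mid  # too little noise: not yet (eps, delta)-DP
        else:
            hi = mid  # enough noise: try to shrink it
    return hi

eps, delta = 0.5, 1e-5
print(classical_sigma(eps, delta), analytic_sigma(eps, delta))
```

The analytically calibrated sigma comes out strictly smaller than the classical one at the same (eps, delta), which is the variance saving the abstract refers to.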
I will describe recent research in my lab on haptics and robotics. It has been a longstanding challenge to realize engineering systems that can match the amazing perceptual and motor feats of biological systems for touch, including the human hand. Some of the difficulties of meeting this objective can be traced to our limited understanding of the mechanics, to the high dimensionality of the signals, and to the multiple length and time scales - physical regimes - involved. An additional source of richness and complication arises from the sensitive dependence of what we feel on what we do, i.e. on the tight coupling between touch-elicited mechanical signals, object contacts, and actions. I will describe research in my lab that has aimed at addressing these challenges, and will explain how the results are guiding the development of new technologies for haptics, wearable computing, and robotics.
Organizers: Katherine J. Kuchenbecker