Institute Talks

Structured Deep Visual Dynamics Models for Robot Manipulation

Talk
  • 23 October 2017 • 10:00 - 11:15
  • Arunkumar Byravan
  • AMD meeting room

The ability to predict how an environment changes based on forces applied to it is fundamental for a robot to achieve specific goals. Traditionally in robotics, this problem is addressed through the use of pre-specified models or physics simulators, taking advantage of prior knowledge of the problem structure. While these models are general and have broad applicability, they depend on accurate estimation of model parameters such as object shape, mass, friction, etc. On the other hand, learning-based methods such as Predictive State Representations or more recent deep learning approaches learn these models directly from raw perceptual information in a model-free manner. These methods operate on raw data without any intermediate parameter estimation, but lack the structure and generality of model-based techniques. In this talk, I will present work that tries to bridge the gap between these two paradigms by proposing a specific class of deep visual dynamics models (SE3-Nets) that explicitly encode strong physical and 3D geometric priors (specifically, rigid body dynamics) in their structure. As opposed to traditional deep models that reason about dynamics/motion at the pixel level, we show that the physical priors implicit in our network architectures enable them to reason about dynamics at the object level: our network learns to identify objects in the scene and to predict a rigid body rotation and translation per object. I will present results on applying our deep architectures to two specific problems: 1) modeling scene dynamics, where the task is to predict future depth observations given the current observation and an applied action, and 2) real-time visuomotor control of a Baxter manipulator based only on raw depth data. We show that: 1) our proposed architectures significantly outperform baseline deep models on dynamics modelling, and 2) our architectures perform comparably to or better than baseline models for visuomotor control while operating at camera rates (30Hz) and relying on far less information.
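
As a rough illustration of the object-level structure described above (a minimal sketch under assumed tensor shapes, not the SE3-Net implementation), suppose a network has already predicted K soft object masks and K rigid-body transforms for the points of the current depth frame; the predicted next frame is then obtained by applying each transform and blending the results with the masks:

```python
import numpy as np

def apply_se3_dynamics(points, masks, rotations, translations):
    """Apply K predicted rigid-body motions to a point cloud.

    points:       (N, 3) 3D points from the current depth frame
    masks:        (N, K) soft assignment of each point to one of K objects
    rotations:    (K, 3, 3) predicted rotation matrix per object
    translations: (K, 3)    predicted translation vector per object
    Returns the predicted next-frame point cloud, shape (N, 3).
    """
    # Transform every point by every object's rigid motion: shape (K, N, 3)
    transformed = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # Blend the K candidate motions using the soft object masks
    predicted = np.einsum('nk,kni->ni', masks, transformed)
    return predicted
```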

Organizers: Franzi Meier

Modern Optimization for Structured Machine Learning

IS Colloquium
  • 23 October 2017 • 11:15 - 12:15
  • Simon Lacoste-Julien
  • IS Lecture Hall

Machine learning has become a popular application domain for modern optimization techniques, pushing its algorithmic frontier. The need for large-scale optimization algorithms that can handle millions of dimensions or data points, typical of the big data era, has brought a resurgence of interest in first-order algorithms, making us revisit the venerable stochastic gradient method [Robbins-Monro 1951] as well as the Frank-Wolfe algorithm [Frank-Wolfe 1956]. In this talk, I will review recent improvements on these algorithms which can exploit the structure of modern machine learning approaches. I will explain why the Frank-Wolfe algorithm has become so popular lately, and present a surprising tweak on the stochastic gradient method which yields a fast linear convergence rate. Motivating applications will include weakly supervised video analysis and structured prediction problems.
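
As a generic, textbook sketch (not the speaker's implementation), the Frank-Wolfe iteration is easy to state for minimizing a smooth convex function over the probability simplex, where the linear minimization oracle reduces to picking the single best vertex:

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=100):
    """Frank-Wolfe on the probability simplex.

    grad_f: function returning the gradient of the smooth objective at x
    x0:     feasible starting point (non-negative entries summing to 1)
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        # Linear minimization oracle over the simplex: the minimizing
        # vertex puts all mass on the coordinate with the smallest gradient.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        # Standard step size; line search or away steps would improve this.
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - y||^2 over the simplex (gradient is 2*(x - y)).
y = np.array([0.3, 1.2, -0.5, 0.8])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - y), np.ones(4) / 4)
```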

Organizers: Philipp Hennig

  • Jiri Matas
  • Max Planck House Lecture Hall

Computer vision problems often involve optimization of two quantities, one of which is time. Such problems can be formulated as time-constrained optimization or as a performance-constrained search for the fastest algorithm. We show that it is possible to obtain quasi-optimal time-constrained solutions to some vision problems by applying Wald's theory of sequential decision-making. Wald assumes independence of observations, which is rarely true in computer vision. We address this problem by combining Wald's sequential probability ratio test and AdaBoost. The solution, called WaldBoost, can be viewed as a principled way to build a close-to-optimal “cascade of classifiers” of the Viola-Jones type. The approach will be demonstrated on four tasks: (i) face detection, (ii) establishing reliable correspondences between images, (iii) real-time detection of interest points, and (iv) model search and outlier detection using RANSAC. In the face detection problem, the objective is learning the fastest detector satisfying constraints on false positive and false negative rates. The correspondence pruning addresses the problem of fast selection with a predefined false negative rate. In the interest point problem, we show how a fast implementation of known detectors can be obtained with WaldBoost: the detectors being mimicked provide a training set of positive and negative examples of interest points, and WaldBoost learns a detector, (significantly) faster than the providers of the training set, formed as a linear combination of efficiently computable features. In RANSAC, we show how to exploit Wald's test in a randomised model verification procedure to obtain an algorithm significantly faster than deterministic verification yet with equivalent probabilistic guarantees of correctness.
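
To make the sequential-decision idea concrete, here is a minimal sketch in the spirit of Wald's SPRT (a simplification for illustration, not the WaldBoost code): weak classifiers are evaluated one at a time, and evaluation stops as soon as the running score crosses a per-stage acceptance or rejection threshold.

```python
def sequential_classify(x, weak_learners, theta_accept, theta_reject):
    """Evaluate weak learners in order and stop early when possible.

    weak_learners: list of functions h_t(x) returning real-valued scores
                   (e.g. AdaBoost weak classifiers in their trained order)
    theta_accept:  per-stage thresholds above which x is accepted (+1)
    theta_reject:  per-stage thresholds below which x is rejected (-1)
    Returns the decision and the number of weak learners actually evaluated.
    """
    score = 0.0
    for t, h in enumerate(weak_learners):
        score += h(x)
        if score >= theta_accept[t]:
            return +1, t + 1          # early acceptance
        if score <= theta_reject[t]:
            return -1, t + 1          # early rejection
    # Undecided after all stages: fall back to the sign of the full score.
    return (1 if score >= 0 else -1), len(weak_learners)
```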

Organizers: Gerard Pons-Moll


Scalable Surface-Based Stereo Matching

Talk
  • 10 April 2014 • 14:00
  • Daniel Scharstein
  • MRC seminar room (0.A.03)

Stereo matching -- establishing correspondences between images taken from nearby viewpoints -- is one of the oldest problems in computer vision.  While impressive progress has been made over the last two decades, most current stereo methods do not scale to the high-resolution images taken by today's cameras since they require searching the full space of all possible disparity hypotheses over all pixels.

In this talk I will describe a new scalable stereo method that only evaluates a small portion of the search space. The method first generates plane hypotheses from matched sparse features, which are then refined into surface hypotheses using local slanted plane sweeps over a narrow disparity range. Finally, each pixel is assigned to one of the local surface hypotheses. The technique yields significant speedups over previous algorithms while achieving state-of-the-art accuracy on high-resolution stereo pairs of up to 19 megapixels.
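
As a simplified sketch of the final assignment step (with a hypothetical cost function and plane parameterization, not the author's code), each pixel can be labeled with whichever slanted-plane hypothesis predicts the lowest matching cost at the disparity d = a*x + b*y + c:

```python
import numpy as np

def assign_pixels_to_planes(cost_fn, planes, height, width):
    """Assign each pixel to the slanted-plane hypothesis with the lowest matching cost.

    cost_fn: cost_fn(y, x, d) -> matching cost of pixel (y, x) at disparity d
    planes:  list of (a, b, c) tuples with predicted disparity d = a*x + b*y + c
    Returns per-pixel plane labels and the corresponding disparity map.
    """
    labels = np.zeros((height, width), dtype=int)
    disparity = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            best_cost = np.inf
            for k, (a, b, c) in enumerate(planes):
                d = a * x + b * y + c
                cost = cost_fn(y, x, d)
                if cost < best_cost:
                    best_cost, labels[y, x], disparity[y, x] = cost, k, d
    return labels, disparity
```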

I will also present a new dataset of high-resolution stereo pairs with subpixel-accurate ground truth, and provide a brief outlook on the upcoming new version of the Middlebury stereo benchmark.


  • Simo Särkkä
  • Max Planck House Lecture Hall

Gaussian process regression is a non-parametric Bayesian machine learning paradigm, where instead of estimating parameters of fixed-form functions, we model the whole unknown functions as Gaussian processes. Gaussian processes are also commonly used for representing uncertainties in models of dynamic systems in many applications such as tracking, navigation, and automatic control systems. The latter models are often formulated as state-space models, where the use of non-linear Kalman-filter-type methods is common. The aim of this talk is to discuss connections between Kalman filtering methods and Gaussian process regression. In particular, I discuss representations of Gaussian processes as state-space models, which enable the use of computationally efficient Kalman-filter-based (or more general Bayesian-filter-based) solutions to Gaussian process regression problems. This also allows for computationally efficient inference in latent force models (LFMs), which combine first-principles mechanical models with non-parametric Gaussian process regression models.
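
A minimal sketch of the idea, assuming a Matérn-1/2 (Ornstein-Uhlenbeck) covariance with illustrative parameters ell, sigma2 and noise_var: this GP prior is an exact linear state-space model, so a scalar Kalman filter runs over the sorted inputs in O(n) time. The sketch returns filtered estimates only; a Rauch-Tung-Striebel smoothing pass (omitted here) would recover the full GP posterior.

```python
import numpy as np

def gp_regression_kalman(t, y, ell=1.0, sigma2=1.0, noise_var=0.1):
    """O(n) GP regression with a Matern-1/2 (Ornstein-Uhlenbeck) kernel.

    The OU process with covariance sigma2 * exp(-|t - t'| / ell) is a
    linear state-space model, so its filtered mean and variance at the
    sorted inputs t follow from a scalar Kalman filter.
    """
    order = np.argsort(t)
    t, y = np.asarray(t)[order], np.asarray(y)[order]
    m, P = 0.0, sigma2              # stationary prior mean and variance
    means, variances = [], []
    prev_t = t[0]
    for tk, yk in zip(t, y):
        # Predict: discretized OU dynamics over the time gap
        a = np.exp(-(tk - prev_t) / ell)
        m, P = a * m, a * a * P + sigma2 * (1.0 - a * a)
        # Update with the noisy observation y_k = x_k + noise
        gain = P / (P + noise_var)
        m, P = m + gain * (yk - m), (1.0 - gain) * P
        means.append(m)
        variances.append(P)
        prev_t = tk
    return np.array(means), np.array(variances)
```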

Organizers: Philipp Hennig


  • Rainer Dahlhaus
  • Max Planck House Lecture Hall

(joint work with Jan C. Neddermeyer) A technique for online estimation of spot volatility for high-frequency data is developed. The algorithm works directly on the transaction data and updates the volatility estimate immediately after the occurrence of a new transaction. Furthermore, a nonlinear market microstructure noise model is proposed that reproduces several stylized facts of high-frequency data. A computationally efficient particle filter is used that allows for the approximation of the unknown efficient prices and, in combination with a recursive EM algorithm, for the estimation of the volatility curve. We neither assume that the transaction times are equidistant nor do we use interpolated prices. We also make a distinction between volatility per time unit and volatility per transaction and provide estimators for both. More precisely, we use a model with random time change where spot volatility is decomposed into spot volatility per transaction times the trading intensity, thus highlighting the influence of trading intensity on volatility.
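
As a very rough illustration of the filtering component only, here is a textbook bootstrap particle filter for a basic stochastic volatility model (this is not the authors' market microstructure noise model, and it omits the recursive EM step and the random time change):

```python
import numpy as np

def volatility_particle_filter(returns, n_particles=1000,
                               phi=0.98, sigma_eta=0.2, mu=-1.0):
    """Bootstrap particle filter for a basic stochastic volatility model.

    State:       h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t
    Observation: r_t = exp(h_t / 2) * eps_t,   eta_t, eps_t ~ N(0, 1)
    Returns the filtered posterior mean of the log-volatility h_t.
    """
    rng = np.random.default_rng(0)
    # Initialize particles from the stationary distribution of h_t
    h = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal(n_particles)
    filtered = []
    for r in returns:
        # Propagate particles through the AR(1) log-volatility dynamics
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        # Weight by the likelihood of the observed return under N(0, exp(h))
        w = np.exp(-0.5 * (r**2) * np.exp(-h) - 0.5 * h)
        w /= w.sum()
        filtered.append(np.sum(w * h))
        # Multinomial resampling to avoid weight degeneracy
        h = h[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(filtered)
```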

Organizers: Michel Besserve


Simulation in physical scene understanding

IS Colloquium
  • 28 March 2014 • 11:15 - 12:45
  • Peter Battaglia
  • Max Planck House Lecture Hall

Our ability to understand a scene is central to how we interact with our environment and with each other. Classic research on visual scene perception has focused on how people "know what is where by looking", but this talk will explore people's ability to infer the "hows" and "whys" of their world, and in particular, how they form a physical understanding of a scene. From a glance we can know so much: not only what objects are where, but whether they are movable, fragile, slimy, or hot; whether they were made by hand, by machine, or by nature; whether they are broken and how they could be repaired; and so on. I posit that these common-sense physical intuitions are made possible by the brain's sophisticated capacity for constructing and manipulating a rich mental representation of a scene via a mechanism of approximate probabilistic simulation -- in short, a physics engine in the head. I will present a series of recent and ongoing studies that develop and test this computational model in a variety of prediction, inference, and planning tasks. Our model captures various aspects of people's experimental judgments, including the accuracy of their performance as well as several illusions and errors. These results help explain core aspects of human mental models that are instrumental to how we understand and act in our everyday world. They also open new directions for developing robotic and AI systems that can perceive, reason, and act the way people do.

Organizers: Michel Besserve


Video-based Analysis of Humans and Their Behavior

Talk
  • 27 March 2014 • 14:00
  • Stan Sclaroff
  • MRC Seminar room (0.A.03)

This talk will give an overview of some of the research in the Image and Video Computing Group at Boston University related to image- and video-based analysis of humans and their behavior, including: tracking humans, localizing and classifying actions in space-time, exploiting contextual cues in action classification, estimating human pose from images, analyzing the communicative behavior of children in video, and sign language recognition and retrieval.

Collaborators in this work include (in alphabetical order): Vassilis Athitsos, Qinxun Bai, Margrit Betke, R. Gokberk Cinbis, Kun He, Nazli Ikizler-Cinbis, Hao Jiang, Liliana Lo Presti, Shugao Ma, Joan Nash, Carol Neidle, Agata Rozga, Tai-peng Tian, Ashwin Thangali, Zheng Wu, and Jianming Zhang.


Multi-View Perception of Dynamic Scenes

IS Colloquium
  • 20 March 2014 • 11:15 - 12:30
  • Edmond Boyer
  • Max Planck House Lecture Hall

The INRIA MORPHEO research team is working on the perception of moving shapes using multiple camera systems. Such systems allow dense information on shapes and their motions to be recovered using visual cues. This opens avenues for research on how to model, understand and animate real dynamic shapes using several videos. In this talk I will focus in particular on recent activities in the team on two fundamental components of the multi-view perception of dynamic scenes: (i) the recovery of time-consistent shape models, or shape tracking, and (ii) the segmentation of objects in multiple views and over time.

Organizers: Gerard Pons-Moll


  • Prof. Yoshinari Kameda
  • MRC seminar room (0.A.03)

This talk presents our 3D video production method, by which a user can watch a real game from any free viewpoint. Players in the game are captured by 10 cameras and reproduced three-dimensionally by a billboard-based representation in real time. In producing the 3D video, we have also worked on a good user interface that enables people to move the camera intuitively. As the speaker also works on a wide variety of topics from computer vision to augmented reality, selected recent works will be introduced briefly as well.

Dr. Yoshinari Kameda started his research with human pose estimation as his Ph.D. thesis, and has since expanded his topics of interest to computer vision, human interfaces, and augmented reality.
He is now an associate professor at the University of Tsukuba.
He is also a member of the Center for Computational Science of the University of Tsukuba, where some outstanding supercomputers are in operation.
He served as an area chair of the International Symposium on Mixed and Augmented Reality for four years (2007-2010).


  • Christof Hoppe
  • MRC Seminar Room

3D reconstruction from 2D still-images (Structure-from-Motion) has reached maturity, and together with new image acquisition devices like Micro Aerial Vehicles (MAVs), new interesting application scenarios arise. However, acquiring an image set suited for a complete and accurate reconstruction is, even for expert users, a non-trivial task. To overcome this problem, we propose two different methods. In the first part of the talk, we will present an SfM method that performs sparse reconstruction of 10Mpx still-images and surface extraction from sparse and noisy 3D point clouds in real time. We therefore developed a novel efficient image localisation method and a robust surface extraction that works in a fully incremental manner directly on sparse 3D points without a densification step. The real-time feedback on the reconstruction quality then enables the user to control the acquisition process interactively. In the second part, we will present ongoing work on a novel view planning method that is designed to deliver a set of images that can be processed by today's multi-view reconstruction pipelines.


  • Bernt Schiele
  • Max Planck House Lecture Hall

This talk will highlight recent progress on two fronts. First, we will talk about a novel image-conditioned person model that allows for effective articulated pose estimation in realistic scenarios. Second, we describe our work towards activity recognition and the ability to describe video content with natural language. 

Both efforts are part of a longer-term agenda towards visual scene understanding. While visual scene understanding has long been advocated as the "holy grail" of computer vision, we believe it is time to address this challenge again, based on the progress in recent years.