This talk will survey recent work on multi-contact locomotion control of humanoid and legged robots. I will start by presenting results on robust optimization-based control. We exploited robust optimization techniques, either stochastic or worst-case, to improve the robustness of Task-Space Inverse Dynamics (TSID), a well-known control framework for legged robots. We modeled uncertainties in the joint torques and immunized the constraints of the system against any realization of these uncertainties. We also applied the same methodology to ensure the balance of the robot despite bounded errors in its inertial parameters. Extensive simulations in a realistic environment show that the proposed robust controllers greatly outperform the classical one. Then I will present preliminary results on a new capturability criterion for legged robots in multi-contact. "N-step capturability" is the ability of a system to come to a stop by taking N or fewer steps. Simplified models to compute N-step capturability already exist and are widely used, but they are limited to locomotion on flat terrain. We propose a new efficient algorithm to compute 0-step capturability for a robot in arbitrary contact scenarios. Finally, I will present our recent efforts to transfer the above-mentioned techniques to the real humanoid robot HRP-2, on which we recently implemented joint torque control.
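The worst-case immunization idea can be illustrated on a single linear inequality constraint. The following is a minimal sketch of my own (not the controllers from the talk; the function `robust_bound` and the error bound `delta` are illustrative assumptions): if the torque error is bounded elementwise by `delta`, the worst-case violation of `a @ tau <= b` is `delta * ||a||_1`, so tightening the bound by that amount makes the nominal constraint robust.

```python
import numpy as np

def robust_bound(a, b, delta):
    """Tighten a linear constraint a @ tau <= b so that it holds for
    every realization tau + e with |e_i| <= delta (worst case).
    The worst case of a @ e over that box is delta * ||a||_1."""
    return b - delta * np.abs(a).sum()

a = np.array([1.0, -2.0, 0.5])   # illustrative constraint row
b = 10.0
delta = 0.2                      # assumed elementwise torque-error bound
b_rob = robust_bound(a, b, delta)
# any nominal tau with a @ tau <= b_rob also satisfies
# a @ (tau + e) <= b for all errors with |e|_inf <= delta
```

The same tightening applied to every row of a constraint matrix is the basic mechanism behind worst-case robust counterparts of linear constraints.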
Organizers: Ludovic Righetti
More than half of persons with spinal cord injuries (SCI) suffer from impairments of both hands, which result in a tremendous decrease in quality of life and represent a major barrier to inclusion in society. Functional restoration is possible with neuroprostheses (NPs) based on functional electrical stimulation (FES). A brain-computer interface (BCI) provides a means of control for such neuroprostheses, since users have limited abilities to use traditional assistive devices. This talk presents our early research on BCI-based NP control based on motor imagery, discusses hybrid BCI solutions, and shows our work on movement trajectory decoding. An outlook on future BCI applications will conclude this talk.
Organizers: Moritz Grosse-Wentrup
Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning. First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.
Organizers: Stefan Schaal
I will describe a line of work that aims to automatically understand images of animals and plants. I will begin by describing recent work that uses Bounded Distortion matching to model pose variation in animals. Using a generic 3D model of an animal and multiple images of different individuals in various poses, we construct a model that captures the way in which the animal articulates. This is done by solving for the pose of the template that matches each image while simultaneously solving for the stiffness of each tetrahedron of the model. We minimize an L1 norm on stiffness, producing a model that bends easily at joints but captures the rigidity of other parts of the animal. We show that this model can determine the pose of animals such as cats in a wide range of positions. Bounded Distortion forms a core part of the matching between the 3D model and 2D images. I will also show that Bounded Distortion can be used for 2D matching. We use it to find corresponding features in images very robustly, optimizing an L0 distance to maximize the number of matched features while bounding the amount of non-rigid variation between the images. We demonstrate the use of this approach in matching non-rigid objects and in wide-baseline matching of features. I will also give an overview of a method for identifying the parts of animals in images, to produce an automatic correspondence between images of animals. Building on these correspondences, we develop methods for recognizing the species of a bird or the breed of a dog. We use these recognition algorithms to construct electronic field guides. I will describe three field guides that we have published: Birdsnap, Dogsnap, and Leafsnap. Leafsnap identifies the species of trees using shape-based matching to compare images of leaves. Leafsnap has been downloaded by over 1.5 million users and has been used in schools and in biodiversity studies.
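The effect of the L1 penalty on stiffness can be illustrated with its proximal operator, soft-thresholding, which drives small values exactly to zero. This is a generic sketch of L1-induced sparsity (not the paper's actual solver; the toy stiffness values are invented): tetrahedra whose data only weakly demand rigidity end up with zero stiffness (free bending, i.e., joints), while strongly supported rigid parts keep high stiffness.

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of lam * ||.||_1: the closed-form minimizer of
    0.5 * (s - y)**2 + lam * |s|, applied elementwise."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# hypothetical stiffness estimates suggested by the data for five tetrahedra
y = np.array([0.05, 2.0, 0.1, 3.5, 0.02])
s = soft_threshold(y, lam=0.2)
# small values are driven exactly to zero (easy bending at joints),
# large ones survive (rigid parts): [0. , 1.8, 0. , 3.3, 0. ]
```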
This work has been done in collaboration with many University of Maryland students and with groups at Columbia University, the Smithsonian Institution National Museum of Natural History, and the Weizmann Institute.
Organizers: Stephan Streuber
The design of tangent vector fields on discrete surfaces is a basic building block for many geometry processing applications, such as surface remeshing, parameterization and architectural geometric design. Many applications require the design of multiple vector fields (vector sets) coupled in a nontrivial way; for example, sets of more than two vectors are used for generating triangular, quadrilateral and hexagonal meshes. In this talk, a new, polynomial-based representation for general unordered vector sets will be presented. Using this representation, we can efficiently interpolate user-provided vector constraints to design vector set fields. Our interpolation scheme requires neither integer period jumps nor explicit pairings of vectors between adjacent sets on a manifold, as is common in the field design literature. Several extensions to the basic interpolation scheme are possible, which make our representation applicable in various scenarios; in this talk, we will focus on generating vector set fields particularly suited for mesh parameterization and show applications in architectural modeling.
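A minimal sketch of the polynomial idea (my own illustration; the representation in the talk may differ in detail): viewing tangent vectors in a local frame as complex numbers, an unordered set {v_i} can be encoded by the coefficients of the polynomial whose roots are the v_i. The coefficients are symmetric functions of the roots, so the encoding is order-free, and coefficient vectors of adjacent sets can be blended without pairing individual vectors.

```python
import numpy as np

# an unordered set of four tangent vectors as complex numbers
vecs = np.array([1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j])

# encode the set by the coefficients of p(z) = prod(z - v_i);
# coefficients are symmetric in the v_i, so no ordering is needed
coeffs = np.poly(vecs)          # here: z**4 - 1  ->  [1, 0, 0, 0, -1]

# blending coefficient vectors of two sets needs no explicit
# pairing of vectors between the sets
other = np.poly(vecs * np.exp(0.3j))   # the same cross, slightly rotated
blend = 0.5 * (coeffs + other)

recovered = np.roots(blend)     # back to an (unordered) vector set
```

Interpolating in coefficient space is what removes the combinatorial matching problem; the price is that roots must be re-extracted wherever an explicit vector set is needed.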
Organizers: Gerard Pons-Moll
The recent, remarkable success of deep learning has mainly been in discriminative learning, that is, classification and regression. An important factor in this success has been, besides Moore's law, the availability of large labeled datasets. However, it is not clear whether in the future the amount of available labels will grow as fast as the amount of unlabeled data, which is one argument for being interested in unsupervised and semi-supervised learning. There are a number of other reasons why unsupervised learning is still important: data in the life sciences often has many more features than instances (p >> n), probabilities over feature space are useful for planning and control problems, and complex simulator models are the norm in the sciences. In this talk I will discuss deep generative models that can be jointly trained with discriminative models and that facilitate semi-supervised learning. I will discuss recent progress in learning and Bayesian inference in these "variational auto-encoders". I will then extend the deep generative models to the class of simulators for which no tractable likelihood exists and discuss new Bayesian inference procedures to fit these models to data.
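The variational auto-encoder objective can be sketched in a few lines of numpy (a generic illustration, not the speaker's models; the toy `encode`/`decode` lambdas are hypothetical stand-ins for neural networks): the evidence lower bound (ELBO) combines a Monte-Carlo reconstruction term, made differentiable in the encoder parameters via the reparameterization trick z = mu + sigma * eps, with an analytic KL term against a standard-normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, log_var):
    """Analytic KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo(x, encode, decode, n_samples=32):
    """Monte-Carlo ELBO with the reparameterization trick:
    sampling z = mu + sigma * eps keeps z differentiable in (mu, sigma)."""
    mu, log_var = encode(x)
    sigma = np.exp(0.5 * log_var)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        z = mu + sigma * eps                       # reparameterized sample
        x_hat = decode(z)
        total += -0.5 * np.sum((x - x_hat) ** 2)   # Gaussian log-lik., up to const.
    return total / n_samples - gaussian_kl(mu, log_var)

# hypothetical toy encoder/decoder standing in for neural networks
encode = lambda x: (0.5 * x[:2], np.zeros(2))
decode = lambda z: np.concatenate([2.0 * z, np.zeros(1)])
x = np.array([1.0, -1.0, 0.0])
value = elbo(x, encode, decode)
```

Training a real VAE maximizes this quantity over the parameters of `encode` and `decode` by stochastic gradient ascent.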
Organizers: Peter Vincent Gehler
During rest, brain activity is intrinsically synchronized between different brain regions, forming networks of coherent activity. These functional networks (FNs), consisting of multiple regions widely distributed across lobes and hemispheres, appear to be a fundamental theme of neural organization in mammalian brains. Despite hundreds of studies detailing this phenomenon, the genetic and molecular mechanisms supporting these functional networks remain undefined. Previous work has mostly focused on polymorphisms in candidate genes, or used a twin study approach to demonstrate heritability of aspects of resting-state connectivity. The recent availability of high spatial resolution post-mortem brain gene expression datasets, together with several large-scale imaging genetics datasets, which contain joint in-vivo functional brain imaging data and genotype data for several hundred subjects, opens intriguing data analysis avenues. Using novel cross-modal graph-based statistics, we show that functional brain networks defined with resting-state fMRI can be recapitulated using measures of correlated gene expression, and that the relationship is not driven by gross tissue types. The set of genes we identify is significantly enriched for certain types of ion channels and synapse-related genes. We validate results by showing that polymorphisms in this set significantly correlate with alterations of in-vivo resting-state functional connectivity in a group of 259 adolescents. We further validate results on another species by showing that our list of genes is significantly associated with neuronal connectivity in the mouse brain. These results provide convergent, multimodal evidence that resting-state functional networks emerge from the orchestrated activity of dozens of genes linked to ion channel activity and synaptic function. 
Functional brain networks are also known to be perturbed in a variety of neurological and neuropsychological disorders, including Alzheimer's and schizophrenia. Given this link between disease and networks, and the fact that many brain disorders have genetic contributions, it seems that functional brain networks may be an interesting endophenotype for clinical use. We discuss the translational potential of the imaging genomics techniques we developed.
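The basic cross-modal comparison can be sketched generically (this is an illustration, not the authors' graph-based statistic, and the toy matrices are synthetic): correlate the region-pair entries of a functional-connectivity matrix with those of a correlated-gene-expression matrix, using only the upper triangle so each region pair is counted once.

```python
import numpy as np

def cross_modal_correlation(fc, ge):
    """Pearson correlation between two region-by-region similarity
    matrices, computed over the upper-triangular (region-pair) entries."""
    iu = np.triu_indices_from(fc, k=1)
    return np.corrcoef(fc[iu], ge[iu])[0, 1]

# toy symmetric "connectivity" matrices for 5 brain regions
rng = np.random.default_rng(1)
m = rng.standard_normal((5, 5))
fc = (m + m.T) / 2                            # functional connectivity
ge = fc + 0.1 * rng.standard_normal((5, 5))   # noisy echo of fc
r = cross_modal_correlation(fc, ge)
```

In practice the significance of such a statistic must account for the spatial autocorrelation of brain maps, e.g., via permutation schemes, rather than the naive Pearson p-value.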
Information required to plan grasps that is unknown a priori, such as object shape and pose, must be extracted from the environment through sensors. However, sensory measurements are noisy and carry a degree of uncertainty. Furthermore, object parameters relevant to grasp planning, e.g., friction and mass, may not be accurately estimated. In real-world settings, these issues can lead to grasp failures with serious consequences. I will talk about learning approaches, both discriminative and generative, that use real sensory data, e.g., visual and tactile, to assess grasp success and can be used to trigger plan corrections. I will also present a probabilistic approach for learning object models from visual and tactile data through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts whose shape is uncertain.
Organizers: Jeannette Bohg
Human diseases show considerable heterogeneity at the molecular level. Such heterogeneity is central to personalized medicine efforts that seek to exploit molecular data to better understand disease biology and inform clinical decision making. An emerging notion is that diseases and disease subgroups may differ not only at the level of mean molecular abundance, but also with respect to patterns of molecular interplay. I will discuss our ongoing efforts to develop methods to investigate such heterogeneity, with an emphasis on some high-dimensional aspects.
Organizers: Moritz Grosse-Wentrup
Our eyes typically anticipate the next action module in a sequence by targeting the relevant object for the following step. Yet how the final goal, or the way we intend to achieve it, is reflected in the early visual exploration of each object has been less investigated. In a series of experiments, we considered how scan paths on real-world objects are affected by factors such as task, object orientation, familiarity, and low-level saliency, revealing which components can account for fixation target selection during eye-hand coordination. In each experiment, the fixation distribution differed significantly depending on the final task (e.g., lifting vs. opening). Already from the second fixation prior to reaching the object, the eyes targeted the task-relevant regions, and these significantly correlated with salient features such as oriented edges. Familiarity had a significant effect when different tools were used as stimuli, with more fixations concentrating on the active end of unfamiliar tools. Object orientation (upright or inverted) and anticipation of the final comfort state determined the height of the fixations on the objects. Scan-path dynamics thus reveal how action is planned, offering indirect insight into the structuring of complex behaviour and into how task and affordance perception relate to motor control.
Organizers: Jeannette Bohg