As data collection expands, organisations are becoming increasingly aware of the potential gains of combining their data. Analytic and predictive tasks, such as classification, perform more accurately when more features or more data records are available, which is why data providers have an interest in joining their datasets and learning from the combined database. However, this rising interest in federated learning also comes with increasing concern about security and privacy, both from the consumers whose data is used and from the data providers who are liable for protecting it. Securely learning a classifier over joint datasets is a first milestone for private multi-party machine learning, and though some literature exists on the topic, systems providing a better security-utility trade-off and more theoretical guarantees are still needed. An ongoing issue is how to deal with the loss gradients, which often need to be revealed in the clear during training. We show that this constitutes an information leak, and present an alternative optimisation strategy that provides additional security guarantees while limiting the decrease in performance of the obtained classifier. Combining an encryption-based and a noise-based approach, the proposed method enables several parties to jointly train a binary classifier over vertically partitioned datasets while keeping their data private.
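To make the gradient-leak issue concrete, here is a minimal sketch of one noise-based ingredient: per-example gradient clipping plus Gaussian noise for a logistic-regression loss. The function name and the parameters `clip_norm` and `sigma` are illustrative assumptions; this is not the combined encryption-and-noise protocol proposed in the talk.

```python
import numpy as np

def noisy_gradient(w, X, y, clip_norm=1.0, sigma=0.5, rng=None):
    """Logistic-regression gradient with per-example clipping plus Gaussian noise.

    Clipping bounds each record's influence on the update, and the added
    noise masks the exact gradient so it need not be revealed in the clear.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted probabilities
    per_example = (p - y)[:, None] * X        # one gradient row per record
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    per_example *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    g = per_example.mean(axis=0)
    return g + rng.normal(0.0, sigma * clip_norm / len(y), size=g.shape)
```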
Organizers: Sebastian Trimpe
This lecture will show some interesting examples of how a soft body and skin can change your idea of robotic sensing. Soft Robotics is not only about compliance and safety; soft structures change the way objects are categorized through dynamic exploration and enable a robot to learn a sense of slip. Soft Robotics will entirely change how you think about designing sensing and open up a new way to understand human sensing.
Organizers: Ardian Jusufi
The FLEXMIN haptic robotic system is a single-port tele-manipulator for robotic surgery in the small pelvis. Using a transanal approach, it allows bi-manual tasks such as grasping, monopolar cutting, and suturing within a footprint of Ø 160 mm × 240 mm. Forces of up to 5 N can be applied easily in all directions. In addition to providing low-latency, highly dynamic control over its movements, high-fidelity haptic feedback was realised using built-in force sensors, lightweight and friction-optimised kinematics, as well as dedicated parallel-kinematics input devices. After a brief description of the system and some of its key aspects, first evaluation results will be presented. In the second half of the talk, the Institute of Medical Device Technology will be presented. The institute was founded in July 2017 and has since started a number of projects in the fields of biomedical actuation, medical systems and robotics, and advanced light microscopy. To illustrate this, a few snapshots of ongoing work will be presented that serve as condensation nuclei for future projects.
Organizers: Katherine Kuchenbecker
The increasing availability of online resources and the widespread practice of storing data over the internet raise the problem of their accessibility for visually impaired people. A translation from the visual domain to the available modalities is therefore necessary to study whether such access is possible at all. However, the translation of information from vision to touch is necessarily limited, owing to the superiority of vision during the acquisition process. Yet compromises exist, as visual information can be simplified and sketched: a picture can become a map, an object a geometrical shape. Under some circumstances, and with a reasonable loss of generality, touch can substitute for vision. In particular, when touch substitutes for vision, data can be differentiated by adding a further dimension to the tactile feedback, i.e. extending tactile feedback to three dimensions instead of two. This mode was chosen because it mimics our natural way of following object profiles with our fingers. Specifically, regardless of whether a hand lying on an object is moving or not, our tactile and proprioceptive systems are both stimulated and tell us something about which object we are manipulating and what its shape and size might be. The goal of this talk is to describe how tactile stimulation can be exploited to render digital information non-visually, so that cognitive maps associated with this information can be efficiently elicited in visually impaired persons. In particular, the focus is on delivering geometrical information in a learning scenario. Moreover, completely blind interaction with a virtual environment in a learning scenario has been little investigated, because visually impaired subjects are often passive agents of exercises with fixed environmental constraints. For this reason, during the talk I will provide my personal answer to the question: can visually impaired people manipulate dynamic virtual content through touch? This process is much more challenging than merely exploring and learning virtual content, but at the same time it leads to a more conscious and dynamic creation of spatial understanding of an environment during tactile exploration.
Organizers: Katherine Kuchenbecker
While robots are already doing a wonderful job as factory workhorses, they are now gradually appearing in our daily environments, offering their services as autonomous cars, delivery drones, helpers in search and rescue, and much more. This talk will present some recent highlights in the field of autonomous mobile robotics research and touch on some of the great challenges and opportunities. Legged robots are able to overcome the limitations of wheeled or tracked ground vehicles. ETH’s electrically powered legged quadruped robots are designed for high agility, efficiency and robustness in rough terrain. This is realized through optimal exploitation of the natural dynamics and series-elastic actuation. For fast inspection of complex environments, flying robots are probably the most efficient and versatile devices. However, the limited payload and computing power of drones makes autonomous navigation quite challenging. Thanks to our custom-designed visual-inertial sensor, real-time on-board localization, mapping and planning have become feasible, enabling our multi-copters and solar-powered fixed-wing drones to carry out advanced rescue and inspection tasks or support precision farming, even in GPS-denied environments.
In this talk I will present two lines of research, both applied to the problem of stereo matching. The first line of research tries to make progress on the very traditional problem of stereo matching. In BMVC 2011 we presented our PatchMatch Stereo work, which achieves surprisingly good results with a simple energy function consisting of unary terms only. As the optimization engine we used the PatchMatch method, which was originally designed for image editing purposes. In BMVC 2012 we extended this work by adding the standard pairwise smoothness terms to the energy function. The main contribution of this work is the optimization technique, which we call PatchMatch Belief Propagation (PMBP). It is a special case of max-product Particle Belief Propagation, with a new sampling schema motivated by PatchMatch.
The method may be suitable for many energy minimization problems in computer vision that have a non-convex, continuous and potentially high-dimensional label space. The second line of research combines the problem of stereo matching with the problem of object extraction in the scene. We show that both tasks can be solved jointly, boosting the performance of each individual task. In particular, stereo matching improves since objects have to obey physical properties, e.g. they are not allowed to fly in the air. Object extraction improves, as expected, since we have additional information about depth in the scene.
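As a rough illustration of the PatchMatch idea this work builds on, here is a minimal integer-disparity variant with spatial propagation and random search over an SSD patch cost. The actual PatchMatch Stereo method uses slanted support planes and subpixel disparities; all names and parameters here are illustrative.

```python
import numpy as np

def patchmatch_stereo(left, right, max_disp=32, patch=3, iters=3, rng=None):
    """Minimal PatchMatch for integer disparities (unary SSD cost only)."""
    rng = np.random.default_rng(0) if rng is None else rng
    H, W = left.shape
    r = patch // 2
    d = rng.integers(0, max_disp + 1, size=(H, W))  # random initialization

    def cost(y, x, disp):
        xs = x - disp
        if xs - r < 0 or x + r >= W or y - r < 0 or y + r >= H:
            return np.inf
        a = left[y - r:y + r + 1, x - r:x + r + 1]
        b = right[y - r:y + r + 1, xs - r:xs + r + 1]
        return float(((a - b) ** 2).sum())

    for it in range(iters):
        # alternate scan order so good guesses propagate in both directions
        step = 1 if it % 2 == 0 else -1
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        for y in ys:
            for x in (range(W) if step == 1 else range(W - 1, -1, -1)):
                best, best_c = int(d[y, x]), cost(y, x, int(d[y, x]))
                candidates = [rng.integers(0, max_disp + 1)]  # random search
                if 0 <= x - step < W:
                    candidates.append(d[y, x - step])   # horizontal propagation
                if 0 <= y - step < H:
                    candidates.append(d[y - step, x])   # vertical propagation
                for cand in candidates:
                    c = cost(y, x, int(cand))
                    if c < best_c:
                        best, best_c = int(cand), c
                d[y, x] = best
    return d
```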
Three-dimensional object shape is commonly represented in terms of deformations of a triangular mesh from an exemplar shape. In particular, statistical generative models of human shape deformation are widely used in computer vision, graphics, ergonomics, and anthropometry. Existing statistical models, however, are based on a Euclidean representation of shape deformations. In contrast, we argue that shape has a manifold structure: For example, averaging the shape deformations for two people does not necessarily yield a meaningful shape deformation, nor does the Euclidean difference of these two deformations provide a meaningful measure of shape dissimilarity. Consequently, we define a novel manifold for shape representation, with emphasis on body shapes, using a new Lie group of deformations. This has several advantages.
First, we define triangle deformations exactly, removing non-physical deformations and redundant degrees of freedom common to previous methods. Second, the Riemannian structure of Lie Bodies enables a more meaningful definition of body shape similarity by measuring distance between bodies on the manifold of body shape deformations. Third, the group structure allows the valid composition of deformations.
This is important for models that factor body shape deformations into multiple causes or represent shape as a linear combination of basis shapes. Similarly, interpolation between two mesh deformations results in a meaningful third deformation. Finally, body shape variation is modeled using statistics on manifolds. Instead of modeling Euclidean shape variation with Principal Component Analysis, we capture shape variation on the manifold using Principal Geodesic Analysis. Our experiments show consistent visual and quantitative advantages of Lie Bodies over traditional Euclidean models of shape deformation, and our representation can be easily incorporated into existing methods. This project is part of a larger effort that brings together statistics and geometry to model statistics on manifolds.
Our research on manifold-valued statistics addresses the problem of modeling statistics in curved feature spaces. We try to find the geometrically most natural representations that respect the constraints, e.g. by modeling the data as belonging to a Lie group or a Riemannian manifold. We take a geometric approach as this keeps the focus on good distance measures, which are essential for good statistics. I will also present some recent unpublished results related to statistics on manifolds with broad applications.
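As a minimal illustration of statistics on a curved space (not the Lie Bodies deformation group itself), the sketch below computes a Karcher mean on the rotation group SO(3): averaging in the tangent space via the matrix logarithm and mapping back with the exponential, rather than averaging matrix entries in Euclidean space.

```python
import numpy as np

def so3_exp(w):
    """Matrix exponential of a rotation vector (Rodrigues' formula)."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    k = w / t
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def so3_log(R):
    """Rotation vector of a rotation matrix (inverse of so3_exp)."""
    t = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if t < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return t * w / (2 * np.sin(t))

def karcher_mean(Rs, iters=20):
    """Riemannian (Karcher) mean: iterate tangent-space averaging at the
    current estimate instead of averaging matrices in R^9."""
    M = Rs[0]
    for _ in range(iters):
        v = np.mean([so3_log(M.T @ R) for R in Rs], axis=0)
        if np.linalg.norm(v) < 1e-10:
            break
        M = M @ so3_exp(v)
    return M
```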
We first address the problem of large-scale image classification. We present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual-words approach for any given vector dimension. We also show and interpret the importance of an appropriate vector normalization.
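A minimal sketch of the aggregation step, assuming a fitted diagonal-covariance GMM from scikit-learn and keeping only the gradient with respect to the means; the power and L2 normalization at the end is the kind of vector normalization referred to above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Mean-gradient Fisher vector with power and L2 normalization.

    descriptors: (N, D) array of local features for one image;
    gmm: a fitted GaussianMixture with covariance_type='diag'.
    """
    q = gmm.predict_proba(descriptors)            # (N, K) soft assignments
    sigma = np.sqrt(gmm.covariances_)             # (K, D) std deviations
    fv = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / sigma[k]
        g = (q[:, k:k + 1] * diff).sum(axis=0)
        g /= (len(descriptors) * np.sqrt(gmm.weights_[k]))
        fv.append(g)
    fv = np.concatenate(fv)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))        # power normalization
    return fv / max(np.linalg.norm(fv), 1e-12)    # L2 normalization

# usage: gmm = GaussianMixture(64, covariance_type='diag').fit(train_descriptors)
```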
Furthermore, we discuss how to learn with stochastic gradient descent given a large number of classes and images, and show results on ImageNet10k. We then present a weakly supervised approach for learning human actions, modeled as interactions between humans and objects.
Our approach is human-centric: we first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated (only) with the action label.
Finally, we present work on learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos.
The grand goal of Computer Vision is to generate an automatic description of an image based on its visual content. Category-level object detection is an important building block towards such a capability. The first part of this talk deals with three established object detection techniques in Computer Vision, their shortcomings, and how they can be improved. i) Hough Voting methods efficiently handle the high complexity of multi-scale, category-level object detection in cluttered scenes.
However, the primary weakness of this approach is that mutually dependent local observations independently vote for intrinsically global object properties such as object scale. We model the feature dependencies by presenting an objective function that combines various intimately related problems in Hough Voting. ii) Shape is a highly prominent characteristic of objects that human vision utilizes for detecting objects. However, shape poses significant challenges for object detection in cluttered scenes: object form is an emergent property that cannot be perceived locally but becomes available only once the whole object has been detected. Thus we address the detection of objects and the assembly of their shape simultaneously, in a Max-Margin Multiple Instance Learning framework, while avoiding fragile bottom-up grouping in query images altogether. iii) Chamfer matching is a widely used technique for detecting objects because of its speed. However, it treats objects as a mere sum of the distance transformation of all their contour pixels, and spurious matches in background clutter are a major problem. We address these two issues by a) applying a discriminative approach to the distance-transformation computation in chamfer matching and b) estimating the accidentalness of a foreground template match with a small dictionary of simple background contours.
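For reference, here is a minimal sketch of the classical chamfer cost that items a) and b) improve upon; the discriminative distance transform and the background-contour dictionary are not shown, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edge_map, template_pts, offset):
    """Classical chamfer cost: average distance from each (shifted) template
    contour point to the nearest image edge. Lower is better.

    edge_map: boolean (H, W) edge image; template_pts: (N, 2) integer
    (row, col) contour coordinates; offset: (dy, dx) template placement.
    """
    dt = distance_transform_edt(~edge_map)   # distance to the nearest edge pixel
    pts = (template_pts + np.asarray(offset)).astype(int)
    H, W = edge_map.shape
    valid = (pts[:, 0] >= 0) & (pts[:, 0] < H) & (pts[:, 1] >= 0) & (pts[:, 1] < W)
    if not valid.any():
        return np.inf
    return dt[pts[valid, 0], pts[valid, 1]].mean()
```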
The second part of the talk explores the question: what insights can automatic object detection and intra-category object relationships bring to art historians? It turns out that techniques from Computer Vision have helped art historians discover different artistic workshops within an Upper German manuscript, understand the variations of art within a particular school of design, and study transitions across artistic styles through a 1-d ordering of objects. Obtaining such insights manually is a tedious task, and Computer Vision has made the job of art historians easier.
1. Pradeep Yarlagadda and Björn Ommer, From Meaningful Contours to Discriminative Object Shape, ECCV 2012.
2. Pradeep Yarlagadda, Angela Eigenstetter and Björn Ommer, Learning Discriminative Chamfer Regularization, BMVC 2012.
3. Pradeep Yarlagadda, Antonio Monroy and Björn Ommer, Voting by Grouping Dependent Parts, ECCV 2010.
4. Pradeep Yarlagadda, Antonio Monroy, Bernd Carque and Björn Ommer, Recognition and Analysis of Objects in Medieval Images, ACCV (e-heritage) 2010.
5. Pradeep Yarlagadda, Antonio Monroy, Bernd Carque and Björn Ommer, Top-down Analysis of Low-level Object Relatedness Leading to Semantic Understanding of Medieval Image Collections, SPIE Computer Vision and Image Analysis of Art, 2010.
Navigating a car safely through complex environments is considered a relatively easy task for humans. Computer algorithms, however, come nowhere near human performance and often rely on 3D laser scanners or detailed maps. The reason for this is that the level and accuracy of current computer vision and scene understanding algorithms are still far from those of a human being. In this talk I will argue that pushing these limits requires solving a set of core computer vision problems, ranging from low-level tasks (stereo, optical flow) to high-level problems (object detection, 3D scene understanding).
First, I will introduce the KITTI datasets and benchmarks, which provide accurate ground truth for evaluating stereo, optical flow, SLAM and 3D object detection/tracking on realistic video sequences. Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when moved from the laboratory to the real world.
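A minimal sketch of the kind of metric such benchmarks use, assuming sparse ground truth (invalid pixels encoded as 0) and the common 3-pixel threshold; the exact KITTI protocol differs in detail.

```python
import numpy as np

def disparity_error_rate(d_est, d_gt, tau=3.0):
    """Fraction of pixels whose disparity error exceeds tau, evaluated only
    where ground truth is available (d_gt > 0), as in KITTI-style benchmarks.
    """
    valid = d_gt > 0                       # sparse (e.g. LiDAR-based) ground truth
    err = np.abs(d_est[valid] - d_gt[valid])
    return float((err > tau).mean())
```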
Second, I will propose a novel generative model for 3D scene understanding that is able to reason jointly about the scene layout (the topology and geometry of streets) as well as the location and orientation of objects. Using context from this model, the performance of state-of-the-art object detectors in estimating object orientation can be significantly increased.
Finally, I will give an outlook on how prior information in the form of large-scale community-driven maps (OpenStreetMap) can be used in the context of 3D scene understanding.
Markov random fields (MRFs) have found widespread use as models of natural image and scene statistics. Despite progress in modeling image properties beyond gradient statistics with high-order cliques, and in learning image models from example data, existing MRFs exhibit only a limited ability to actually capture natural image statistics.
In this talk I will present recent work that investigates this limitation of previous filter-based MRF models, including Fields of Experts (FoEs). We found that these limitations are due to inadequacies in the learning procedure and suggest various modifications to address them. These "secrets of FoE learning" allow training more suitable potential functions, whose shape approaches that of a Dirac delta function, as well as models with more and larger filters.
Our experiments not only indicate a substantial improvement of the models' ability to capture relevant statistical properties of natural images, but also demonstrate a significant performance increase in a denoising application to levels previously unattained by generative approaches. This is joint work with Qi Gao.
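A minimal sketch of the generative use of such a model: gradient-descent denoising under a filter-based energy with Student-t potentials. The two hand-set derivative filters stand in for learned FoE filters, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def foe_denoise(noisy, filters, lam=0.2, alpha=1.0, step=0.05, iters=200):
    """Gradient descent on a filter-based MRF (FoE-like) denoising energy:
        E(x) = lam/2 * ||x - y||^2 + sum_i sum_p rho((f_i * x)_p),
    with Student-t potentials rho(z) = alpha * log(1 + z^2 / 2).
    """
    x = noisy.copy()
    for _ in range(iters):
        grad = lam * (x - noisy)
        for f in filters:
            z = convolve2d(x, f, mode='same', boundary='symm')
            dz = alpha * z / (1.0 + 0.5 * z ** 2)      # rho'(z)
            # adjoint of convolution: convolve with the flipped filter
            # (exact up to boundary handling)
            grad += convolve2d(dz, f[::-1, ::-1], mode='same', boundary='symm')
        x -= step * grad
    return x

# two hand-set derivative filters standing in for learned FoE filters
filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
```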
The great majority of object analysis methods are based on visual object properties: objects are categorized according to how they appear in images. Visual appearance is measured in terms of image features (e.g., SIFTs) extracted from images or video. However, besides appearance, objects also have many properties that can be of interest, e.g. for a robot that wants to employ them in activities: temperature, weight, surface softness, and also the functionalities or affordances of the object, i.e. how it is intended to be used. One example, recently addressed in the vision community, is chairs. Chairs can look vastly different, but they have one thing in common: they afford sitting. At the Computer Vision and Active Perception Lab at KTH, we study the problem of inferring non-observable object properties in a number of ways. In this presentation I will describe some of this work.
Shape analysis and modeling of 2D and 3D objects has important applications in many branches of science and engineering. The general goals in shape analysis include: derivation of efficient shape metrics, computation of shape templates, representation of dominant shape variability in a shape class, and development of probability models that characterize shape variation within and across classes. While past work on shape analysis is dominated by point representations -- finite sets of ordered or triangulated points on objects' boundaries -- the emphasis has lately shifted to continuous formulations.
The shape analysis of parametrized curves and surfaces introduces an additional shape invariance, the re-parametrization group, in addition to the standard invariances of rigid motions and global scaling. Treating re-parametrization as a tool for registering points across objects, we incorporate this group in shape analysis in the same way orientation is handled in Procrustes analysis. For shape analysis of parametrized curves, I will describe an elastic Riemannian metric and a mathematical representation, called the square-root velocity function (SRVF), that allows optimal registration and analysis using simple tools.
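A minimal sketch of the SRVF map and the resulting L2 shape distance, assuming discretely sampled curves; the optimal registration over re-parametrizations and rotations described above is omitted.

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function q(t) = f'(t) / sqrt(||f'(t)||).

    curve: (T, d) array of points sampled along a parametrized curve.
    Under this representation the elastic metric becomes the plain L2
    metric, so distances and geodesics reduce to simple computations.
    """
    v = np.gradient(curve, axis=0)                    # finite-difference f'(t)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))

def elastic_distance(c1, c2):
    """L2 distance between SRVFs (registration/rotation alignment omitted)."""
    q1, q2 = srvf(c1), srvf(c2)
    return float(np.sqrt(np.sum((q1 - q2) ** 2) / len(q1)))
```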
This framework provides proper metrics, geodesics, and sample statistics of shapes. These sample statistics are further useful for statistical modeling of shapes in different shape classes. I will then describe some preliminary extensions of these ideas to the shape analysis of parametrized surfaces, and I will demonstrate them using applications from medical image analysis, protein structure analysis, 3D face recognition, and human activity recognition in videos.
We can modify the optical properties of surfaces by “coating” them with a micron-thin membrane supported by an elastomeric gel. Using an opaque, matte membrane, we can make reflected-light micrographs with a distinctive SEM-like appearance. These have modest magnification (e.g., 50X), but they reveal fine surface details not normally seen with an optical microscope.
The system, which we call “GelSight,” removes optical complexities such as specular reflection, albedo, and subsurface scattering, and isolates the shading information that signals 3D shape. One can then see the topography of optically challenging subjects like sandpaper, machined metal, and living human skin. In addition, one can capture 3D surface geometry through photometric stereo. This leads to a non-destructive contact-based optical profilometer that is simple, fast, and compact.
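A minimal sketch of the photometric-stereo step, assuming a Lambertian surface, K images under known distant lights, and no shadows; GelSight's calibrated setup is considerably more refined than this.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Per-pixel Lambertian photometric stereo.

    images: (K, H, W) intensities under K known light directions;
    lights: (K, 3) unit light vectors. Solves I = L @ g by least squares,
    where g is the albedo-scaled surface normal at each pixel.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                       # (K, H*W)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, H*W) normal * albedo
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```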
Humans can easily see 3D shape from a single 2D image, exploiting multiple kinds of information. This has given rise to multiple subfields (in both human vision and computer vision) devoted to the study of shape-from-shading, shape-from-texture, shape-from-contours, and so on.
The proposed algorithms for each type of shape-from-x remain specialized and fragile (in contrast with the flexibility and robustness of human vision). Recent work in graphics and psychophysics has demonstrated the importance of local orientation structure in conveying 3D shape. This information is fairly stable and reliable, even when a given shape is rendered in multiple styles (including non-photorealistic styles such as line drawings).
We have developed an exemplar-based system (which we call Shape Collage) that learns to associate image patches with corresponding 3D shape patches. We train it with synthetic images of “blobby” objects rendered in various ways, including solid texture, Phong shading, and line drawings. Given a new image, it finds the best candidate scene patches and assembles them into a coherent interpretation of the object shape.
Our system is the first that can retrieve the shape of naturalistic objects from line drawings. The same system, without modification, works for shape-from-texture and can also get shape from shading, even with non-Lambertian surfaces. Thus disparate types of image information can be processed by a single mechanism to extract 3D shape. Collaborative work with Forrester Cole, Phillip Isola, Fredo Durand, and William Freeman.
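A minimal sketch of the exemplar-lookup core of such a system: nearest-neighbour retrieval of shape patches from image patches. The coherent assembly of candidate patches into a full interpretation, which Shape Collage performs, is not shown, and all names are illustrative.

```python
import numpy as np

def retrieve_shape_patches(query_patches, exemplar_patches, exemplar_shapes):
    """Exemplar lookup: for each query image patch, return the shape patch
    of its nearest exemplar image patch (plain L2 nearest neighbour).

    query_patches:    (Q, P) flattened image patches from the new image
    exemplar_patches: (E, P) flattened image patches from training renderings
    exemplar_shapes:  (E, S) flattened shape (e.g. normal-map) patches
    """
    # squared L2 distances between all query/exemplar pairs
    d2 = ((query_patches[:, None, :] - exemplar_patches[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)        # index of best exemplar per query patch
    return exemplar_shapes[nearest]
```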