Magnetic fields and light can be used to assemble, manipulate, and heat nanoparticles (NPs) and to remotely actuate polymer composites. Simple soft robots will be presented, where incorporation of magnetic and plasmonic NPs makes them responsive to magnetic fields and light. Application of magnetic fields to dispersions of magnetic NPs drives their assembly into chains. Dipolar coupling within the chains is a source of magnetic anisotropy, and chains of magnetic NPs embedded in a polymer matrix can be used to program the response of soft robots, while still using simple architectures. Wavelength-selective photothermal triggering of shape recovery in shape memory polymers with embedded Au nanospheres and nanorods can be used to remotely drive sequential processes. Combining magnetic actuation and photothermal heating enables remote configuration, locking, unlocking, and reconfiguration of soft robots, thus increasing their capabilities. Composite and multifunctional NPs are of interest for expanding the properties and applications of NPs. Silica shells are desirable for facilitating functionalization with silanes and enhancing the stability of NPs. Methods for depositing thin silica shells with controlled morphologies onto Au nanorods and CdSe/CdS core/shell quantum dot nanorods will be presented. Silica deposition can also be accompanied by etching and breakage of the core NPs. Assembly of Fe3O4 NPs onto silica-overcoated Au nanorods allows for magnetic manipulation, while retaining the surface plasmon resonance.
Organizers: Metin Sitti
Robots today rely on rigid components and electric motors based on metal and magnets, making them heavy, unsafe near humans, expensive, and ill-suited for unpredictable environments. Nature, in contrast, makes extensive use of soft materials and has produced organisms that drastically outperform robots in terms of agility, dexterity, and adaptability. The Keplinger Lab aims to fundamentally challenge the current limitations of robotic hardware, using an interdisciplinary approach that synergizes concepts from soft matter physics and chemistry with advanced engineering technologies to introduce robotic materials (material systems that integrate actuation, sensing, and even computation) for a new generation of intelligent systems. This talk gives an overview of the fundamental research questions that inspire current and future research directions. One major theme of research is the development of new classes of actuators, a key component of all robotic systems, that replicate the sweeping success of biological muscle, a masterpiece of evolution featuring astonishing all-around actuation performance, the ability to self-heal after damage, and seamless integration with sensing. A second theme of research is functional polymers with unusual combinations of properties, such as electrical conductivity paired with stretchability, transparency, biocompatibility, and the ability to self-heal from mechanical and electrical damage. A third theme of research is the discovery of new energy-capture principles that can provide power to intelligent autonomous systems, as well as, on larger scales, enable sustainable solutions for the use of waste heat from industrial processes or of untapped sources of renewable energy, such as ocean waves.
The growing scale of image and video datasets in vision makes labeling and annotating such datasets for training recognition models difficult and time-consuming. Further, richer models often require richer labelings of the data, which are typically even more difficult to obtain. In this talk I will focus on two models that make use of different forms of supervision for two different vision tasks.
In the first part of this talk I will focus on object detection. The appearance of an object changes profoundly with pose, camera view, and interactions of the object with other objects in the scene. This makes it challenging to learn detectors based on object-level labels alone (e.g., “car”). We postulate that a richer set of labelings at different levels of granularity, including finer-grained sub-categories that are consistent in appearance and view, as well as higher-order composites (contextual groupings of objects consistent in their spatial layout and appearance), can significantly alleviate these problems. However, obtaining such a rich set of annotations, including annotation of an exponentially growing set of object groupings, is infeasible. To this end, we propose a weakly-supervised framework for object detection in which we discover the subcategories and composites automatically, with only traditional object-level category labels as input.
In the second part of the talk I will focus on a framework for large-scale image-set and video summarization. Starting from the intuition that the characteristics of the two media types are different but complementary, we develop a fast and easily-parallelizable approach for creating not only video summaries but also novel structural summaries of events in the form of storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in the form of a branching directed network. Video summarization is achieved by diversity ranking on the similarity graphs between images and video frames, thereby treating consumer images as essentially a form of weak supervision. The reconstruction of storyline graphs, on the other hand, is formulated as inference of sparse time-varying directed graphs from a set of photo streams, with the assistance of consumer videos.
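To make the diversity-ranking idea concrete, here is a minimal sketch that greedily selects representative items from a similarity graph. The facility-location objective, the toy features, and the function name `summarize` are illustrative assumptions, not the formulation from the talk:

```python
import numpy as np

def summarize(features, k):
    """Greedy diversity ranking on a similarity graph: repeatedly pick the
    item with the largest marginal coverage gain (facility-location style).
    Illustrative sketch only, not the talk's actual formulation."""
    sim = features @ features.T                # similarity graph over items
    selected = []
    covered = np.zeros(len(features))          # best similarity to any pick
    for _ in range(k):
        # coverage of the graph if candidate j were added, minus current coverage
        gains = np.maximum(sim, covered).sum(axis=1) - covered.sum()
        if selected:
            gains[selected] = -np.inf          # never re-pick an item
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return selected
```

With frame descriptors as the rows of `features`, the selected indices cover the graph well while the max-coverage objective implicitly penalizes redundant, near-duplicate picks.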
Time permitting I will also talk about a few other recent project highlights.
Abstract: I will present a general framework for modelling and recovering 3D shape and pose using subdivision surfaces. To demonstrate this framework's generality, I will show how to recover both a personalized rigged hand model from a sequence of depth images and a blend-shape model of dolphin pose from a collection of 2D dolphin images. The core requirement is the formulation of a generative model in which the control vertices of a smooth subdivision surface are parameterized (e.g., with joint angles or blend weights) by a differentiable deformation function. The energy function that falls out of measuring the deviation between the surface and the observed data is also differentiable and can be minimized through standard, albeit tricky, gradient-based non-linear optimization from a reasonable initial guess. The latter can often be obtained using machine learning methods when manual intervention is undesirable. Satisfyingly, the "tricks" involved in the former are elegant and widen the applicability of these methods.
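As a toy analogue of this pipeline, the sketch below fits the control points of a Chaikin subdivision curve (a simple corner-cutting scheme) to observed 2D samples by gradient descent on the squared-deviation energy. The subdivision scheme, the synthetic data, and all names are assumptions for illustration; the talk's models and optimizers differ:

```python
import numpy as np

def chaikin_matrix(n):
    """One round of Chaikin corner-cutting as a linear operator on n control
    points; linearity makes the curve-to-control-point map differentiable."""
    M = np.zeros((2 * (n - 1), n))
    for i in range(n - 1):
        M[2 * i, i], M[2 * i, i + 1] = 0.75, 0.25
        M[2 * i + 1, i], M[2 * i + 1, i + 1] = 0.25, 0.75
    return M

# Compose three subdivision rounds into a single linear map S
n_controls, rounds = 6, 3
S, m = np.eye(n_controls), n_controls
for _ in range(rounds):
    S = chaikin_matrix(m) @ S
    m = 2 * (m - 1)

# Synthetic "observed" samples of a smooth 2D curve
t = np.linspace(0.0, 1.0, m)
y = np.stack([t, np.sin(np.pi * t)], axis=1)

# Gradient descent on the energy E(c) = 0.5 * ||S c - y||^2
c = np.zeros((n_controls, 2))
lr = 1.0 / np.linalg.norm(S, 2) ** 2   # step size 1/L for the quadratic energy
for _ in range(3000):
    c -= lr * S.T @ (S @ c - y)

fit_err = np.linalg.norm(S @ c - y) / np.linalg.norm(y)
```

Because subdivision is linear in the control points, the energy gradient is available in closed form here; with joint angles or blend weights in front of the control points, the same descent works through the chain rule.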
In order to avoid an expensive manual labeling process, or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium-sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in these videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.
Facebook serves close to a billion people every day, who are only able to consume a small subset of the information available to them. In this talk I will give some examples of how machine learning is used to personalize people’s Facebook experience. I will also present some data science experiments with fairly counter-intuitive results.
In this talk I will discuss two related problems in 3D reconstruction: (i) recovering the 3D shape of a temporally varying non-rigid 3D surface given a single video sequence and (ii) reconstructing different instances of the same object class category given a large collection of images from that category. In both cases we extract dense 3D shape information by analysing shape variation -- in one case of the same object instance over time and in the other across different instances of objects that belong to the same class.
First I will discuss the problem of dense capture of 3D non-rigid surfaces from a monocular video sequence. We take a purely model-free approach where no strong assumptions are made about the object we are looking at or the way it deforms. We apply low rank and spatial smoothness priors to obtain dense non-rigid models using a variational approach.
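The effect of a low-rank prior on non-rigid shape trajectories can be seen in a minimal sketch. The synthetic data and the hard truncated-SVD projection are illustrative assumptions; the talk's variational formulation additionally imposes spatial smoothness and handles the full monocular-reconstruction problem:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P, r = 50, 100, 3          # frames, track points, deformation rank

# Non-rigid shape trajectories: each frame mixes r basis shapes (low rank)
basis = rng.standard_normal((r, P))
coeffs = rng.standard_normal((F, r))
shapes = coeffs @ basis
noisy = shapes + 0.1 * rng.standard_normal((F, P))   # noisy measurements

# Low-rank prior as a hard projection: best rank-r approximation (truncated SVD)
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :r] * s[:r]) @ Vt[:r]

err_noisy = np.linalg.norm(noisy - shapes)       # error before the prior
err_lowrank = np.linalg.norm(denoised - shapes)  # error after the prior
```

The rank-r projection discards most of the noise energy while retaining the deformation subspace, which is why low-rank priors are so effective when deformations are well approximated by a few basis shapes.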
Second I will describe our recent approach to populating the Pascal VOC dataset with dense, per-object 3D reconstructions, bootstrapped from class labels, ground truth figure-ground segmentations and a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion, then reconstructs objects shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions.
Stochastic differential equations (SDEs) arise naturally as descriptions of continuous time dynamical systems. My talk addresses the problem of inferring the dynamical state and parameters of such systems from observations taken at discrete times. I will discuss the application of approximate inference methods such as the variational method and expectation propagation and show how higher dimensional systems can be treated by a mean field approximation. In the second part of my talk I will discuss the nonparametric estimation of the drift (i.e. the deterministic part of the ‘force’ which governs the dynamics) as a function of the state using Gaussian process approaches.
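A minimal sketch of the nonparametric-drift idea: an Euler-Maruyama simulation of an Ornstein-Uhlenbeck process, with a simple kernel-regression drift estimate standing in for the Gaussian-process machinery of the talk. All parameters and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 100_000

# Euler-Maruyama simulation of dX = -theta * X dt + sigma dW
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def drift_estimate(x_query, path, dt, bandwidth=0.1):
    """Kernel-weighted regression of increments on state: E[dX/dt | X = x].
    A crude stand-in for the Gaussian-process drift estimators in the talk."""
    dx = np.diff(path) / dt
    w = np.exp(-0.5 * ((path[:-1] - x_query) / bandwidth) ** 2)
    return np.sum(w * dx) / np.sum(w)

est = drift_estimate(0.5, x, dt)   # true drift at x = 0.5 is -theta * 0.5 = -0.5
```

The key observation is that, conditioned on the state, the expected increment per unit time is exactly the drift, so any regression of increments on state (kernel smoothing here, Gaussian processes in the talk) yields a nonparametric drift estimate.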
Even though many challenges remain unsolved, in recent years computer graphics algorithms for rendering photo-realistic imagery have seen tremendous progress. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion, and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes man-months of tedious manual work by animation artists to craft models of moving virtual scenes.
To overcome this limitation, the research community has been developing techniques to capture models of dynamic scenes from real-world examples, for instance methods that rely on footage recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor's face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage of their development. Their application is limited to scenes of moderate complexity in controlled environments, reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions.
In this talk, I will elaborate on some ideas on how to go beyond this limited scope of 4D reconstruction, and show some results from our recent work. For instance, I will show how we can capture more complex scenes with many objects or subjects in close interaction, as well as very challenging scenes of a smaller scale, such as hand motion. The talk will also show how we can capitalize on more sophisticated light transport models and inverse rendering to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors, and with very few cameras. I will also demonstrate how to represent captured scenes such that they can be conveniently modified. If time allows, the talk will cover some of our recent ideas on how to perform advanced edits of videos (e.g., removing or modifying dynamic objects in scenes) by exploiting reconstructed 4D models, as well as robustly found inter- and intra-frame correspondences.
Organizers: Gerard Pons-Moll
The recent theory of compressive sensing predicts that (approximately) sparse vectors can be recovered from vastly incomplete linear measurements using efficient algorithms. This principle has a large number of potential applications in signal and image processing, machine learning, and more. The best measurement matrices known so far in this context are based on randomness. Recovery algorithms include convex optimization approaches (l1-minimization) as well as greedy methods. Gaussian and Bernoulli random matrices are provably optimal in the sense that the smallest possible number of samples is required. Such matrices, however, are of limited practical interest because they lack any structure. In fact, applications demand certain structure, so there is only limited freedom to inject randomness. We present recovery results for various structured random matrices, including random partial Fourier matrices and partial random circulant matrices. We will also review recent extensions of compressive sensing for recovering matrices of low rank from incomplete information via efficient algorithms such as nuclear norm minimization. This principle has recently found applications in phaseless estimation, i.e., in situations where only the magnitude of measurements is available. Another extension considers the recovery of low-rank tensors (multi-dimensional arrays) from incomplete linear information. Several obstacles arise when passing from matrices to tensors, such as the lack of a singular value decomposition that shares all the nice properties of the matrix singular value decomposition. Although only partial theoretical results are available, we discuss algorithmic approaches for this problem.
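A minimal sketch of sparse recovery from Gaussian measurements via iterative soft thresholding (ISTA, a standard proximal-gradient solver for the l1-regularized least-squares problem). The dimensions and regularization weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                 # signal length, measurements, sparsity

# k-sparse ground truth and a Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                       # vastly incomplete linear measurements

# ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(5000):
    g = x - step * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Here 80 measurements suffice to recover a 5-sparse vector of length 200, illustrating the regime the theory predicts; with a random partial Fourier or circulant matrix in place of `A`, the matrix-vector products additionally become fast transforms.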
Organizers: Michel Besserve
A goal in virtual reality is for the user to experience a synthetic environment as if it were real. Engagement with virtual actors is a big part of the sensory context, so getting the people "right" is critical for success. Size, shape, gender, ethnicity, clothing, color, texture, and movement, among other attributes, must be layered and nuanced to provide an accurate encounter between an actor and a user. In this talk, I discuss the development of digital human models and how they may be improved to achieve the high realism needed for successful engagement in a virtual world.
Volumetric 3D modeling has attracted a lot of attention in the past. In this talk I will explain how the standard volumetric formulation can be extended to include semantic information by using a convex multi-label formulation. One of the strengths of our formulation is that it allows us to directly account for the expected surface orientations. I will focus on two applications. First, I will introduce a method that allows for joint volumetric reconstruction and class segmentation. This is achieved by taking into account the expected orientations of object classes such as ground and building. Such a joint approach considerably improves the quality of the geometry while at the same time yielding a consistent semantic segmentation. In the second application I will present a method that allows for the reconstruction of challenging objects such as glass bottles. The main difficulties in reconstructing such objects are the texture-less, transparent, and reflective areas in the input images. We propose to formulate a shape prior based on the locally expected surface orientation to account for the ambiguous input data. Our multi-label approach also directly enables us to segment the object from its surroundings.