2017


Avoiding Discrimination through Causal Reasoning

Kilbertus, N., Rojas-Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., Schölkopf, B.

Advances in Neural Information Processing Systems 30, pages: 656-666, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

Locatello, F., Tschannen, M., Rätsch, G., Jaggi, M.

Advances in Neural Information Processing Systems 30, pages: 773-784, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

AdaGAN: Boosting Generative Models

Tolstikhin, I., Gelly, S., Bousquet, O., Simon-Gabriel, C. J., Schölkopf, B.

Advances in Neural Information Processing Systems 30, pages: 5430-5439, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

arXiv link (url) [BibTex]

The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings from the conference "Neural Information Processing Systems 2017", (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., Advances in Neural Information Processing Systems 30 (NIPS), December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
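The eigenvalue conditions named in the abstract can be illustrated on a toy bilinear game (a minimal sketch for intuition, not the paper's experiments): for min_x max_y xy, simultaneous gradient descent/ascent follows the vector field v(x, y) = (-y, x), whose Jacobian has purely imaginary eigenvalues, so plain Euler updates spiral away from the equilibrium instead of converging.

```python
import numpy as np

# Toy two-player game: min_x max_y f(x, y) = x * y.
# Simultaneous gradient descent/ascent follows
# v(x, y) = (-df/dx, +df/dy) = (-y, x).
def v(z):
    x, y = z
    return np.array([-y, x])

# The Jacobian of v is constant: [[0, -1], [1, 0]].
J = np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.linalg.eigvals(J))  # purely imaginary: +1j and -1j (zero real part)

# Explicit Euler steps z_{k+1} = z_k + h * v(z_k) multiply the norm by
# sqrt(1 + h^2) > 1 each step, so the iterates spiral outward.
z = np.array([1.0, 0.0])
h = 0.1
for _ in range(100):
    z = z + h * v(z)
print(np.linalg.norm(z))  # > 1: no convergence to the equilibrium (0, 0)
```

This is exactly the failure mode the abstract attributes to eigenvalues with zero real part; the paper's proposed algorithm modifies the update to obtain better convergence properties.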

avg

pdf Project Page [BibTex]

Safe Adaptive Importance Sampling

Stich, S. U., Raj, A., Jaggi, M.

Advances in Neural Information Processing Systems 30, pages: 4384-4394, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets

Hausman, K., Chebotar, Y., Schaal, S., Sukhatme, G., Lim, J.

In Proceedings from the conference "Neural Information Processing Systems 2017", (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., Advances in Neural Information Processing Systems 30 (NIPS), December 2017 (inproceedings)

am

pdf video [BibTex]

ConvWave: Searching for Gravitational Waves with Fully Convolutional Neural Nets

Gebhard, T., Kilbertus, N., Parascandolo, G., Harry, I., Schölkopf, B.

Workshop on Deep Learning for Physical Sciences (DLPS) at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

From Parity to Preference-based Notions of Fairness in Classification

Zafar, M. B., Valera, I., Gomez Rodriguez, M., Gummadi, K., Weller, A.

Advances in Neural Information Processing Systems 30, pages: 229-239, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., 31st Annual Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

Active colloidal propulsion over a crystalline surface

Choudhury, U., Straube, A., Fischer, P., Gibbs, J., Höfling, F.

New Journal of Physics, 19, pages: 125010, December 2017 (article)

Abstract
We study both experimentally and theoretically the dynamics of chemically self-propelled Janus colloids moving atop a two-dimensional crystalline surface. The surface is a hexagonally close-packed monolayer of colloidal particles of the same size as the mobile one. The dynamics of the self-propelled colloid reflects the competition between hindered diffusion due to the periodic surface and enhanced diffusion due to active motion. Which contribution dominates depends on the propulsion strength, which can be systematically tuned by changing the concentration of a chemical fuel. The mean-square displacements obtained from the experiment exhibit enhanced diffusion at long lag times. Our experimental data are consistent with a Langevin model for the effectively two-dimensional translational motion of an active Brownian particle in a periodic potential, combining the confining effects of gravity and the crystalline surface with the free rotational diffusion of the colloid. Approximate analytical predictions are made for the mean-square displacement describing the crossover from free Brownian motion at short times to active diffusion at long times. The results are in semi-quantitative agreement with numerical results of a refined Langevin model that treats translational and rotational degrees of freedom on the same footing.

pf

link (url) DOI [BibTex]

On the Design of LQR Kernels for Efficient Controller Learning

Marco, A., Hennig, P., Schaal, S., Trimpe, S.

Proceedings of the 56th IEEE Annual Conference on Decision and Control (CDC), pages: 5193-5200, IEEE, IEEE Conference on Decision and Control, December 2017 (conference)

Abstract
Finding optimal feedback controllers for nonlinear dynamic systems from data is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful framework for direct controller tuning from experimental trials. For selecting the next query point and finding the global optimum, BO relies on a probabilistic description of the latent objective function, typically a Gaussian process (GP). As is shown herein, GPs with a common kernel choice can, however, lead to poor learning outcomes on standard quadratic control problems. For a first-order system, we construct two kernels that specifically leverage the structure of the well-known Linear Quadratic Regulator (LQR), yet retain the flexibility of Bayesian nonparametric learning. Simulations of uncertain linear and nonlinear systems demonstrate that the LQR kernels yield superior learning performance.

am ics pn

arXiv PDF On the Design of LQR Kernels for Efficient Controller Learning - CDC presentation DOI Project Page [BibTex]

Swimming Back and Forth Using Planar Flagellar Propulsion at Low Reynolds Numbers

Khalil, I. S. M., Tabak, A. F., Hamed, Y., Mitwally, M. E., Tawakol, M., Klingner, A., Sitti, M.

Advanced Science, 5(2):1700461, December 2017 (article)

Abstract
Peritrichously flagellated Escherichia coli swim back and forth by wrapping their flagella together in a helical bundle. However, other monotrichous bacteria cannot swim back and forth with a single flagellum and planar wave propagation. Quantifying this observation, a magnetically driven soft two-tailed microrobot capable of reversing its swimming direction without making a U-turn trajectory or actively modifying the direction of wave propagation is designed and developed. The microrobot contains magnetic microparticles within the polymer matrix of its head and consists of two collinear, unequal, and opposite ultrathin tails. It is driven and steered using a uniform magnetic field along the direction of motion with a sinusoidally varying orthogonal component. Distinct reversal frequencies that enable selective and independent excitation of the first or the second tail of the microrobot based on their tail length ratio are found. While the first tail provides a propulsive force below one of the reversal frequencies, the second is almost passive, and the net propulsive force achieves flagellated motion along one direction. On the other hand, the second tail achieves flagellated propulsion along the opposite direction above the reversal frequency.

pi

link (url) DOI [BibTex]

A deep learning based fusion of RGB camera information and magnetic localization information for endoscopic capsule robots

Turan, M., Shabbir, J., Araujo, H., Konukoglu, E., Sitti, M.

International Journal of Intelligent Robotics and Applications, 1(4):442-450, December 2017 (article)

Abstract
A reliable, real-time localization functionality is crucial for actively controlled capsule endoscopy robots, which are an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we extend the success of deep learning approaches from various research fields to the problem of sensor fusion for endoscopic capsule robots. We propose a multi-sensor fusion based localization approach which combines endoscopic camera information and magnetic sensor based localization information. The results obtained on a real pig stomach dataset show that our method achieves sub-millimeter precision for both translational and rotational movements.

pi

link (url) DOI [BibTex]

Discriminative k-shot learning using probabilistic models

Bauer*, M., Rojas-Carulla*, M., Świątkowski, J. B., Schölkopf, B., Turner, R. E.

Second Workshop on Bayesian Deep Learning at the 31st Conference on Neural Information Processing Systems, December 2017, *equal contribution (conference)

ei

link (url) [BibTex]

Evaluation of high-fidelity simulation as a training tool in transoral robotic surgery

Bur, A. M., Gomez, E. D., Newman, J. G., Weinstein, G. S., O’Malley Jr., B. W., Rassekh, C. H., Kuchenbecker, K. J.

Laryngoscope, 127(12):2790-2795, December 2017 (article)

hi

DOI [BibTex]

Learning Robust Video Synchronization without Annotations

Wieschollek, P., Freeman, I., Lensch, H. P. A.

16th IEEE International Conference on Machine Learning and Applications (ICMLA), pages: 92-100, (Editors: X. Chen, B. Luo, F. Luo, V. Palade, and M. A. Wani), IEEE, December 2017 (conference)

ei

DOI [BibTex]

Optimizing human learning

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., Gomez Rodriguez, M.

Workshop on Teaching Machines, Robots, and Humans at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

Interactive Perception: Leveraging Action in Perception and Perception in Action

Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S., Sukhatme, G.

IEEE Transactions on Robotics, 33, pages: 1273-1291, December 2017 (article)

Abstract
Recent approaches in robotics follow the insight that perception is facilitated by interactivity with the environment. These approaches are subsumed under the term of Interactive Perception (IP). We argue that IP provides the following benefits: (i) any type of forceful interaction with the environment creates a new type of informative sensory signal that would otherwise not be present and (ii) any prior knowledge about the nature of the interaction supports the interpretation of the signal. This is facilitated by knowledge of the regularity in the combined space of sensory information and action parameters. The goal of this survey is to postulate this as a principle and collect evidence in support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of Interactive Perception. We close this survey by discussing the remaining open questions. Thereby, we hope to define a field and inspire future work.

am

arXiv DOI Project Page [BibTex]

Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Workshop on Prioritising Online Content at the 31st Conference on Neural Information Processing Systems, December 2017 (conference)

ei

link (url) [BibTex]

Biohybrid actuators for robotics: A review of devices actuated by living cells

Ricotti, L., Trimmer, B., Feinberg, A. W., Raman, R., Parker, K. K., Bashir, R., Sitti, M., Martel, S., Dario, P., Menciassi, A.

Science Robotics, 2(12), November 2017 (article)

Abstract
Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.

pi

link (url) DOI [BibTex]

Wireless Acoustic-Surface Actuators for Miniaturized Endoscopes

Qiu, T., Adams, F., Palagi, S., Melde, K., Mark, A. G., Wetterauer, U., Miernik, A., Fischer, P.

ACS Applied Materials & Interfaces, 9(49):42536 - 42543, November 2017 (article)

Abstract
Endoscopy enables minimally invasive procedures in many medical fields, such as urology. However, current endoscopes are normally cable-driven, which limits their dexterity and makes them hard to miniaturize. Indeed, current urological endoscopes have an outer diameter of about 3 mm and still only possess one bending degree of freedom. In this paper, we report a novel wireless actuation mechanism that increases the dexterity and that permits the miniaturization of a urological endoscope. The novel actuator consists of thin active surfaces that can be readily attached to any device and are wirelessly powered by ultrasound. The surfaces consist of two-dimensional arrays of micro-bubbles, which oscillate under ultrasound excitation and thereby generate an acoustic streaming force. Bubbles of different sizes are addressed by their unique resonance frequency, thus multiple degrees of freedom can readily be incorporated. Two active miniaturized devices (with a side length of around 1 mm) are demonstrated: a miniaturized mechanical arm that realizes two degrees of freedom, and a flexible endoscope prototype equipped with a camera at the tip. With the flexible endoscope, an active endoscopic examination is successfully performed in a rabbit bladder. These results show the potential medical applicability of surface actuators wirelessly powered by ultrasound penetrating through biological tissues.

pf

link (url) DOI [BibTex]

Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals

Tanneberg, D., Peters, J., Rueckert, E.

Proceedings of the 1st Annual Conference on Robot Learning (CoRL), pages: 167-174, Proceedings of Machine Learning Research, (Editors: Sergey Levine, Vincent Vanhoucke and Ken Goldberg), PMLR, November 2017 (conference)

ei

link (url) [BibTex]

Behind Distribution Shift: Mining Driving Forces of Changes and Causal Arrows

Huang, B., Zhang, K., Zhang, J., Sanchez-Romero, R., Glymour, C., Schölkopf, B.

IEEE 17th International Conference on Data Mining (ICDM), pages: 913-918, (Editors: Vijay Raghavan, Srinivas Aluru, George Karypis, Lucio Miele and Xindong Wu), November 2017 (conference)

ei

DOI [BibTex]

Optimizing Long-term Predictions for Model-based Policy Search

Doerr, A., Daniel, C., Nguyen-Tuong, D., Marco, A., Schaal, S., Toussaint, M., Trimpe, S.

Proceedings of Machine Learning Research, 78, pages: 227-238, (Editors: Sergey Levine and Vincent Vanhoucke and Ken Goldberg), 1st Annual Conference on Robot Learning, November 2017 (conference) Accepted

Abstract
We propose a novel long-term optimization criterion to improve the robustness of model-based reinforcement learning in real-world scenarios. Learning a dynamics model to derive a solution promises much greater data-efficiency and reusability compared to model-free alternatives. In practice, however, model-based RL suffers from various imperfections such as noisy input and output data, delays and unmeasured (latent) states. To achieve higher resilience against such effects, we propose to optimize a generative long-term prediction model directly with respect to the likelihood of observed trajectories as opposed to the common approach of optimizing a dynamics model for one-step-ahead predictions. We evaluate the proposed method on several artificial and real-world benchmark problems and compare it to PILCO, a model-based RL framework, in experiments on a manipulation robot. The results show that the proposed method is competitive compared to state-of-the-art model learning methods. In contrast to these more involved models, our model can directly be employed for policy search and outperforms a baseline method in the robot experiment.

am ics

PDF Project Page [BibTex]

Efficient Online Adaptation with Stochastic Recurrent Neural Networks

Tanneberg, D., Peters, J., Rueckert, E.

IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages: 198-204, IEEE, November 2017 (conference)

ei

DOI [BibTex]

Probabilistic Line Searches for Stochastic Optimization

Mahsereci, M., Hennig, P.

Journal of Machine Learning Research, 18(119):1-59, November 2017 (article)

pn

link (url) Project Page [BibTex]

Learning a model of facial shape and expression from 4D scans

Li, T., Bolkart, T., Black, M. J., Li, H., Romero, J.

ACM Transactions on Graphics, 36(6):194:1-194:17, November 2017, Two first authors contributed equally (article)

Abstract
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes, learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).

ps

data/model video paper supplemental [BibTex]

Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

Li, W., Bohg, J., Fritz, M.

arXiv, November 2017 (article) Submitted

Abstract
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies which are parametrized by a goal. We validated the model on a toy example navigating in a grid world with different target positions and in a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.

am

arXiv [BibTex]


Investigating Body Image Disturbance in Anorexia Nervosa Using Novel Biometric Figure Rating Scales: A Pilot Study

Mölbert, S. C., Thaler, A., Streuber, S., Black, M. J., Karnath, H., Zipfel, S., Mohler, B., Giel, K. E.

European Eating Disorders Review, 25(6):607-612, November 2017 (article)

Abstract
This study uses novel biometric figure rating scales (FRS) spanning body mass index (BMI) 13.8 to 32.2 kg/m2 and BMI 18 to 42 kg/m2. The aims of the study were (i) to compare FRS body weight dissatisfaction and perceptual distortion of women with anorexia nervosa (AN) to a community sample; (ii) to examine how FRS parameters are associated with questionnaire body dissatisfaction, eating disorder symptoms and appearance comparison habits; and (iii) to test whether the weight spectrum of the FRS matters. Women with AN (n = 24) and a community sample of women (n = 104) selected their current and ideal body on the FRS and completed additional questionnaires. Women with AN accurately picked the body that aligned best with their actual weight in both FRS. Controls underestimated their BMI in the FRS 14–32 and were accurate in the FRS 18–42. In both FRS, women with AN desired a body close to their actual BMI and controls desired a thinner body. Our observations suggest that body image disturbance in AN is unlikely to be characterized by a visual perceptual disturbance, but rather by an idealization of underweight in conjunction with high body dissatisfaction. The weight spectrum of FRS can influence the accuracy of BMI estimation.

ps

publisher DOI [BibTex]


Learning inverse dynamics models in O(n) time with LSTM networks

Rueckert, E., Nakatenus, M., Tosatto, S., Peters, J.

IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages: 811-816, IEEE, November 2017 (conference)

ei

DOI [BibTex]

Embodied Hands: Modeling and Capturing Hands and Bodies Together

Romero, J., Tzionas, D., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6):245:1-245:17, ACM, November 2017 (article)

Abstract
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.

ps

website youtube paper suppl video link (url) DOI Project Page [BibTex]

A Comparison of Distance Measures for Learning Nonparametric Motor Skill Libraries

Stark, S., Peters, J., Rueckert, E.

IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages: 624-630, IEEE, November 2017 (conference)

ei

DOI [BibTex]

Simulation of the underactuated Sake Robotics Gripper in V-REP

Thiem, S., Stark, S., Tanneberg, D., Peters, J., Rueckert, E.

Workshop at the International Conference on Humanoid Robots (HUMANOIDS), November 2017 (conference)

ei

link (url) [BibTex]

End-to-End Learning for Image Burst Deblurring

Wieschollek, P., Schölkopf, B., Lensch, H. P. A., Hirsch, M.

Computer Vision - ACCV 2016 - 13th Asian Conference on Computer Vision, 10114, pages: 35-51, Image Processing, Computer Vision, Pattern Recognition, and Graphics, (Editors: Lai, S.-H., Lepetit, V., Nishino, K., and Sato, Y.), Springer, November 2017 (conference)

ei

[BibTex]

Active Incremental Learning of Robot Movement Primitives

Maeda, G., Ewerton, M., Osa, T., Busch, B., Peters, J.

Proceedings of the 1st Annual Conference on Robot Learning (CoRL), 78, pages: 37-46, Proceedings of Machine Learning Research, (Editors: Sergey Levine, Vincent Vanhoucke and Ken Goldberg), PMLR, November 2017 (conference)

ei

link (url) [BibTex]

A Generative Model of People in Clothing

Lassner, C., Pons-Moll, G., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
We present the first image-based generative model of people in clothing in a full-body setting. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process in two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.

ps

link (url) [BibTex]

Semantic Video CNNs through Representation Warping

Gadde, R., Jampani, V., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings) Accepted

Abstract
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the standard CamVid and Cityscapes benchmark datasets and show reliable improvements over different baseline networks. Our code and models are available at http://segmentation.is.tue.mpg.de

ps

pdf Supplementary [BibTex]

Online Video Deblurring via Dynamic Temporal Blending Network

Kim, T. H., Lee, K. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4038-4047, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]

Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far strongest, in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

avg

pdf suppmat Poster [BibTex]



Thumb xl 2016 enhancenet
EnhanceNet: Single Image Super-Resolution through Automated Texture Synthesis

Sajjadi, M. S. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4501-4510, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

Arxiv Project link (url) DOI [BibTex]



no image
Learning Blind Motion Deblurring

Wieschollek, P., Hirsch, M., Schölkopf, B., Lensch, H.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 231-240, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]



Thumb xl jonas teaser
Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
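The sparse convolution layer described above normalizes each output by the number of valid inputs under the kernel window and propagates the validity mask. A minimal NumPy sketch under assumptions not in the abstract (a naive nested loop instead of a vectorized implementation, and a max-based mask update):

```python
import numpy as np

def sparse_conv2d(x, mask, kernel, eps=1e-8):
    # Sparsity-invariant convolution: invalid entries (mask == 0) are
    # zeroed before filtering and the response is normalized by the
    # number of valid pixels under the window, so the output does not
    # depend on the sparsity level of the input.
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)
    mp = np.pad(mask.astype(float), pad)
    H, W = x.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = xp[i:i + k, j:j + k]
            mwin = mp[i:i + k, j:j + k]
            out[i, j] = (win * kernel).sum() / (mwin.sum() + eps)
            new_mask[i, j] = mwin.max()  # valid if any input was valid
    return out, new_mask
```

With an averaging kernel, a constant input yields the same constant output wherever at least one valid pixel is visible, regardless of how sparse the mask is, which is the invariance property the paper exploits.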

avg

pdf suppmat [BibTex]



no image
Personalized Brain-Computer Interface Models for Motor Rehabilitation

Mastakouri, A., Weichwald, S., Ozdenizci, O., Meyer, T., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages: 3024-3029, October 2017 (conference)

ei

ArXiv PDF DOI [BibTex]



Thumb xl gernot teaser
OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
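The vanilla TSDF fusion baseline mentioned above (Curless and Levoy, 1996) is a per-voxel weighted running average of truncated signed distances. A minimal NumPy sketch; the function name and the flat per-voxel arrays are illustrative assumptions, and per-voxel visibility masking is omitted for brevity:

```python
import numpy as np

def tsdf_update(tsdf, weights, sdf_obs, obs_weight=1.0, trunc=0.1):
    # One fusion step: truncate the observed signed distances and
    # blend them into the accumulated TSDF with a weighted running
    # average:  D' = (W*D + w*d) / (W + w),  W' = W + w.
    d = np.clip(sdf_obs, -trunc, trunc)
    w_new = weights + obs_weight
    tsdf_new = (tsdf * weights + d * obs_weight) / w_new
    return tsdf_new, w_new
```

This averaging explains the baseline's weaknesses cited in the abstract: noise only cancels with many frames, and voxels never observed (occluded regions) stay empty, which is what the learned fusion network addresses.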

avg

pdf Video 1 Video 2 Project Page [BibTex]



no image
Improving performance of linear field generation with multi-coil setup by optimizing coils position

Aghaeifar, A., Loktyushin, A., Eschelbach, M., Scheffler, K.

Magnetic Resonance Materials in Physics, Biology and Medicine, 30(Supplement 1):S259, 34th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2017 (poster)

ei

link (url) DOI [BibTex]



Thumb xl cover tro paper
An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

IEEE Transactions on Robotics (T-RO), 33, pages: 1184 - 1199, October 2017 (article)

Abstract
In this article we present a unified approach for multi-robot cooperative simultaneous localization and object tracking based on particle filters. Our approach is scalable with respect to the number of robots in the team. We introduce a method that reduces, from an exponential to a linear growth, the space and computation time requirements with respect to the number of robots in order to maintain a given level of accuracy in the full state estimation. Our method requires no increase in the number of particles with respect to the number of robots. However, in our method each particle represents a full state hypothesis, so both space and time complexity depend only linearly on the number of robots. The derivation of the algorithm implementing our approach from a standard particle filter algorithm and its complexity analysis are presented. Through an extensive set of simulation experiments on a large number of randomized datasets, we demonstrate the correctness and efficacy of our approach. Through real robot experiments on a standardized open dataset of a team of four soccer playing robots tracking a ball, we evaluate our method's estimation accuracy with respect to the ground truth values. Through comparisons with other methods based on i) nonlinear least squares minimization and ii) joint extended Kalman filter, we further highlight our method's advantages. Finally, we also present a robustness test for our approach by evaluating it under scenarios of communication and vision failure in teammate robots.
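The full-state-hypothesis idea above can be illustrated with a toy particle filter step. This NumPy sketch is hypothetical throughout: the paper fuses inter-robot and object observations rather than the direct position measurements assumed here, and `pf_step` with its 2D team state is an invented simplification. The point it demonstrates is structural: each particle carries all R robot positions, so the particle count N is independent of R and each step costs O(N * R).

```python
import numpy as np

def pf_step(particles, weights, motions, meas,
            meas_sigma=0.2, motion_sigma=0.05, rng=None):
    # particles: (N, R, 2) -- N hypotheses of the FULL team state
    # (R robot positions), the key to linear scaling in R.
    # motions, meas: (R, 2) per-robot odometry and (assumed) position fixes.
    rng = np.random.default_rng() if rng is None else rng
    N = len(particles)
    # predict: apply odometry plus noise to every robot in every particle
    particles = particles + motions + rng.normal(0, motion_sigma, particles.shape)
    # update: weight by the joint likelihood over all robots
    err = ((particles - meas) ** 2).sum(axis=(1, 2))
    weights = weights * np.exp(-err / (2 * meas_sigma ** 2))
    weights = weights / weights.sum()
    # resample to uniform weights
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)
```

A joint filter over the concatenated state would instead need a particle set whose size grows exponentially with R to keep the same accuracy, which is the growth the paper's method avoids.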

ps

Published Version link (url) DOI Project Page [BibTex]


no image
Generalized exploration in policy search

van Hoof, H., Tanneberg, D., Peters, J.

Machine Learning, 106(9-10):1705-1724 , (Editors: Kurt Driessens, Dragi Kocev, Marko Robnik‐Sikonja, and Myra Spiliopoulou), October 2017, Special Issue of the ECML PKDD 2017 Journal Track (article)

ei

DOI [BibTex]



no image
Multi-frame blind image deconvolution through split frequency - phase recovery

Gauci, A., Abela, J., Cachia, E., Hirsch, M., ZarbAdami, K.

Proc. SPIE 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016), pages: 1022511, (Editors: Yulin Wang, Tuan D. Pham, Vit Vozenilek, David Zhang, Yi Xie), October 2017 (conference)

ei

DOI [BibTex]



no image
Editorial for the Special Issue on Microdevices and Microsystems for Cell Manipulation

Hu, W., Ohta, A. T.

8, Multidisciplinary Digital Publishing Institute, September 2017 (misc)

pi

DOI [BibTex]



Thumb xl 7 byung fig
Multifunctional Bacteria-Driven Microswimmers for Targeted Active Drug Delivery

Park, B., Zhuang, J., Yasa, O., Sitti, M.

ACS Nano, 11(9):8910-8923, September 2017, PMID: 28873304 (article)

Abstract
High-performance, multifunctional bacteria-driven microswimmers are introduced using an optimized design and fabrication method for targeted drug delivery applications. These microswimmers mostly consist of a single Escherichia coli bacterium attached to the surface of a drug-loaded polyelectrolyte multilayer (PEM) microparticle with embedded magnetic nanoparticles. The PEM drug carriers are 1 μm in diameter and are intentionally fabricated with a more viscoelastic material than the particles previously studied in the literature. The resulting stochastic microswimmers are able to swim at mean speeds of up to 22.5 μm/s. They can be guided and targeted to specific cells, because they exhibit biased and directional motion under a chemoattractant gradient and a magnetic field, respectively. Moreover, we demonstrate the microswimmers delivering doxorubicin anticancer drug molecules, encapsulated in the polyelectrolyte multilayers, to 4T1 breast cancer cells under magnetic guidance in vitro. The results reveal the feasibility of using these active multifunctional bacteria-driven microswimmers to perform targeted drug delivery with significantly enhanced drug transfer, when compared with the passive PEM microparticles.

pi

link (url) DOI Project Page Project Page [BibTex]


Thumb xl admi201770108 gra 0001 m
Active Acoustic Surfaces Enable the Propulsion of a Wireless Robot

Qiu, T., Palagi, S., Mark, A. G., Melde, K., Adams, F., Fischer, P.

Advanced Materials Interfaces, 4(21):1700933, September 2017 (article)

Abstract
A major challenge that prevents the miniaturization of mechanically actuated systems is the lack of suitable methods that permit the efficient transfer of power to small scales. Acoustic energy holds great potential, as it is wireless, penetrates deep into biological tissues, and the mechanical vibrations can be directly converted into directional forces. Recently, active acoustic surfaces have been developed that consist of 2D arrays of microcavities holding microbubbles that can be excited with an external acoustic field. At resonance, the surfaces give rise to acoustic streaming and thus provide a highly directional propulsive force. Here, this study advances these wireless surface actuators by studying their force output as the size of the bubble-array is increased. In particular, a general method is reported to dramatically improve the propulsive force, demonstrating that the surface actuators are actually able to propel centimeter-scale devices. To prove the flexibility of the functional surfaces as a wireless, ready-to-attach actuator, a mobile mini-robot capable of propulsion in water along multiple directions is presented. This work paves the way toward effectively exploiting acoustic surfaces as a novel wireless actuation scheme at small scales.

pf

link (url) DOI [BibTex]