2017


Semantic Video CNNs through Representation Warping

Gadde, R., Jampani, V., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings) Accepted

Abstract
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the standard CamVid and Cityscapes benchmark datasets and show reliable improvements over different baseline networks. Our code and models are available at http://segmentation.is.tue.mpg.de
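The warping step at the core of this idea can be illustrated with a short sketch (a minimal re-implementation of flow-based feature warping assuming a recent PyTorch; tensor names and the fusion weights are placeholders, not the authors' released code):

import torch
import torch.nn.functional as F

def warp_features(prev_feat, flow):
    # prev_feat: (B, C, H, W) intermediate activations of frame t-1
    # flow:      (B, 2, H, W) optical flow from frame t back to frame t-1, in pixels
    b, _, h, w = prev_feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(prev_feat.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                                  # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    cx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    cy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((cx, cy), dim=-1)                        # (B, H, W, 2)
    return F.grid_sample(prev_feat, sample_grid, align_corners=True)

# A NetWarp-style module would then combine warped and current features, e.g.
# fused = w1 * cur_feat + w2 * warp_features(prev_feat, flow), with learned weights w1, w2.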

ps

pdf Supplementary Project Page [BibTex]

Online Video Deblurring via Dynamic Temporal Blending Network

Kim, T. H., Lee, K. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4038-4047, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]

Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

avg

pdf suppmat Poster Project Page [BibTex]

EnhanceNet: Single Image Super-Resolution through Automated Texture Synthesis

Sajjadi, M. S. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4501-4510, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

Arxiv Project link (url) DOI [BibTex]

Learning Blind Motion Deblurring

Wieschollek, P., Hirsch, M., Schölkopf, B., Lensch, H.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 231-240, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]

Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
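A sparsity-aware convolution of this kind can be sketched roughly as follows (an illustrative PyTorch re-implementation under my own assumptions, not the authors' code; the paper additionally adds the bias only after the normalization, which is omitted here):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseConv(nn.Module):
    # Convolution over sparse inputs, normalized by the number of valid pixels
    # under each kernel window.
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=self.pad, bias=False)

    def forward(self, x, mask):
        # x:    (B, C, H, W) sparse input, e.g. projected laser depth
        # mask: (B, 1, H, W) binary validity map, 1 where a measurement exists
        feat = self.conv(x * mask)
        ones = torch.ones(1, 1, self.kernel_size, self.kernel_size, device=mask.device)
        norm = F.conv2d(mask, ones, padding=self.pad).clamp(min=1e-5)
        feat = feat / norm
        # An output pixel is valid if any input pixel in its window was valid.
        new_mask = F.max_pool2d(mask, self.kernel_size, stride=1, padding=self.pad)
        return feat, new_mask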

avg

pdf suppmat Project Page Project Page [BibTex]

Personalized Brain-Computer Interface Models for Motor Rehabilitation

Mastakouri, A., Weichwald, S., Ozdenizci, O., Meyer, T., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages: 3024-3029, October 2017 (conference)

ei

ArXiv PDF DOI Project Page [BibTex]

OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
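For reference, the classical TSDF-averaging baseline that such learned fusion is compared against can be sketched in a few lines (a minimal NumPy illustration of Curless-and-Levoy-style weighted averaging; variable names and the weighting scheme are simplifications):

import numpy as np

def fuse_tsdf(tsdf_acc, weight_acc, tsdf_new, weight_new, trunc=1.0):
    # tsdf_acc, weight_acc: fused TSDF volume and per-voxel weights so far (same shape)
    # tsdf_new, weight_new: TSDF from the newest depth map and its per-voxel weights
    #                       (weight 0 for voxels the new depth map does not observe)
    tsdf_new = np.clip(tsdf_new, -trunc, trunc)
    total_w = weight_acc + weight_new
    fused = (weight_acc * tsdf_acc + weight_new * tsdf_new) / np.maximum(total_w, 1e-8)
    return fused, total_w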

avg

pdf Video 1 Video 2 Project Page Project Page [BibTex]

Multi-frame blind image deconvolution through split frequency - phase recovery

Gauci, A., Abela, J., Cachia, E., Hirsch, M., ZarbAdami, K.

Proc. SPIE 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016), pages: 1022511, (Editors: Yulin Wang, Tuan D. Pham, Vit Vozenilek, David Zhang, Yi Xie), October 2017 (conference)

ei

DOI [BibTex]

Editorial for the Special Issue on Microdevices and Microsystems for Cell Manipulation

Hu, W., Ohta, A. T.

8, Multidisciplinary Digital Publishing Institute, September 2017 (misc)

pi

DOI [BibTex]

Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV) which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy.

avg

pdf Project Page [BibTex]

A New Data Source for Inverse Dynamics Learning

Kappler, D., Meier, F., Ratliff, N., Schaal, S.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

am

[BibTex]

Closing One’s Eyes Affects Amplitude Modulation but Not Frequency Modulation in a Cognitive BCI

Görner, M., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 165-170, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]

A Guided Task for Cognitive Brain-Computer Interfaces

Moser, J., Hohmann, M. R., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 326-331, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]

Bayesian Regression for Artifact Correction in Electroencephalography

Fiebig, K., Jayaram, V., Hesse, T., Blank, A., Peters, J., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 131-136, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

am ei

DOI [BibTex]

Investigating Music Imagery as a Cognitive Paradigm for Low-Cost Brain-Computer Interfaces

Grossberger, L., Hohmann, M. R., Peters, J., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 160-164, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

am ei

DOI [BibTex]

Correlations of Motor Adaptation Learning and Modulation of Resting-State Sensorimotor EEG Activity

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 384-388, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]

Weakly-Supervised Localization of Diabetic Retinopathy Lesions in Retinal Fundus Images

Gondal, M. W., Köhler, J. M., Grzeszick, R., Fink, G., Hirsch, M.

IEEE International Conference on Image Processing (ICIP), pages: 2069-2073, September 2017 (conference)

ei

arXiv DOI [BibTex]

Assisting the practice of motor skills by humans with a probability distribution over trajectories

Ewerton, M., Maeda, G., Rother, D., Weimar, J., Lotter, L., Kollegger, G., Wiemeyer, J., Peters, J.

In Workshop Human-in-the-loop robotic manipulation: on the influence of the human role at IROS, September 2017 (inproceedings)

ei

link (url) [BibTex]

BIMROB – Bidirectional Interaction Between Human and Robot for the Learning of Movements

Kollegger, G., Ewerton, M., Wiemeyer, J., Peters, J.

Proceedings of the 11th International Symposium on Computer Science in Sport (IACSS), (663):151-163, Advances in Intelligent Systems and Computing, (Editors: Lames M., Saupe D. and Wiemeyer J.), Springer International Publishing, September 2017 (conference)

ei

DOI [BibTex]

Goal-driven dimensionality reduction for reinforcement learning

Parisi, S., Ramstedt, S., Peters, J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 4634-4639, IEEE, September 2017 (conference)

ei

DOI Project Page [BibTex]

On the relevance of grasp metrics for predicting grasp success

Rubert, C., Kappler, D., Morales, A., Schaal, S., Bohg, J.

In Proceedings of the IEEE/RSJ International Conference of Intelligent Robots and Systems, September 2017 (inproceedings) Accepted

Abstract
We aim to reliably predict whether a grasp on a known object is successful before it is executed in the real world. There is an entire suite of grasp metrics that has already been developed, which relies on precisely known contact points between object and hand. However, it remains unclear whether and how they may be combined into a general purpose grasp stability predictor. In this paper, we analyze these questions by leveraging a large scale database of simulated grasps on a wide variety of objects. For each grasp, we compute the value of seven metrics. Each grasp is annotated by human subjects with ground truth stability labels. Given this data set, we train several classification methods to find out whether there is some underlying, non-trivial structure in the data that is difficult to model manually but can be learned. Quantitative and qualitative results show the complexity of the prediction problem. We found that good prediction performance critically depends on using a combination of metrics as input features. Furthermore, non-parametric and non-linear classifiers best capture the structure in the data.
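The evaluation protocol described here amounts to supervised classification on a feature vector of grasp metrics; a minimal sketch with scikit-learn and placeholder data (not the paper's database or its exact set of classifiers) looks like this:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: one row per simulated grasp, one column per analytic grasp metric
#    (seven metrics in the paper); y: human-provided stability label.
# Random placeholders stand in for the actual database here.
rng = np.random.default_rng(0)
X = rng.random((1000, 7))
y = (X.sum(axis=1) > 3.5).astype(int)

# A non-linear, non-parametric classifier operating on the combined metrics,
# in line with the finding that such models capture the structure best.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())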

am

Project Page [BibTex]

Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes

Alhaija, H. A., Mustikovela, S. K., Mescheder, L., Geiger, A., Rother, C.

In Proceedings of the British Machine Vision Conference 2017, Proceedings of the British Machine Vision Conference, September 2017 (inproceedings)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D shapes of the target object category. We demonstrate the utility of the proposed approach for training a state-of-the-art high-capacity deep model for semantic instance segmentation. In particular, we consider the task of segmenting car instances on the KITTI dataset which we have annotated with pixel-accurate ground truth. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or models trained on limited amounts of annotated real data.
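The core augmentation step, compositing a rendered object over a real background image, can be sketched as follows (a minimal NumPy illustration under my own assumptions; the full pipeline additionally handles object placement, lighting and post-processing):

import numpy as np

def augment_with_virtual_object(background, rendering, alpha):
    # background, rendering: (H, W, 3) float images in [0, 1]
    # alpha: (H, W) mask of the rendered object (1 = object pixel, 0 = background)
    a = alpha[..., None]
    return a * rendering + (1.0 - a) * background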

avg

pdf Project Page [BibTex]

Swimming in low reynolds numbers using planar and helical flagellar waves

Khalil, I. S. M., Tabak, A. F., Seif, M. A., Klingner, A., Adel, B., Sitti, M.

In International Conference on Intelligent Robots and Systems (IROS) 2017, pages: 1907-1912, International Conference on Intelligent Robots and Systems, September 2017 (inproceedings)

Abstract
In travelling towards the oviducts, sperm cells undergo transitions from planar to helical flagellar propulsion by a beating tail based on the viscosity of the environment. In this work, we aim to model and mimic this behaviour in low Reynolds number fluids using externally actuated soft robotic sperms. We numerically investigate the effects of the transition from planar to helical flagellar propulsion on the swimming characteristics of the robotic sperm using a model based on resistive-force theory to study the role of viscous forces on its flexible tail. Experimental results are obtained using robots that contain magnetic particles within the polymer matrix of their heads and have an ultra-thin flexible tail. The planar and helical flagellar propulsion are achieved using in-plane and out-of-plane uniform fields with sinusoidally varying components, respectively. We experimentally show that the swimming speed of the robotic sperm increases by a factor of 1.4 (fluid viscosity 5 Pa.s) when it undergoes a controlled transition from planar to helical flagellar propulsion, at relatively low actuation frequencies.

pi

DOI [BibTex]

Effects of animation retargeting on perceived action outcomes

Kenny, S., Mahmood, N., Honda, C., Black, M. J., Troje, N. F.

Proceedings of the ACM Symposium on Applied Perception (SAP’17), pages: 2:1-2:7, September 2017 (conference)

Abstract
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person's movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight, while they pushed, lifted, and threw objects. Based on a set of 67 markers, we estimated both the kinematics of the actions as well as the performer's individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. In a virtual reality environment, observers rated the perceived weight or thrown distance of the objects. They were also asked to explicitly discriminate between consistent and hybrid stimuli. Observers were unable to accomplish the latter, but hybridization of shape and motion influenced their judgements of action outcome in systematic ways. Inconsistencies between shape and motion were assimilated into an altered perception of the action outcome.

ps

pdf DOI [BibTex]

Hybrid control trajectory optimization under uncertainty

Pajarinen, J., Kyrki, V., Koval, M., Srinivasa, S., Peters, J., Neumann, G.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 5694-5701, September 2017 (conference)

ei

DOI Project Page [BibTex]

Local Bayesian Optimization of Motor Skills

Akrour, R., Sorokin, D., Peters, J., Neumann, G.

Proceedings of the 34th International Conference on Machine Learning, 70, pages: 41-50, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (conference)

am ei

link (url) Project Page [BibTex]

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the 34th International Conference on Machine Learning, 70, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (inproceedings)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.
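The structure of the objective can be sketched as follows (an illustrative PyTorch fragment under my own assumptions; encoder, decoder and adversary are placeholder modules updated by two separate optimizers, and this is not the authors' implementation):

import torch
import torch.nn.functional as F

def avb_losses(encoder, decoder, adversary, x):
    # encoder.noise_dim is an assumed attribute of the placeholder encoder module.
    eps = torch.randn(x.size(0), encoder.noise_dim, device=x.device)
    z_q = encoder(x, eps)               # implicit posterior sample z ~ q(z|x)
    z_p = torch.randn_like(z_q)         # prior sample z ~ p(z)

    # Adversary T(x, z): logistic regression between posterior and prior samples;
    # at the optimum, T*(x, z) = log q(z|x) - log p(z).
    t_q = adversary(x, z_q.detach())
    t_p = adversary(x, z_p)
    loss_T = F.binary_cross_entropy_with_logits(t_q, torch.ones_like(t_q)) + \
             F.binary_cross_entropy_with_logits(t_p, torch.zeros_like(t_p))

    # Encoder/decoder: reconstruction term plus T(x, z) standing in for the KL term
    # (x is assumed to be a binarized image in [0, 1] here).
    recon_logits = decoder(z_q)
    rec = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum") / x.size(0)
    loss_vae = rec + adversary(x, z_q).mean()
    return loss_vae, loss_T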

avg

pdf suppmat Project Page arxiv-version Project Page [BibTex]

Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control

Jaques, N., Gu, S., Bahdanau, D., Hernández-Lobato, J. M., Turner, R. E., Eck, D.

Proceedings of the 34th International Conference on Machine Learning, 70, pages: 1645-1654, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (conference)

ei

Arxiv link (url) Project Page [BibTex]

Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning

Chebotar, Y., Hausman, K., Zhang, M., Sukhatme, G., Schaal, S., Levine, S.

Proceedings of the 34th International Conference on Machine Learning, 70, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (conference)

am

pdf video [BibTex]

Lost Relatives of the Gumbel Trick

Balog, M., Tripuraneni, N., Ghahramani, Z., Weller, A.

Proceedings of the 34th International Conference on Machine Learning, 70, pages: 371-379, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (conference)

ei

Code link (url) Project Page [BibTex]

Approximate Steepest Coordinate Descent

Stich, S., Raj, A., Jaggi, M.

Proceedings of the 34th International Conference on Machine Learning, 70, pages: 3251-3259, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (conference)

ei

link (url) Project Page [BibTex]

Stiffness Perception during Pinching and Dissection with Teleoperated Haptic Forceps

Ng, C., Zareinia, K., Sun, Q., Kuchenbecker, K. J.

In Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 456-463, Lisbon, Portugal, August 2017 (inproceedings)

hi

link (url) DOI [BibTex]

Causal Consistency of Structural Equation Models

Rubenstein*, P. K., Weichwald*, S., Bongers, S., Mooij, J. M., Janzing, D., Grosse-Wentrup, M., Schölkopf, B.

Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), (Editors: Gal Elidan, Kristian Kersting, and Alexander T. Ihler), Association for Uncertainty in Artificial Intelligence (AUAI), Conference on Uncertainty in Artificial Intelligence (UAI), August 2017, *equal contribution (conference)

ei

Arxiv PDF link (url) [BibTex]

Causal Discovery from Temporally Aggregated Time Series

Gong, M., Zhang, K., Schölkopf, B., Glymour, C., Tao, D.

Proceedings Conference on Uncertainty in Artificial Intelligence (UAI) 2017, pages: ID 269, (Editors: Gal Elidan, Kristian Kersting, and Alexander T. Ihler), Association for Uncertainty in Artificial Intelligence (AUAI), Conference on Uncertainty in Artificial Intelligence (UAI), August 2017 (conference)

ei

link (url) [BibTex]

Coupling Adaptive Batch Sizes with Learning Rates

Balles, L., Romero, J., Hennig, P.

In Proceedings Conference on Uncertainty in Artificial Intelligence (UAI) 2017, pages: 410-419, (Editors: Gal Elidan and Kristian Kersting), Association for Uncertainty in Artificial Intelligence (AUAI), Conference on Uncertainty in Artificial Intelligence (UAI), August 2017 (inproceedings)

Abstract
Mini-batch stochastic gradient descent and variants thereof have become standard for large-scale empirical risk minimization like the training of neural networks. These methods are usually used with a constant batch size chosen by simple empirical inspection. The batch size significantly influences the behavior of the stochastic optimization algorithm, though, since it determines the variance of the gradient estimates. This variance also changes over the optimization process; when using a constant batch size, stability and convergence are thus often enforced by means of a (manually tuned) decreasing learning rate schedule. We propose a practical method for dynamic batch size adaptation. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance proportionally to the value of the objective function, removing the need for the aforementioned learning rate decrease. In contrast to recent related work, our algorithm couples the batch size to the learning rate, directly reflecting the known relationship between the two. On three image classification benchmarks, our batch size adaptation yields faster optimization convergence, while simultaneously simplifying learning rate tuning. A TensorFlow implementation is available.
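The coupling described above can be sketched in a few lines (a simplified illustration with my own clipping bounds, not the authors' TensorFlow implementation):

def next_batch_size(lr, grad_var_sum, loss, m_min=16, m_max=4096):
    # lr: learning rate; grad_var_sum: estimated sum of single-sample gradient
    # variances over all parameters; loss: current value of the objective.
    m = lr * grad_var_sum / max(loss, 1e-12)     # variance shrinks with the objective
    return int(min(max(round(m), m_min), m_max))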

ps pn

Code link (url) Project Page [BibTex]

Physical and Behavioral Factors Improve Robot Hug Quality

Block, A. E., Kuchenbecker, K. J.

Workshop Paper (2 pages) presented at the RO-MAN Workshop on Social Interaction and Multimodal Expression for Socially Intelligent Robots, Lisbon, Portugal, August 2017 (misc)

Abstract
A hug is one of the most basic ways humans can express affection. As hugs are so common, a natural progression of robot development is to have robots one day hug humans as seamlessly as these intimate human-human interactions occur. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a warm, soft, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics and nine randomly ordered trials with varied hug pressure and duration. We found that people prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end.

hi

Project Page [BibTex]

Causal Discovery from Nonstationary/Heterogeneous Data: Skeleton Estimation and Orientation Determination

Zhang, K., Huang, B., Zhang, J., Glymour, C., Schölkopf, B.

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pages: 1347-1353, (Editors: Carles Sierra), August 2017 (conference)

ei

PDF DOI [BibTex]

Crowdshaping Realistic 3D Avatars with Words

Streuber, S., Ramirez, M. Q., Black, M., Zuffi, S., O’Toole, A., Hill, M. Q., Hahn, C. A.

August 2017, Application PCT/EP2017/051954 (misc)

Abstract
A method for generating a body shape, comprising the steps: - receiving one or more linguistic descriptors related to the body shape; - retrieving an association between the one or more linguistic descriptors and a body shape; and - generating the body shape, based on the association.
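One way such an association could be realized is a regression from descriptor ratings to shape-space coefficients; the following is a purely hypothetical sketch with scikit-learn and placeholder data, not the patented method itself:

import numpy as np
from sklearn.linear_model import Ridge

# Placeholder training data: ratings of 30 linguistic descriptors per training body,
# and 10 shape-space coefficients per body (both dimensionalities are assumptions).
ratings = np.random.rand(500, 30)
betas = np.random.randn(500, 10)

model = Ridge(alpha=1.0).fit(ratings, betas)

def generate_body_shape(descriptor_ratings):
    # descriptor_ratings: (30,) vector of ratings for the linguistic descriptors.
    return model.predict(descriptor_ratings[None, :])[0]   # predicted shape coefficients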

ps

Google Patents [BibTex]

An XY ϴz flexure mechanism with optimal stiffness properties

Lum, G. Z., Pham, M. T., Teo, T. J., Yang, G., Yeo, S. H., Sitti, M.

In 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pages: 1103-1110, July 2017 (inproceedings)

Abstract
The development of optimal XY θz flexure mechanisms, which can deliver high-precision motion about the z-axis and along the x- and y-axes, is highly desirable for a wide range of micro/nano-positioning tasks pertaining to biomedical research, microscopy technologies and various industrial applications. Although maximizing the stiffness ratios is a very critical design requirement, the achievable translational and rotational stiffness ratios of existing XY θz flexure mechanisms are still restricted between 0.5 and 130. As a result, these XY θz flexure mechanisms are unable to fully optimize their workspace and capabilities to reject disturbances. Here, we present an optimal XY θz flexure mechanism, which is designed to have maximum stiffness ratios. Based on finite element analysis (FEA), it has a translational stiffness ratio of 248, a rotational stiffness ratio of 238 and a large workspace of 2.50 mm × 2.50 mm × 10°. Despite having such a large workspace, FEA also predicts that the proposed mechanism can still achieve a high bandwidth of 70 Hz. In comparison, the bandwidth of similar existing flexure mechanisms that can deflect more than 0.5 mm or 0.5° is typically less than 45 Hz. Hence, the high stiffness ratios of the proposed mechanism are achieved without compromising its dynamic performance. Preliminary experimental results pertaining to the mechanism's translational actuating stiffness and bandwidth were in agreement with the FEA predictions as the deviation was within 10%. In conclusion, the proposed flexure mechanism exhibits superior performance and can be used across a wide range of applications.

pi

DOI [BibTex]

Positioning of drug carriers using permanent magnet-based robotic system in three-dimensional space

Khalil, I. S. M., Alfar, A., Tabak, A. F., Klingner, A., Stramigioli, S., Sitti, M.

In 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pages: 1117-1122, July 2017 (inproceedings)

Abstract
Magnetic control of drug carriers using systems with open-configurations is essential to enable scaling to the size of in vivo applications. In this study, we demonstrate motion control of paramagnetic microparticles in a low Reynolds number fluid, using a permanent magnet-based robotic system with an open-configuration. The microparticles are controlled in three-dimensional (3D) space using a cylindrical NdFeB magnet that is fixed to the end-effector of a robotic arm. We develop a kinematic map between the position of the microparticles and the configuration of the robotic arm, and use this map as a basis of a closed-loop control system based on the position of the microparticles. Our experimental results show the ability of the robot configuration to control the exerted field gradient on the dipole of the microparticles, and achieve positioning in 3D space with maximum error of 300 µm and 600 µm in the steady-state during setpoint and trajectory tracking, respectively.

pi

DOI [BibTex]

Self-assembly of micro/nanosystems across scales and interfaces

Mastrangeli, M.

In 2017 19th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS), pages: 676 - 681, IEEE, July 2017 (inproceedings)

Abstract
Steady progress in understanding and implementation are establishing self-assembly as a versatile, parallel and scalable approach to the fabrication of transducers. In this contribution, I illustrate the principles and reach of self-assembly with three applications at different scales - namely, the capillary self-alignment of millimetric components, the sealing of liquid-filled polymeric microcapsules, and the accurate capillary assembly of single nanoparticles - and propose foreseeable directions for further developments.

pi

link (url) DOI [BibTex]

Joint Graph Decomposition and Node Labeling by Local Search

Levinkov, E., Uhrig, J., Tang, S., Omran, M., Insafutdinov, E., Kirillov, A., Rother, C., Brox, T., Schiele, B., Andres, B.

In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 1904-1912, IEEE, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

ps

PDF Supplementary DOI Project Page [BibTex]

Dynamic FAUST: Registering Human Bodies in Motion

Bogo, F., Romero, J., Pons-Moll, G., Black, M. J.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
While the ready availability of 3D scan data has influenced research throughout computer vision, less attention has focused on 4D data; that is, 3D scans of moving nonrigid objects, captured over time. To be useful for vision research, such 4D scans need to be registered, or aligned, to a common topology. Consequently, extending mesh registration methods to 4D is important. Unfortunately, no ground-truth datasets are available for quantitative evaluation and comparison of 4D registration methods. To address this we create a novel dataset of high-resolution 4D scans of human subjects in motion, captured at 60 fps. We propose a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology. The approach exploits consistency in texture over both short and long time intervals and deals with temporal offsets between shape and texture capture. We show how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid. We evaluate the accuracy of our registration and provide a dataset of 40,000 raw and aligned meshes. Dynamic FAUST extends the popular FAUST dataset to dynamic 4D data, and is available for research purposes at http://dfaust.is.tue.mpg.de.

ps

pdf video Project Page Project Page Project Page [BibTex]

Learning from Synthetic Humans

Varol, G., Romero, J., Martin, X., Mahmood, N., Black, M. J., Laptev, I., Schmid, C.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.

ps

arXiv project data Project Page Project Page [BibTex]

On human motion prediction using recurrent neural networks

Martinez, J., Black, M. J., Romero, J.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality. Following the success of deep learning methods in several computer vision tasks, recent work has focused on using deep recurrent neural networks (RNNs) to model human motion, with the goal of learning time-dependent representations that perform tasks such as short-term motion prediction and long-term human motion synthesis. We examine recent work, with a focus on the evaluation methodologies commonly used in the literature, and show that, surprisingly, state-of-the-art performance can be achieved by a simple baseline that does not attempt to model motion at all. We investigate this result, and analyze recent RNN methods by looking at the architectures, loss functions, and training procedures used in state-of-the-art approaches. We propose three changes to the standard RNN models typically used for human motion, which result in a simple and scalable RNN architecture that obtains state-of-the-art performance on human motion prediction.
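The baseline referred to above, which does not attempt to model motion at all, is essentially a zero-velocity prediction; a minimal sketch (illustrative NumPy code, with the array layout assumed):

import numpy as np

def zero_velocity_baseline(observed_poses, horizon):
    # observed_poses: (T, D) sequence of joint-angle vectors; horizon: frames to predict.
    last = observed_poses[-1]
    return np.repeat(last[None, :], horizon, axis=0)   # (horizon, D): repeat the last pose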

ps

arXiv Project Page [BibTex]

Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data

Janai, J., Güney, F., Wulff, J., Black, M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages: 1406-1416, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. Besides, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur.
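The pixel-tracking idea can be illustrated by chaining many small inter-frame flows from the high-speed sequence into one large-displacement flow field (a rough sketch using OpenCV's remap; the small-flow estimator itself and the paper's occlusion reasoning are omitted):

import numpy as np
import cv2

def chain_flows(flows):
    # flows: list of (H, W, 2) float32 flow fields between consecutive high-speed
    # frames; returns the composed flow from the first to the last frame.
    h, w, _ = flows[0].shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    total = np.zeros((h, w, 2), np.float32)
    for f in flows:
        # Sample the next small flow at the end points reached so far, then add it.
        map_x = xs + total[..., 0]
        map_y = ys + total[..., 1]
        fx = cv2.remap(np.ascontiguousarray(f[..., 0], dtype=np.float32), map_x, map_y, cv2.INTER_LINEAR)
        fy = cv2.remap(np.ascontiguousarray(f[..., 1], dtype=np.float32), map_x, map_y, cv2.INTER_LINEAR)
        total[..., 0] += fx
        total[..., 1] += fy
    return total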

avg ps

pdf suppmat Project page Video DOI Project Page [BibTex]

Optical Flow in Mostly Rigid Scenes

Wulff, J., Sevilla-Lara, L., Black, M. J.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages: 6911-6920, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
The optical flow of natural scenes is a combination of the motion of the observer and the independent motion of objects. Existing algorithms typically focus on either recovering motion and structure under the assumption of a purely static world or optical flow for general unconstrained scenes. We combine these approaches in an optical flow algorithm that estimates an explicit segmentation of moving objects from appearance and physical constraints. In static regions we take advantage of strong constraints to jointly estimate the camera motion and the 3D structure of the scene over multiple frames. This allows us to also regularize the structure instead of the motion. Our formulation uses a Plane+Parallax framework, which works even under small baselines, and reduces the motion estimation to a one-dimensional search problem, resulting in more accurate estimation. In moving regions the flow is treated as unconstrained, and computed with an existing optical flow method. The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art results on both the MPI-Sintel and KITTI-2015 benchmarks.

ps

pdf SupMat video code Project Page [BibTex]

OctNet: Learning Deep 3D Representations at High Resolutions

Riegler, G., Ulusoy, O., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows us to focus memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.

avg ps

pdf suppmat Project Page Video Project Page [BibTex]

Flexible Spatio-Temporal Networks for Video Prediction

Lu, C., Hirsch, M., Schölkopf, B.

Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages: 2137-2145, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (conference)

ei

link (url) DOI [BibTex]
