

2017


The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Advances in Neural Information Processing Systems 30 (NIPS 2017), (Editors: Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R.), Curran Associates, Inc., December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games, we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) the presence of eigenvalues of the Jacobian of the gradient vector field with zero real part, and ii) eigenvalues with a large imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
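
To see the two failure factors concretely, consider the toy zero-sum game f(x, y) = x·y, where x descends and y ascends. The numpy sketch below (our illustration, not the authors' code) computes the eigenvalues of the Jacobian of the gradient vector field and shows how a consensus-style regularizer, in the spirit of the algorithm proposed in the paper, shifts them into the left half-plane:

```python
# Minimal numpy sketch (not the authors' code) of the eigenvalue analysis on a
# toy zero-sum bilinear game f(x, y) = x * y, where x descends and y ascends.
import numpy as np

# Gradient vector field v(x, y) = (-df/dx, +df/dy) = (-y, x); its Jacobian is
# constant for this bilinear game.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(A))      # [0.+1.j, 0.-1.j]: zero real part, so plain
                                 # simultaneous gradient descent only rotates.

# Consensus-style regularization (the flavor of fix the paper proposes):
# follow w = v - gamma * grad(0.5 * ||v||^2), which here adds -gamma * (x, y).
gamma = 0.1
A_reg = A - gamma * np.eye(2)
print(np.linalg.eigvals(A_reg))  # real part -gamma < 0: the equilibrium
                                 # becomes attractive.

# Simulate the regularized update; the iterates now spiral inward.
z = np.array([1.0, 1.0])
for _ in range(1000):
    z = z + 0.01 * (A_reg @ z)
print(np.linalg.norm(z))         # decays toward 0
```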

pdf Project Page [BibTex]


Synchronicity Trumps Mischief in Rhythmic Human-Robot Social-Physical Interaction

Fitter, N. T., Kuchenbecker, K. J.

In Proceedings of the International Symposium on Robotics Research (ISRR), Puerto Varas, Chile, December 2017 (inproceedings), in press

Abstract
Hand-clapping games and other forms of rhythmic social-physical interaction might help foster human-robot teamwork, but the design of such interactions has scarcely been explored. We leveraged our prior work to enable the Rethink Robotics Baxter Research Robot to competently play one-handed tempo-matching hand-clapping games with a human user. To understand how such a robot’s capabilities and behaviors affect user perception, we created four versions of this interaction: the hand clapping could be initiated by either the robot or the human, and the non-initiating partner could be either cooperative, yielding synchronous motion, or mischievously uncooperative. Twenty adults tested two clapping tempos in each of these four interaction modes in a random order, rating every trial on standardized scales. The study results showed that having the robot initiate the interaction gave it a more dominant perceived personality. Despite previous results on the intrigue of misbehaving robots, we found that moving synchronously with the robot almost always made the interaction more enjoyable, less mentally taxing, less physically demanding, and lower effort for users than asynchronous interactions caused by robot or human mischief. Taken together, our results indicate that cooperative rhythmic social-physical interaction has the potential to strengthen human-robot partnerships.

[BibTex]


Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings of the IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

pdf suppmat Poster Project Page [BibTex]


Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
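
The core idea lends itself to a compact sketch: convolve only the observed values, normalize by the number of observed inputs, and propagate the observation mask alongside. The numpy/scipy code below is our minimal illustration of such a layer, not the authors' implementation:

```python
# A minimal sketch of a sparsity-normalized convolution: the output at each
# pixel is the convolution over observed inputs divided by how many inputs
# were observed, making the response invariant to the sparsity level.
import numpy as np
from scipy.signal import convolve2d

def sparse_conv(x, mask, w, eps=1e-8):
    """x: HxW input, mask: HxW in {0,1} (1 = observed), w: kxk kernel."""
    ones = np.ones_like(w)
    num = convolve2d(x * mask, w, mode="same")   # convolve observed values only
    den = convolve2d(mask, ones, mode="same")    # count of observed inputs
    out = num / (den + eps)
    new_mask = (den > 0).astype(x.dtype)         # valid if any input was observed
    return out, new_mask

# Sanity check: a constant input gives the same output at 5% and 50% sparsity.
rng = np.random.default_rng(0)
x = np.full((32, 32), 2.0)
w = np.ones((3, 3))
for p in (0.05, 0.5):
    m = (rng.random((32, 32)) < p).astype(float)
    out, nm = sparse_conv(x, m, w)
    print(p, out[nm > 0].mean())                 # ~2.0 in both cases
```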

pdf suppmat Project Page [BibTex]


OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning-based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning-based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real-world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning-based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
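
For reference, the classical baseline the abstract mentions reduces to a per-voxel weighted running average of truncated signed distances. A minimal numpy sketch of that baseline (our toy 1D setup, not the authors' code):

```python
# Vanilla TSDF fusion in the style of Curless & Levoy (1996): each new depth
# observation is converted to a truncated signed distance field and folded
# into a per-voxel weighted running average.
import numpy as np

def fuse(tsdf, weight, new_tsdf, new_weight):
    """Weighted running average of truncated signed distances per voxel."""
    w = weight + new_weight
    fused = (weight * tsdf + new_weight * new_tsdf) / np.maximum(w, 1e-8)
    return fused, w

# Toy 1D "volume" along a single camera ray; the surface sits near x = 0.5.
xs = np.linspace(0.0, 1.0, 11)
trunc = 0.3
def tsdf_from_depth(depth):
    return np.clip(depth - xs, -trunc, trunc)  # truncated signed distance

tsdf, w = np.zeros_like(xs), np.zeros_like(xs)
for depth in (0.48, 0.52):                     # two noisy depth measurements
    tsdf, w = fuse(tsdf, w, tsdf_from_depth(depth), np.ones_like(xs))
# The zero crossing of the fused TSDF lies at ~0.5, the averaged surface.
```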

pdf Video 1 Video 2 Project Page [BibTex]


Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV), which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images, which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift and, at the same time, perform semi-dense 3D reconstruction with high accuracy.

pdf Project Page [BibTex]


A Robotic Framework to Overcome Sensory Overload in Children on the Autism Spectrum: A Pilot Study

Javed, H., Burns, R., Jeon, M., Howard, A., Park, C. H.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
This paper discusses a novel framework designed to provide sensory stimulation to children with Autism Spectrum Disorder (ASD). The setup consists of multi-sensory stations that stimulate the visual, auditory, olfactory, gustatory, tactile, and vestibular senses, together with a robotic agent that navigates through each station, responding to the different stimuli. We hypothesize that the robot’s responses will help children learn acceptable ways to respond to stimuli that might otherwise trigger sensory overload. Preliminary results from a pilot study conducted to examine the effectiveness of such a setup were encouraging and are described briefly in this text.

[BibTex]


An Interactive Robotic System for Promoting Social Engagement

Burns, R., Javed, H., Jeon, M., Howard, A., Park, C. H.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
This abstract (and poster) is a condensed version of Burns' Master's thesis and related journal article. It discusses the use of imitation via robotic motion learning to improve human-robot interaction. It focuses on the preliminary results from a pilot study of 12 subjects. We hypothesized that the robot's use of imitation will increase the user's openness towards engaging with the robot. Post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts. These results point to an increased user interest in engagement fueled by personalized imitation during interaction.

[BibTex]


Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes

Alhaija, H. A., Mustikovela, S. K., Mescheder, L., Geiger, A., Rother, C.

In Proceedings of the British Machine Vision Conference (BMVC), September 2017 (inproceedings)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance and a large number of complex object arrangements. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D shapes of the target object category. We demonstrate the utility of the proposed approach for training a state-of-the-art high-capacity deep model for semantic instance segmentation. In particular, we consider the task of segmenting car instances on the KITTI dataset which we have annotated with pixel-accurate ground truth. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or models trained on limited amounts of annotated real data.

pdf Project Page [BibTex]


Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, August 2017 (inproceedings)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.
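
The structure of the training procedure, as we read it from the abstract, can be sketched as follows: an auxiliary discriminator T(x, z) is trained to separate posterior samples from prior samples, and its logit stands in for the intractable log q(z|x) - log p(z) term of the ELBO. The PyTorch code below is a hedged structural sketch only; the network sizes, stand-in data, and hyperparameters are our placeholders:

```python
# Structural sketch of AVB-style training: a discriminator T(x, z) turns the
# KL term of the ELBO into a two-player game against an implicit encoder.
import torch
import torch.nn as nn

d_x, d_z = 4, 2
enc = nn.Sequential(nn.Linear(d_x + d_z, 64), nn.ReLU(), nn.Linear(64, d_z))  # z = enc(x, noise)
dec = nn.Sequential(nn.Linear(d_z, 64), nn.ReLU(), nn.Linear(64, d_x))        # reconstruction mean
T = nn.Sequential(nn.Linear(d_x + d_z, 64), nn.ReLU(), nn.Linear(64, 1))      # discriminator logit

opt_vae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_T = torch.optim.Adam(T.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(128, d_x)                      # stand-in data batch
for step in range(200):
    eps = torch.randn(x.size(0), d_z)
    z_q = enc(torch.cat([x, eps], dim=1))      # sample from implicit q(z|x)
    z_p = torch.randn_like(z_q)                # sample from prior p(z)

    # Discriminator: tell (x, z ~ q(z|x)) apart from (x, z ~ p(z)); at its
    # optimum, its logit equals log q(z|x) - log p(z).
    t_q = T(torch.cat([x, z_q.detach()], dim=1))
    t_p = T(torch.cat([x, z_p], dim=1))
    loss_T = bce(t_q, torch.ones_like(t_q)) + bce(t_p, torch.zeros_like(t_p))
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # VAE: reconstruction term plus T(x, z) as a stand-in for the KL term.
    recon = ((dec(z_q) - x) ** 2).mean()
    loss_vae = recon + T(torch.cat([x, z_q], dim=1)).mean()
    opt_vae.zero_grad(); loss_vae.backward(); opt_vae.step()
```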

pdf suppmat arxiv-version Project Page [BibTex]


Stiffness Perception during Pinching and Dissection with Teleoperated Haptic Forceps

Ng, C., Zareinia, K., Sun, Q., Kuchenbecker, K. J.

In Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 456-463, Lisbon, Portugal, August 2017 (inproceedings)

link (url) DOI [BibTex]


Learning local feature aggregation functions with backpropagation

Paschalidou, D., Katharopoulos, A., Diou, C., Delopoulos, A.

In 25th European Signal Processing Conference (EUSIPCO), IEEE, August 2017 (inproceedings)

Abstract
This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin.
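
The general recipe can be illustrated compactly: pick a differentiable aggregation function, compose it with the classifier loss, and let backpropagation update the aggregation parameters. The PyTorch sketch below uses a soft-assignment bag-of-words form with learnable codebook centers; this specific form and the toy data are our choices for illustration, not necessarily the parameterization used in the paper:

```python
# Learning an aggregation function end-to-end: gradients from the classifier
# loss flow through the soft-assignment histogram into the codebook centers.
import torch
import torch.nn as nn

K, D, C = 8, 16, 3                               # codewords, descriptor dim, classes
centers = nn.Parameter(torch.randn(K, D))        # aggregation parameters (learned)
clf = nn.Linear(K, C)                            # classifier on aggregated vectors
opt = torch.optim.Adam([centers] + list(clf.parameters()), lr=1e-2)

def aggregate(desc):                             # desc: N x D local descriptors
    dists = torch.cdist(desc, centers)           # N x K distances to codewords
    assign = torch.softmax(-dists, dim=1)        # soft assignment to codewords
    return assign.mean(dim=0)                    # K-dim image-level representation

# Toy batch: each "image" is a cloud of local descriptors with a class label.
for step in range(100):
    loss = 0.0
    for label in range(C):
        desc = torch.randn(32, D) + label        # class-dependent descriptors
        logits = clf(aggregate(desc))
        loss = loss + nn.functional.cross_entropy(logits.unsqueeze(0),
                                                  torch.tensor([label]))
    opt.zero_grad(); loss.backward(); opt.step() # gradients reach the centers
```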

pdf code poster link (url) DOI [BibTex]


Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data

Janai, J., Güney, F., Wulff, J., Black, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 1406-1416, IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. In addition, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur.

pdf suppmat Project Page Video DOI [BibTex]


OctNet: Learning Deep 3D Representations at High Resolutions

Riegler, G., Ulusoy, O., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows us to focus memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.

pdf suppmat Project Page Video [BibTex]


A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos

Schöps, T., Schönberger, J. L., Galliani, S., Sattler, T., Schindler, K., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Motivated by the limitations of existing multi-view stereo benchmarks, we present a novel dataset for this task. Towards this goal, we recorded a variety of indoor and outdoor scenes using a high-precision laser scanner and captured both high-resolution DSLR imagery as well as synchronized low-resolution stereo videos with varying fields-of-view. To align the images with the laser scans, we propose a robust technique which minimizes photometric errors conditioned on the geometry. In contrast to previous datasets, our benchmark provides novel challenges and covers a diverse set of viewpoints and scene types, ranging from natural scenes to man-made indoor and outdoor environments. Furthermore, we provide data at significantly higher temporal and spatial resolution. Our benchmark is the first to cover the important use case of hand-held mobile devices while also providing high-resolution DSLR camera images. We make our datasets and an online evaluation server available at http://www.eth3d.net.

pdf suppmat Project Page [BibTex]


Toroidal Constraints for Two Point Localization Under High Outlier Ratios

Camposeco, F., Sattler, T., Cohen, A., Geiger, A., Pollefeys, M.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Localizing a query image against a 3D model at large scale is a hard problem, since 2D-3D matches become more and more ambiguous as the model size increases. This creates a need for pose estimation strategies that can handle very low inlier ratios. In this paper, we draw new insights on the geometric information available from the 2D-3D matching process. As modern descriptors are not invariant against large variations in viewpoint, we are able to find the rays in space used to triangulate a given point that are closest to a query descriptor. It is well known that two correspondences constrain the camera to lie on the surface of a torus. Adding the knowledge of direction of triangulation, we are able to approximate the position of the camera from two matches alone. We derive a geometric solver that can compute this position in under 1 microsecond. Using this solver, we propose a simple yet powerful outlier filter which scales quadratically in the number of matches. We validate the accuracy of our solver and demonstrate the usefulness of our method in real world settings.

pdf suppmat Project Page [BibTex]


Semantic Multi-view Stereo: Jointly Estimating Objects and Voxels

Ulusoy, A. O., Black, M. J., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Piscataway, NJ, USA, July 2017 (inproceedings)

Abstract
Dense 3D reconstruction from RGB images is a highly ill-posed problem due to occlusions, textureless or reflective surfaces, as well as other challenges. We propose object-level shape priors to address these ambiguities. Towards this goal, we formulate a probabilistic model that integrates multi-view image evidence with 3D shape information from multiple objects. Inference in this model yields a dense 3D reconstruction of the scene as well as the existence and precise 3D pose of the objects in it. Our approach is able to recover fine details not captured in the input shapes while defaulting to the input models in occluded regions where image evidence is weak. Due to its probabilistic nature, the approach is able to cope with the approximate geometry of the 3D models as well as input shapes that are not present in the scene. We evaluate the approach quantitatively on several challenging indoor and outdoor datasets.

YouTube pdf suppmat Project Page [BibTex]


Locomotion of light-driven soft microrobots through a hydrogel via local melting

Palagi, S., Mark, A. G., Melde, K., Qiu, T., Zeng, H., Parmeggiani, C., Martella, D., Wiersma, D. S., Fischer, P.

In 2017 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), pages: 1-5, July 2017 (inproceedings)

Abstract
Soft mobile microrobots whose deformation can be directly controlled by an external field can adapt to move in different environments. This is the case for the light-driven microrobots based on liquid-crystal elastomers (LCEs). Here we show that the soft microrobots can move through an agarose hydrogel by means of light-controlled travelling-wave motions. This is achieved by exploiting the inherent rise of the LCE temperature above the melting temperature of the agarose gel, which facilitates penetration of the microrobot through the hydrogel. The locomotion performance is investigated as a function of the travelling-wave parameters, showing that effective propulsion can be obtained by adapting the generated motion to the specific environmental conditions.

DOI [BibTex]


Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy

Mohan, M., Mendonca, R., Johnson, M. J.

In Proceedings of the IEEE International Conference on Rehabilitation Robotics (ICORR), London, UK, July 2017 (inproceedings)

Abstract
Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. Through this paper, we present metrics that can be used to identify, using kinematics, whether a physical interaction occurred between two people. We present a simple Activity of Daily Living (ADL) task that involves such an interaction. We show that we can use these metrics to successfully identify interactions.

DOI [BibTex]


Design of a Parallel Continuum Manipulator for 6-DOF Fingertip Haptic Display

Young, E. M., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 599-604, Munich, Germany, June 2017, Finalist for best poster paper (inproceedings)

Abstract
Despite rapid advancements in the field of fingertip haptics, rendering tactile cues with six degrees of freedom (6 DOF) remains an elusive challenge. In this paper, we investigate the potential of displaying fingertip haptic sensations with a 6-DOF parallel continuum manipulator (PCM) that mounts to the user's index finger and moves a contact platform around the fingertip. Compared to traditional mechanisms composed of rigid links and discrete joints, PCMs have the potential to be strong, dexterous, and compact, but they are also more complicated to design. We define the design space of 6-DOF parallel continuum manipulators and outline a process for refining such a device for fingertip haptic applications. Following extensive simulation, we obtain 12 designs that meet our specifications, construct a manually actuated prototype of one such design, and evaluate the simulation's ability to accurately predict the prototype's motion. Finally, we demonstrate the range of deliverable fingertip tactile cues, including a normal force into the finger and shear forces tangent to the finger at three extreme points on the boundary of the fingertip.

DOI Project Page [BibTex]


High Magnitude Unidirectional Haptic Force Display Using a Motor/Brake Pair and a Cable

Hu, S., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 394-399, Munich, Germany, June 2017 (inproceedings)

Abstract
Clever electromechanical design is required to make the force feedback delivered by a kinesthetic haptic interface both strong and safe. This paper explores a one-dimensional haptic force display that combines a DC motor and a magnetic particle brake on the same shaft. Rather than a rigid linkage, a spooled cable connects the user to the actuators to enable a large workspace, reduce the moving mass, and eliminate the sticky residual force from the brake. This design combines the high torque/power ratio of the brake and the active output capabilities of the motor to provide a wider range of forces than can be achieved with either actuator alone. A prototype of this device was built, its performance was characterized, and it was used to simulate constant force sources and virtual springs and dampers. Compared to the conventional design of using only a motor, the hybrid device can output higher unidirectional forces at the expense of free space feeling less free.

DOI Project Page [BibTex]


A Stimulus-Response Model Of Therapist-Patient Interactions In Task-Oriented Stroke Therapy Can Guide Robot-Patient Interactions

Johnson, M., Mohan, M., Mendonca, R.

In Proceedings of the Annual Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) Conference, New Orleans, USA, June 2017 (inproceedings)

Abstract
Current robot-patient interactions do not accurately model therapist-patient interactions in task-oriented stroke therapy. We analyzed patient-therapist interactions in task-oriented stroke therapy captured in 8 videos. We developed a model of the interaction between a patient and a therapist that can be overlaid on a stimulus-response paradigm where the therapist and the patient take on a set of acting states or roles and are motivated to move from one role to another when certain physical or verbal stimuli or cues are sensed and received. We examined how the model varies across 8 activities of daily living tasks and mapped this to a possible model for robot-patient interaction.

link (url) [BibTex]


A Wrist-Squeezing Force-Feedback System for Robotic Surgery Training

Brown, J. D., Fernandez, J. N., Cohen, S. P., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 107-112, Munich, Germany, June 2017 (inproceedings)

Abstract
Over time, surgical trainees learn to compensate for the lack of haptic feedback in commercial robotic minimally invasive surgical systems. Incorporating touch cues into robotic surgery training could potentially shorten this learning process if the benefits of haptic feedback were sustained after it is removed. In this paper, we develop a wrist-squeezing haptic feedback system and evaluate whether it holds the potential to train novice da Vinci users to reduce the force they exert on a bimanual inanimate training task. Subjects were randomly divided into two groups according to a multiple baseline experimental design. Each of the ten participants moved a ring along a curved wire nine times while the haptic feedback was conditionally withheld, provided, and withheld again. The real-time tactile feedback of applied force magnitude significantly reduced the integral of the force produced by the da Vinci tools on the task materials, and this result remained even when the haptic feedback was removed. Overall, our findings suggest that wrist-squeezing force feedback can play an essential role in helping novice trainees learn to minimize the force they exert with a surgical robot.

DOI [BibTex]


Handling Scan-Time Parameters in Haptic Surface Classification

Burka, A., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 424-429, Munich, Germany, June 2017 (inproceedings)

DOI Project Page [BibTex]


Proton 2: Increasing the Sensitivity and Portability of a Visuo-haptic Surface Interaction Recorder

Burka, A., Rajvanshi, A., Allen, S., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 439-445, Singapore, May 2017 (inproceedings)

Abstract
The Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short) is a new handheld visuo-haptic sensing system that records surface interactions. We previously demonstrated system calibration and a classification task using external motion tracking. This paper details improvements in surface classification performance and removal of the dependence on external motion tracking, necessary before embarking on our goal of gathering a vast surface interaction dataset. Two experiments were performed to refine data collection parameters. After adjusting the placement and filtering of the Proton's high-bandwidth accelerometers, we recorded interactions between two differently-sized steel tooling ball end-effectors (diameter 6.35 and 9.525 mm) and five surfaces. Using features based on normal force, tangential force, end-effector speed, and contact vibration, we trained multi-class SVMs to classify the surfaces using 50 ms chunks of data from each end-effector. Classification accuracies of 84.5% and 91.5% respectively were achieved on unseen test data, an improvement over prior results. In parallel, we pursued on-board motion tracking, using the Proton's camera and fiducial markers. Motion tracks from the external and onboard trackers agree within 2 mm and 0.01 rad RMS, and the accuracy decreases only slightly to 87.7% when using onboard tracking for the 9.525 mm end-effector. These experiments indicate that the Proton 2 is ready for portable data collection.
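
The classification recipe described above maps onto a standard pipeline: one feature row per 50 ms chunk, a multi-class SVM, and a held-out test split. The scikit-learn sketch below uses stand-in random data in place of the real force/speed/vibration features; the real feature extraction and tuning are the authors':

```python
# Hedged sketch of multi-class SVM surface classification over per-chunk
# features (normal force, tangential force, end-effector speed, vibration).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))          # one row per 50 ms chunk (stand-in values)
y = rng.integers(0, 5, size=1000)       # one of five surfaces

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))  # one-vs-one multi-class
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))          # chance level here; real features separate surfaces
```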

DOI Project Page [BibTex]


Robot Therapist for Assisting in At-Home Rehabilitation of Shoulder Surgery Patients

(Recipient of Innovation & Entrepreneurship Prize)

Burns, R., Alborz, M., Chalup, Z., Downen, S., Genuino, K., Nayback, C., Nesbitt, N., Park, C. H.

In 2017 GW Research Days, Department of Biomedical Engineering Posters and Presentations, April 2017 (inproceedings)

Abstract
The number of middle-aged to elderly patients receiving shoulder surgery is increasing. However, statistically, very few of these patients perform the necessary at-home physical therapy regimen they are prescribed post-surgery. This results in longer recovery times and/or incomplete healing. We propose the use of a robotic therapist, with customized training and encouragement regimens, to increase physical therapy adherence and improve the patient’s recovery experience.

link (url) [BibTex]


Motion Learning for Emotional Interaction and Imitation of Children with Autism Spectrum Disorder

(First place tie in category, "Biomedical Engineering, Graduate Research")

Burns, R., Cowin, S.

In 2017 GW Research Days, Department of Biomedical Engineering Posters and Presentations, April 2017 (inproceedings)

Abstract
We aim to use motion learning to teach a robot to imitate people's unique gestures. Our robot, ROBOTIS-OP2, can ultimately use imitation to practice social skills with children with autism. In this abstract, two methods of motion learning were compared: dynamic motion primitives with weighted least squares (DMP with WLS) and dynamic motion primitives with Gaussian mixture regression (DMP with GMR). Movements with sharp turns were most accurately reproduced using DMP with GMR. Additionally, more states are required to accurately recreate more complex gestures.

link (url) [BibTex]


Wireless micro-robots for endoscopic applications in urology

Adams, F., Qiu, T., Mark, A. G., Melde, K., Palagi, S., Miernik, A., Fischer, P.

In Eur Urol Suppl, 16(3):e1914, March 2017 (inproceedings)

Abstract
Endoscopy is an essential and common method for both diagnostics and therapy in urology. Current flexible endoscopes are normally cable-driven; they are therefore hard to miniaturize, and their reachability is restricted, as only one bending section near the tip with one degree of freedom (DoF) is allowed. Recent progress in micro-robotics offers a unique opportunity for medical inspections in minimally invasive surgery. Micro-robots are active devices that have a feature size smaller than one millimeter and can normally be actuated and controlled wirelessly. Magnetically actuated micro-robots have been demonstrated to propel through biological fluids. Here, we report a novel micro robotic arm, which is actuated wirelessly by ultrasound. It works as a miniaturized endoscope with a side length of ~1 mm, which fits through the 3 Fr. tool channel of a cystoscope, and successfully performs an active cystoscopy in a rabbit bladder.

link (url) DOI [BibTex]


SurfaceNet: An End-to-End 3D Neural Network for Multiview Stereopsis

Ji, M., Gall, J., Zheng, H., Liu, Y., Fang, L.

In IEEE International Conference on Computer Vision (ICCV), IEEE Computer Society, 2017 (inproceedings)

Abstract
This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency and geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark. Code is available at https://github.com/mjiUST/SurfaceNet

link (url) [BibTex]


Roughness perception of virtual textures displayed by electrovibration on touch screens

Vardar, Y., Isleyen, A., Saleem, M. K., Basdogan, C.

In 2017 IEEE World Haptics Conference (WHC), pages: 263-268, 2017 (inproceedings)

Abstract
In this study, we have investigated the human roughness perception of periodic textures on an electrostatic display by conducting psychophysical experiments with 10 subjects. To generate virtual textures, we used low-frequency unipolar pulse waves with different waveforms (sinusoidal, square, sawtooth, triangle) and spacings. We modulated these waves with a 3 kHz high-frequency sinusoidal carrier signal to minimize perceptual differences due to the electrical filtering of the human finger and to eliminate low-frequency distortions. The subjects were asked to rate 40 different macro textures on a Likert scale of 1-7. We also collected the normal and tangential forces acting on the fingers of subjects during the experiment. The results of our user study showed that subjects perceived the square wave as the roughest while they perceived the other waveforms as equally rough. The perceived roughness followed an inverted U-shaped curve as a function of groove width, but the peak point shifted to the left compared to the results of earlier studies. Moreover, we found that the roughness perception of subjects is best correlated with the rate of change of the contact forces rather than with the forces themselves.
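
The stimulus construction described above (a low-frequency unipolar pulse wave amplitude-modulating a 3 kHz sinusoidal carrier) can be sketched in a few lines of numpy; the sampling rate and pulse parameters below are illustrative, not the study's values:

```python
# Sketch of the stimulus: a 3 kHz sinusoidal carrier amplitude-modulated by a
# low-frequency unipolar square-wave envelope.
import numpy as np

fs = 48_000                                  # sample rate (Hz), our choice
t = np.arange(0, 0.5, 1 / fs)                # 0.5 s of signal
f_pulse, duty = 20.0, 0.5                    # envelope: 20 Hz unipolar square, 50% duty
envelope = ((t * f_pulse) % 1.0 < duty).astype(float)  # in {0, 1}, never negative
carrier = np.sin(2 * np.pi * 3000.0 * t)     # 3 kHz carrier
stimulus = envelope * carrier                # amplitude-modulated drive signal
```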

vardar_whc2017 DOI [BibTex]


Feeling multiple edges: The tactile perception of short ultrasonic square reductions of the finger-surface friction

Gueorguiev, D., Vezzoli, E., Sednaoui, T., Grisoni, L., Lemaire-Semail, B.

In 2017 IEEE World Haptics Conference (WHC), pages: 125-129, 2017 (inproceedings)

DOI [BibTex]

2015


Exploiting Object Similarity in 3D Reconstruction

Zhou, C., Güney, F., Wang, Y., Geiger, A.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
Despite recent progress, reconstructing outdoor scenes in 3D from movable platforms remains a highly difficult endeavor. Challenges include low frame rates, occlusions, large distortions and difficult lighting conditions. In this paper, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape will occur in the scene. This is particularly true for outdoor scenes where buildings and vehicles often suffer from missing texture or reflections, but share similarity in 3D shape. We take advantage of this shape similarity by locating objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This allows us to reduce noise while completing missing surfaces as objects of similar shape benefit from all observations for the respective category. We evaluate our approach with respect to LIDAR ground truth on a novel challenging suburban dataset and show its advantages over the state-of-the-art.

pdf suppmat [BibTex]


FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation

Lenz, P., Geiger, A., Urtasun, R.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.
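
For context, the batch min-cost-flow formulation that this paper builds its bounded online solver on can be sketched on a toy two-frame example: each detection becomes a unit-capacity edge with a negative (reward) cost, and unit flows from source to sink correspond to trajectories. The networkx sketch below uses made-up costs and is our illustration of the standard formulation, not the paper's dynamic solver:

```python
# Tracking-by-detection as min-cost flow on a toy two-frame example.
import networkx as nx

G = nx.DiGraph()
k = 2                                        # number of trajectories to extract
G.add_node("S", demand=-k)
G.add_node("T", demand=k)

dets = {"a1": 0, "a2": 0, "b1": 1, "b2": 1}  # detection -> frame index
for d in dets:
    G.add_edge(f"{d}_in", f"{d}_out", weight=-5, capacity=1)  # reward for using a detection
    G.add_edge("S", f"{d}_in", weight=2, capacity=1)          # track birth cost
    G.add_edge(f"{d}_out", "T", weight=2, capacity=1)         # track death cost
# Transition costs between consecutive frames (e.g., from appearance/motion).
G.add_edge("a1_out", "b1_in", weight=1, capacity=1)
G.add_edge("a1_out", "b2_in", weight=4, capacity=1)
G.add_edge("a2_out", "b1_in", weight=4, capacity=1)
G.add_edge("a2_out", "b2_in", weight=1, capacity=1)

flow = nx.min_cost_flow(G)                   # network simplex; unit flows = tracks
links = [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0]
print(links)                                 # chains S->a1->b1->T and S->a2->b2->T
```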

pdf suppmat video project [BibTex]


Towards Probabilistic Volumetric Reconstruction using Ray Potentials

(Best Paper Award)

Ulusoy, A. O., Geiger, A., Black, M. J.

In Proceedings of the International Conference on 3D Vision (3DV), pages: 10-18, Lyon, October 2015 (inproceedings)

Abstract
This paper presents a novel probabilistic foundation for volumetric 3-d reconstruction. We formulate the problem as inference in a Markov random field, which accurately captures the dependencies between the occupancy and appearance of each voxel, given all input images. Our main contribution is an approximate highly parallelized discrete-continuous inference algorithm to compute the marginal distributions of each voxel's occupancy and appearance. In contrast to the MAP solution, marginals encode the underlying uncertainty and ambiguity in the reconstruction. Moreover, the proposed algorithm allows for a Bayes optimal prediction with respect to a natural reconstruction loss. We compare our method to two state-of-the-art volumetric reconstruction algorithms on three challenging aerial datasets with LIDAR ground truth. Our experiments demonstrate that the proposed algorithm compares favorably in terms of reconstruction accuracy and the ability to expose reconstruction uncertainty.

code YouTube pdf suppmat DOI Project Page [BibTex]


3D-printed Soft Microrobot for Swimming in Biological Fluids

Qiu, T., Palagi, S., Fischer, P.

In Conf. Proc. IEEE Eng. Med. Biol. Soc., pages: 4922-4925, August 2015 (inproceedings)

Abstract
Microscopic artificial swimmers hold the potential to enable novel non-invasive medical procedures. In order to ease their translation towards real biomedical applications, simpler designs as well as cheaper yet more reliable materials and fabrication processes should be adopted, provided that the functionality of the microrobots can be kept. A simple single-hinge design could already enable microswimming in non-Newtonian fluids, which most bodily fluids are. Here, we address the fabrication of such single-hinge microrobots with a 3D-printed soft material. Firstly, a finite element model is developed to investigate the deformability of the 3D-printed microstructure under typical values of the actuating magnetic fields. Then the microstructures are fabricated by direct 3D-printing of a soft material and their swimming performances are evaluated. The speeds achieved with the 3D-printed microrobots are comparable to those obtained in previous work with complex fabrication procedures, thus showing great promise for 3D-printed microrobots to be operated in biological fluids.

link (url) DOI [BibTex]


Assessing human-human therapy kinematics for retargeting to human-robot therapy

Johnson, M. J., Christopher, S. M., Mohan, M., Mendonca, R.

In Proceedings of the IEEE International Conference on Rehabilitation Robotics (ICORR), Singapore, August 2015 (inproceedings)

Abstract
In this paper, we present experiments examining the accuracy of data collected from a Kinect sensor for capturing close interactive actions of a therapist with a patient during stroke rehabilitation. Our long-term goal is to map human-human interactions such as these patient-therapist ones onto human-robot interactions. In many robot interaction scenarios, the robot does not mimic interaction between two or more humans, which is a major part of stroke therapy. The Kinect works for functional tasks such as a reaching task, where the interaction to be retargeted by the robot is minimal to none, though the data are not good for a functional task involving touching another person. We demonstrate that the noisy data from the Kinect do not produce a system robust enough for remapping a therapist's movements to a humanoid robot when in contact with a person.

DOI [BibTex]


Displets: Resolving Stereo Ambiguities using Object Knowledge

Güney, F., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 4165-4175, June 2015 (inproceedings)

Abstract
Stereo techniques have witnessed tremendous progress over the last decades, yet some aspects of the problem still remain challenging today. Striking examples are reflecting and textureless surfaces which cannot easily be recovered using traditional local regularizers. In this paper, we therefore propose to regularize over larger distances using object-category specific disparity proposals (displets) which we sample using inverse graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The proposed displets encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as non-local regularizer for the challenging object class 'car' into a superpixel based CRF framework and demonstrate its benefits on the KITTI stereo evaluation.

pdf abstract suppmat [BibTex]


Toward a large-scale visuo-haptic dataset for robotic learning

Burka, A., Hu, S., Krishnan, S., Kuchenbecker, K. J., Hendricks, L. A., Gao, Y., Darrell, T.

In Proc. CVPR Workshop on the Future of Datasets in Vision, 2015 (inproceedings)

Project Page [BibTex]


Object Scene Flow for Autonomous Vehicles

Menze, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3061-3070, IEEE, June 2015 (inproceedings)

Abstract
This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which can't be handled by existing methods.

pdf abstract suppmat DOI [BibTex]


Detecting Lumps in Simulated Tissue via Palpation with a BioTac

Hui, J., Block, A., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, 2015, Work-in-progress paper. Poster presentation given by Hui (inproceedings)

[BibTex]


Analysis of the Instrument Vibrations and Contact Forces Caused by an Expert Robotic Surgeon Doing FRS Tasks

Brown, J. D., O’Brien, C., Miyasaka, K., Dumon, K. R., Kuchenbecker, K. J.

In Proc. Hamlyn Symposium on Medical Robotics, pages: 75-76, London, England, June 2015, Poster presentation given by Brown (inproceedings)

[BibTex]


Should Haptic Texture Vibrations Respond to User Force and Speed?

Culbertson, H., Kuchenbecker, K. J.

In IEEE World Haptics Conference, pages: 106 - 112, Evanston, Illinois, USA, June 2015, Oral presentation given by Culbertson (inproceedings)

[BibTex]


Enabling the Baxter Robot to Play Hand-Clapping Games

Fitter, N. T., Neuburger, M., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, June 2015, Work-in-progress paper. Poster presentation given by Fitter (inproceedings)

[BibTex]


Using IMU Data to Teach a Robot Hand-Clapping Games

Fitter, N. T., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 353-355, April 2015, Work-in-progress paper. Poster presentation given by Fitter (inproceedings)

[BibTex]


Haptic Feedback in Transoral Robotic Surgery: A Feasibility Study

Bur, A. M., Gomez, E. D., Rassekh, C. H., Newman, J. G., Weinstein, G. S., Kuchenbecker, K. J.

In Proc. Annual Meeting of the Triological Society at COSM, April 2015, Poster presentation given by Bur (inproceedings)

[BibTex]


Human Machine Interface for Dexto:Eka: - The Humanoid Robot

Kumra, S., Mohan, M., Gupta, S., Vaswani, H.

In Proceedings of the IEEE International Conference on Robotics, Automation, Control and Embedded Systems (RACE), Chennai, India, February 2015 (inproceedings)

Abstract
This paper illustrates the hybrid control system of the humanoid robot Dexto:Eka:, focusing on the dependent or slave mode. The efficiency of any system depends on the fluid operation of its control system. Here, we elucidate the control of the 12-DoF robotic arms and an omnidirectional mecanum wheel drive using an exo-frame, a graphical user interface (GUI), and a control column. This paper comprises the algorithms, control mechanisms, and overall flow of execution for the regulation of the robotic arms, the graphical user interface, and locomotion.

DOI [BibTex]


Conception and development of Dexto:Eka: The Humanoid Robot - Part IV

Kumra, S., Mohan, M., Vaswani, H., Gupta, S.

In Proceedings of the IEEE International Conference on Robotics, Automation, Control and Embedded Systems (RACE), February 2015 (inproceedings)

Abstract
This paper elucidates the fourth phase of the development of `Dexto:Eka: - The Humanoid Robot'. It lays special emphasis on the conception of the locomotion drive and the development of a vision-based system that aids navigation and tele-operation. The first three phases concluded with the completion of two robotic arms with six degrees of freedom each, the structural development, and the creation of a human machine interface that included an exo-frame, a control column, and a graphical user interface. This phase also involved the enhancement of the exo-frame to a vision-based system using a Kinect camera. The paper also focuses on the reasons behind choosing the locomotion drive and the benefits it offers.

DOI [BibTex]


Design and Validation of a Practical Simulator for Transoral Robotic Surgery

Bur, A. M., Gomez, E. D., Chalian, A. A., Newman, J. G., Weinstein, G. S., Kuchenbecker, K. J.

In Proc. Society for Robotic Surgery Annual Meeting: Transoral Program, (T8), February 2015, Oral presentation given by Bur (inproceedings)

[BibTex]


Joint 3D Object and Layout Inference from a single RGB-D Image

(Best Paper Award)

Geiger, A., Wang, C.

In German Conference on Pattern Recognition (GCPR), 9358, pages: 183-195, Lecture Notes in Computer Science, Springer International Publishing, 2015 (inproceedings)

Abstract
Inferring 3D objects and the layout of indoor scenes from a single RGB-D image captured with a Kinect camera is a challenging task. Towards this goal, we propose a high-order graphical model and jointly reason about the layout, objects and superpixels in the image. In contrast to existing holistic approaches, our model leverages detailed 3D geometry using inverse graphics and explicitly enforces occlusion and visibility constraints for respecting scene properties and projective geometry. We cast the task as MAP inference in a factor graph and solve it efficiently using message passing. We evaluate our method with respect to several baselines on the challenging NYUv2 indoor dataset using 21 object categories. Our experiments demonstrate that the proposed method is able to infer scenes with a large degree of clutter and occlusions.

pdf suppmat video project DOI [BibTex]


Discrete Optimization for Optical Flow

Menze, M., Heipke, C., Geiger, A.

In German Conference on Pattern Recognition (GCPR), 9358, pages: 16-28, Springer International Publishing, 2015 (inproceedings)

Abstract
We propose to look at large-displacement optical flow from a discrete point of view. Motivated by the observation that sub-pixel accuracy is easily obtained given pixel-accurate optical flow, we conjecture that computing the integral part is the hardest piece of the problem. Consequently, we formulate optical flow estimation as a discrete inference problem in a conditional random field, followed by sub-pixel refinement. Naive discretization of the 2D flow space, however, is intractable due to the resulting size of the label set. In this paper, we therefore investigate three different strategies, each able to reduce computation and memory demands by several orders of magnitude. Their combination allows us to estimate large-displacement optical flow both accurately and efficiently and demonstrates the potential of discrete optimization for optical flow. We obtain state-of-the-art performance on MPI Sintel and KITTI.

pdf suppmat project DOI [BibTex]