

2017


The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the conference "Neural Information Processing Systems 2017", (Editors: Guyon, I. and von Luxburg, U. and Bengio, S. and Wallach, H. and Fergus, R. and Vishwanathan, S. and Garnett, R.), Curran Associates, Inc., Advances in Neural Information Processing Systems 30 (NIPS), December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
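The failure mode described above can be reproduced in a few lines. A minimal numpy sketch on an assumed toy zero-sum game v(φ, θ) = φθ (not the paper's experimental setup): the Jacobian of the simultaneous-gradient vector field has purely imaginary eigenvalues, so plain simultaneous gradient updates rotate around the equilibrium instead of converging.

    import numpy as np

    # Toy zero-sum game v(phi, theta) = phi * theta (assumed illustrative example).
    # Simultaneous gradient descent/ascent follows the vector field
    #   F(phi, theta) = (-dv/dphi, dv/dtheta) = (-theta, phi).
    def gradient_field(w):
        phi, theta = w
        return np.array([-theta, phi])

    def numerical_jacobian(F, w, eps=1e-6):
        # Central-difference Jacobian of the gradient vector field at w.
        n = w.size
        J = np.zeros((n, n))
        for j in range(n):
            d = np.zeros(n)
            d[j] = eps
            J[:, j] = (F(w + d) - F(w - d)) / (2 * eps)
        return J

    J = numerical_jacobian(gradient_field, np.array([1.0, 1.0]))
    print(np.linalg.eigvals(J))  # approx. [0.+1.j, 0.-1.j]: zero real part, nonzero imaginary part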

avg

pdf Project Page [BibTex]

Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

avg

pdf suppmat Poster Project Page [BibTex]

Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
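The layer described above is straightforward to prototype: convolve only the valid inputs and renormalize by the number of valid pixels under each kernel window. A minimal PyTorch sketch under assumed tensor shapes (function name and interface are illustrative, not the authors' code):

    import torch
    import torch.nn.functional as F

    def sparse_conv2d(x, mask, weight, bias=None, eps=1e-8):
        """Normalized convolution over sparse inputs (sketch).
        x: (N, C, H, W) input with missing values set to 0,
        mask: (N, 1, H, W) binary validity mask,
        weight: (Cout, C, k, k) convolution kernel."""
        k = weight.shape[-1]
        pad = k // 2
        num = F.conv2d(x * mask, weight, padding=pad)            # sum over valid inputs only
        ones = torch.ones(1, 1, k, k, device=x.device, dtype=x.dtype)
        den = F.conv2d(mask, ones, padding=pad)                  # number of valid inputs per window
        out = num / (den + eps)
        if bias is not None:
            out = out + bias.view(1, -1, 1, 1)
        new_mask = (den > 0).to(x.dtype)                         # propagate validity to the next layer
        return out, new_mask

Propagating the returned mask lets deeper layers keep track of which outputs are actually supported by observations.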

avg

pdf suppmat Project Page Project Page [BibTex]

OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
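For context, the volumetric fusion baseline mentioned above (TSDF averaging in the style of Curless and Levoy) reduces to a per-voxel weighted running average; a short numpy sketch with assumed variable names:

    import numpy as np

    def fuse_tsdf(tsdf, weight, new_tsdf, new_weight):
        """One fusion step of the classic TSDF-averaging baseline: per-voxel
        weighted running average of truncated signed distances.
        tsdf, weight: current volumes; new_tsdf, new_weight: values from the
        next depth map (new_weight is 0 where a voxel is unobserved)."""
        total = weight + new_weight
        fused = np.where(total > 0,
                         (weight * tsdf + new_weight * new_tsdf) / np.maximum(total, 1e-8),
                         tsdf)
        return fused, total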

avg

pdf Video 1 Video 2 Project Page Project Page [BibTex]

Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV) which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy.
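The tracking thread's direct alignment objective can be sketched schematically; `unproject`, `project`, `transform` and `sample` below are assumed placeholder callables for the fisheye camera model, the candidate pose and bilinear image lookup, not the authors' API:

    def photometric_residuals(I_ref, I_cur, pixels, depths, transform, unproject, project, sample):
        """Semi-dense direct image alignment (sketch): lift selected reference
        pixels to 3D with their depth estimates, move them by the candidate
        camera motion, reproject into the current fisheye image and compare
        intensities."""
        residuals = []
        for (u, v), d in zip(pixels, depths):
            X_ref = unproject(u, v, d)        # back-project with the fisheye model
            X_cur = transform(X_ref)          # candidate reference-to-current motion
            u_c, v_c = project(X_cur)         # reproject into the current image
            residuals.append(sample(I_cur, u_c, v_c) - sample(I_ref, u, v))
        return residuals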

avg

pdf Project Page [BibTex]

Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes

Alhaija, H. A., Mustikovela, S. K., Mescheder, L., Geiger, A., Rother, C.

In Proceedings of the British Machine Vision Conference 2017, Proceedings of the British Machine Vision Conference, September 2017 (inproceedings)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D shapes of the target object category. We demonstrate the utility of the proposed approach for training a state-of-the-art high-capacity deep model for semantic instance segmentation. In particular, we consider the task of segmenting car instances on the KITTI dataset which we have annotated with pixel-accurate ground truth. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or models trained on limited amounts of annotated real data.
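At its core, the augmentation is alpha-compositing of rendered target-category objects over real images; a minimal numpy sketch (array shapes are assumptions for illustration), with the rendered alpha mask doubling as the instance label:

    import numpy as np

    def augment(background, render_rgb, render_alpha):
        """Composite a rendered virtual object over a real background image.
        background, render_rgb: (H, W, 3) floats in [0, 1];
        render_alpha: (H, W) object mask produced by the renderer."""
        a = render_alpha[..., None]
        composite = a * render_rgb + (1.0 - a) * background
        instance_mask = render_alpha > 0.5   # free pixel-accurate label for the pasted instance
        return composite, instance_mask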

avg

pdf Project Page [BibTex]

Swimming in low Reynolds numbers using planar and helical flagellar waves

Khalil, I. S. M., Tabak, A. F., Seif, M. A., Klingner, A., Adel, B., Sitti, M.

In International Conference on Intelligent Robots and Systems (IROS) 2017, pages: 1907-1912, International Conference on Intelligent Robots and Systems, September 2017 (inproceedings)

Abstract
In travelling towards the oviducts, sperm cells undergo transitions between planar and helical flagellar propulsion by a beating tail based on the viscosity of the environment. In this work, we aim to model and mimic this behaviour in low Reynolds number fluids using externally actuated soft robotic sperms. We numerically investigate the effects of the transition between planar and helical flagellar propulsion on the swimming characteristics of the robotic sperm using a model based on resistive-force theory to study the role of viscous forces on its flexible tail. Experimental results are obtained using robots that contain magnetic particles within the polymer matrix of their heads and an ultra-thin flexible tail. The planar and helical flagellar propulsion are achieved using in-plane and out-of-plane uniform fields with sinusoidally varying components, respectively. We experimentally show that the swimming speed of the robotic sperm increases by a factor of 1.4 (fluid viscosity 5 Pa·s) when it undergoes a controlled transition between planar and helical flagellar propulsion, at relatively low actuation frequencies.
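The resistive-force-theory model referred to above assigns anisotropic drag to each tail segment; a minimal numpy sketch of the per-segment force (segment kinematics and the drag coefficients c_t, c_n are assumed inputs):

    import numpy as np

    def rft_segment_force(v, t_hat, c_t, c_n):
        """Resistive-force-theory drag per unit length on one flagellum segment:
        split the segment velocity v into tangential and normal components with
        respect to the unit tangent t_hat and scale by the anisotropic drag
        coefficients c_t (tangential) and c_n (normal)."""
        v_t = np.dot(v, t_hat) * t_hat
        v_n = v - v_t
        return -(c_t * v_t + c_n * v_n)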

pi

DOI [BibTex]

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the 34th International Conference on Machine Learning, 70, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (inproceedings)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.
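A compact sketch of the resulting two-player objective, with assumed module interfaces (encoder(x, eps) -> z, decoder(z) -> reconstruction logits, discriminator(x, z) -> logit T(x, z)); this is an illustration of the idea, not the authors' implementation:

    import torch
    import torch.nn.functional as F

    def avb_losses(x, encoder, decoder, discriminator, latent_dim):
        """One-step loss sketch of Adversarial Variational Bayes (assumed interfaces)."""
        eps = torch.randn(x.size(0), latent_dim, device=x.device)
        z_q = encoder(x, eps)                  # sample from the black-box inference model q(z|x)
        z_p = torch.randn_like(z_q)            # sample from the prior p(z)

        # Auxiliary discriminator: distinguish (x, z ~ q(z|x)) from (x, z ~ p(z)).
        t_q, t_p = discriminator(x, z_q.detach()), discriminator(x, z_p)
        loss_disc = F.binary_cross_entropy_with_logits(t_q, torch.ones_like(t_q)) \
                  + F.binary_cross_entropy_with_logits(t_p, torch.zeros_like(t_p))

        # At optimality T*(x, z) = log q(z|x) - log p(z), so T stands in for the KL term.
        recon = F.binary_cross_entropy_with_logits(decoder(z_q), x, reduction='none').sum(-1).mean()
        loss_vae = discriminator(x, z_q).mean() + recon
        return loss_disc, loss_vae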

avg

pdf suppmat Project Page arxiv-version Project Page [BibTex]

An XY θz flexure mechanism with optimal stiffness properties

Lum, G. Z., Pham, M. T., Teo, T. J., Yang, G., Yeo, S. H., Sitti, M.

In 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pages: 1103-1110, July 2017 (inproceedings)

Abstract
The development of optimal XY θz flexure mechanisms, which can deliver high precision motion about the z-axis and along the x- and y-axes, is highly desirable for a wide range of micro/nano-positioning tasks pertaining to biomedical research, microscopy technologies and various industrial applications. Although maximizing the stiffness ratios is a very critical design requirement, the achievable translational and rotational stiffness ratios of existing XY θz flexure mechanisms are still restricted between 0.5 and 130. As a result, these XY θz flexure mechanisms are unable to fully optimize their workspace and capabilities to reject disturbances. Here, we present an optimal XY θz flexure mechanism, which is designed to have maximum stiffness ratios. Based on finite element analysis (FEA), it has a translational stiffness ratio of 248, a rotational stiffness ratio of 238 and a large workspace of 2.50 mm × 2.50 mm × 10°. Despite having such a large workspace, FEA also predicts that the proposed mechanism can still achieve a high bandwidth of 70 Hz. In comparison, the bandwidth of similar existing flexure mechanisms that can deflect more than 0.5 mm or 0.5° is typically less than 45 Hz. Hence, the high stiffness ratios of the proposed mechanism are achieved without compromising its dynamic performance. Preliminary experimental results pertaining to the mechanism's translational actuating stiffness and bandwidth were in agreement with the FEA predictions as the deviation was within 10%. In conclusion, the proposed flexure mechanism exhibits superior performance and can be used across a wide range of applications.

pi

DOI [BibTex]

Positioning of drug carriers using permanent magnet-based robotic system in three-dimensional space

Khalil, I. S. M., Alfar, A., Tabak, A. F., Klingner, A., Stramigioli, S., Sitti, M.

In 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pages: 1117-1122, July 2017 (inproceedings)

Abstract
Magnetic control of drug carriers using systems with open-configurations is essential to enable scaling to the size of in vivo applications. In this study, we demonstrate motion control of paramagnetic microparticles in a low Reynolds number fluid, using a permanent magnet-based robotic system with an open-configuration. The microparticles are controlled in three-dimensional (3D) space using a cylindrical NdFeB magnet that is fixed to the end-effector of a robotic arm. We develop a kinematic map between the position of the microparticles and the configuration of the robotic arm, and use this map as a basis of a closed-loop control system based on the position of the microparticles. Our experimental results show the ability of the robot configuration to control the exerted field gradient on the dipole of the microparticles, and achieve positioning in 3D space with maximum error of 300 µm and 600 µm in the steady-state during setpoint and trajectory tracking, respectively.

pi

DOI [BibTex]

Self-assembly of micro/nanosystems across scales and interfaces

Mastrangeli, M.

In 2017 19th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS), pages: 676 - 681, IEEE, July 2017 (inproceedings)

Abstract
Steady progress in understanding and implementation is establishing self-assembly as a versatile, parallel and scalable approach to the fabrication of transducers. In this contribution, I illustrate the principles and reach of self-assembly with three applications at different scales - namely, the capillary self-alignment of millimetric components, the sealing of liquid-filled polymeric microcapsules, and the accurate capillary assembly of single nanoparticles - and propose foreseeable directions for further developments.

pi

link (url) DOI [BibTex]

Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data

Janai, J., Güney, F., Wulff, J., Black, M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages: 1406-1416, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. Besides, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur.
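One ingredient, chaining many small inter-frame motions into a single reference flow, can be sketched in a few lines (a simplification: the actual pipeline additionally reasons about occlusions over multiple frames and densifies the result):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def compose_flows(flows):
        """Compose per-frame small flows into one long-range flow.
        flows: list of (H, W, 2) arrays, where flows[t] maps frame t -> t+1."""
        H, W, _ = flows[0].shape
        ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
        total = np.zeros((H, W, 2))
        for f in flows:
            # Look up the next small flow at the currently displaced positions.
            px, py = xs + total[..., 0], ys + total[..., 1]
            dx = map_coordinates(f[..., 0], [py, px], order=1, mode='nearest')
            dy = map_coordinates(f[..., 1], [py, px], order=1, mode='nearest')
            total[..., 0] += dx
            total[..., 1] += dy
        return total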

avg ps

pdf suppmat Project page Video DOI Project Page [BibTex]

OctNet: Learning Deep 3D Representations at High Resolutions

Riegler, G., Ulusoy, O., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows us to focus memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.

avg ps

pdf suppmat Project Page Video Project Page [BibTex]

A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos

Schöps, T., Schönberger, J. L., Galliani, S., Sattler, T., Schindler, K., Pollefeys, M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Motivated by the limitations of existing multi-view stereo benchmarks, we present a novel dataset for this task. Towards this goal, we recorded a variety of indoor and outdoor scenes using a high-precision laser scanner and captured both high-resolution DSLR imagery as well as synchronized low-resolution stereo videos with varying fields-of-view. To align the images with the laser scans, we propose a robust technique which minimizes photometric errors conditioned on the geometry. In contrast to previous datasets, our benchmark provides novel challenges and covers a diverse set of viewpoints and scene types, ranging from natural scenes to man-made indoor and outdoor environments. Furthermore, we provide data at significantly higher temporal and spatial resolution. Our benchmark is the first to cover the important use case of hand-held mobile devices while also providing high-resolution DSLR camera images. We make our datasets and an online evaluation server available at http://www.eth3d.net.

avg

pdf suppmat Project Page Project Page [BibTex]

Toroidal Constraints for Two Point Localization Under High Outlier Ratios

Camposeco, F., Sattler, T., Cohen, A., Geiger, A., Pollefeys, M.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Localizing a query image against a 3D model at large scale is a hard problem, since 2D-3D matches become more and more ambiguous as the model size increases. This creates a need for pose estimation strategies that can handle very low inlier ratios. In this paper, we draw new insights on the geometric information available from the 2D-3D matching process. As modern descriptors are not invariant against large variations in viewpoint, we are able to find the rays in space used to triangulate a given point that are closest to a query descriptor. It is well known that two correspondences constrain the camera to lie on the surface of a torus. Adding the knowledge of direction of triangulation, we are able to approximate the position of the camera from two matches alone. We derive a geometric solver that can compute this position in under 1 microsecond. Using this solver, we propose a simple yet powerful outlier filter which scales quadratically in the number of matches. We validate the accuracy of our solver and demonstrate the usefulness of our method in real world settings.

avg

pdf suppmat Project Page Project Page [BibTex]

Semantic Multi-view Stereo: Jointly Estimating Objects and Voxels

Ulusoy, A. O., Black, M. J., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Dense 3D reconstruction from RGB images is a highly ill-posed problem due to occlusions, textureless or reflective surfaces, as well as other challenges. We propose object-level shape priors to address these ambiguities. Towards this goal, we formulate a probabilistic model that integrates multi-view image evidence with 3D shape information from multiple objects. Inference in this model yields a dense 3D reconstruction of the scene as well as the existence and precise 3D pose of the objects in it. Our approach is able to recover fine details not captured in the input shapes while defaulting to the input models in occluded regions where image evidence is weak. Due to its probabilistic nature, the approach is able to cope with the approximate geometry of the 3D models as well as input shapes that are not present in the scene. We evaluate the approach quantitatively on several challenging indoor and outdoor datasets.

avg ps

YouTube pdf suppmat Project Page [BibTex]

Dynamic analysis on hexapedal water-running robot with compliant joints

Kim, H., Liu, Y., Jeong, K., Sitti, M., Seo, T.

In 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pages: 250-251, June 2017 (inproceedings)

Abstract
Dynamic analysis has been considered one of the important methods for designing robots. In this research, we derive the dynamic equation of a hexapedal water-running robot to design compliant joints. The compliant joints that connect the three bodies will be used to improve the mobility and stability of the pitch behavior during water-running motion. We considered all parts as rigid bodies, including the links of the six Klann mechanisms and the three main frames. We then derived the dynamic equation using the Lagrangian method with the external force of the water. We expect the dynamic analysis to be used to design parts of the water-running robot.

pi

DOI [BibTex]

Design and actuation of a magnetic millirobot under a constant unidirectional magnetic field

Erin, O., Giltinan, J., Tsai, L., Sitti, M.

In Proceedings 2017 IEEE International Conference on Robotics and Automation (ICRA), pages: 3404-3410, IEEE, Piscataway, NJ, USA, IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings)

Abstract
Magnetic untethered millirobots, which are actuated and controlled by remote magnetic fields, have been proposed for medical applications due to their ability to safely pass through tissues at long ranges. For example, magnetic resonance imaging (MRI) systems with a 3-7 T constant unidirectional magnetic field and 3D gradient coils have been used to actuate magnetic robots. Such magnetically constrained systems place limits on the degrees of freedom that can be actuated for untethered devices. This paper presents a design and actuation methodology for a magnetic millirobot that exhibits both position and orientation control in 2D under a magnetic field, dominated by a constant unidirectional magnetic field as found in MRI systems. Placing a spherical permanent magnet, which is free to rotate inside the millirobot and located away from the center of mass, allows the generation of net forces and torques with applied 3D magnetic field gradients. We model this system in a 3D planar case and experimentally demonstrate open-loop control of both position and orientation by the applied 2D field gradients. The actuation performance is characterized across the most important design variables, and we experimentally demonstrate that the proposed approach is feasible.
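The actuation principle rests on the standard magnetic dipole model: field gradients produce a force on the embedded sphere, and because the sphere sits away from the center of mass, that force also produces a body torque. A small numpy sketch of this model (symbols and conventions are assumptions for illustration, not the paper's derivation):

    import numpy as np

    def dipole_wrench(m, B, gradB, r):
        """Force and torque from a magnetic dipole m (A·m²) located at offset r (m)
        from the body's center of mass, in field B (T) with spatial gradient
        gradB[i, j] = dB_i/dx_j (standard dipole model, sketch only)."""
        force = gradB.T @ m                 # F_i = sum_j m_j dB_j/dx_i
        torque_body = np.cross(r, force)    # moment of the gradient force about the center of mass
        torque_dipole = np.cross(m, B)      # alignment torque; a freely spinning sphere does not
                                            # transmit this component to the body
        return force, torque_body, torque_dipole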

pi

DOI [BibTex]

Magnetically actuated soft capsule endoscope for fine-needle aspiration biopsy

Son, D., Dogan, M. D., Sitti, M.

In Proceedings 2017 IEEE International Conference on Robotics and Automation (ICRA), pages: 1132-1139, IEEE, Piscataway, NJ, USA, IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings)

Abstract
This paper presents a magnetically actuated soft capsule endoscope for fine-needle aspiration biopsy (B-MASCE) in the upper gastrointestinal tract. A thin and hollow needle is attached to the capsule, which can penetrate deeply into tissues to obtain a subsurface biopsy sample. The design utilizes a soft elastomer body as a compliant mechanism to guide the needle. An internal permanent magnet provides a means for both actuation and tracking. The capsule is designed to roll towards its target and then deploy the biopsy needle in a precise location selected as the target area. B-MASCE is controlled by multiple custom-designed electromagnets while its position and orientation are tracked by a magnetic sensor array. In in vitro trials, B-MASCE demonstrated rolling locomotion and biopsy of a swine tissue model positioned inside an anatomical human stomach model. It was confirmed after the experiment that a tissue sample was retained inside the needle.

pi

DOI Project Page [BibTex]

The use of clamping grips and friction pads by tree frogs for climbing curved surfaces

Endlein, T., Ji, A., Yuan, S., Hill, I., Wang, H., Barnes, W. J. P., Dai, Z., Sitti, M.

In Proc. R. Soc. B, 284(1849):20162867, February 2017 (inproceedings)

Abstract
Most studies on the adhesive mechanisms of climbing animals have addressed attachment against flat surfaces, yet many animals can climb highly curved surfaces, like twigs and small branches. Here we investigated whether tree frogs use a clamping grip by recording the ground reaction forces on a cylindrical object with either a smooth or anti-adhesive, rough surface. Furthermore, we measured the contact area of fore and hindlimbs against differently sized transparent cylinders and the forces of individual pads and subarticular tubercles in restrained animals. Our study revealed that frogs use friction and normal forces of roughly a similar magnitude for holding on to cylindrical objects. When challenged with climbing a non-adhesive surface, the compressive forces between opposite legs nearly doubled, indicating a stronger clamping grip. In contrast to climbing flat surfaces, frogs increased the contact area on all limbs by engaging not just adhesive pads but also subarticular tubercles on curved surfaces. Our force measurements showed that tubercles can withstand larger shear stresses than pads. SEM images of tubercles revealed a similar structure to that of toe pads including the presence of nanopillars, though channels surrounding epithelial cells were less pronounced. The tubercles' smaller size, proximal location on the toes and shallow cells make them probably less prone to buckling and thus ideal for gripping curved surfaces.

pi

DOI [BibTex]

Planning spin-walking locomotion for automatic grasping of microobjects by an untethered magnetic microgripper

Dong, X., Sitti, M.

In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages: 6612-6618, 2017 (inproceedings)

Abstract
Most demonstrated mobile microrobot tasks so far have been achieved via pick-and-placing and dynamic trapping with teleoperation or simple path following algorithms. In our previous work, an untethered magnetic microgripper was developed which has advanced functions, such as gripping objects. Teleoperated manipulation in both 2D and 3D has been demonstrated. However, it is challenging to control the magnetic microgripper to carry out manipulation tasks, because the grasping of objects so far in the literature relies heavily on teleoperation, which takes several minutes even with a skilled human expert. Here, we propose a new spin-walking locomotion and an automated 2D grasping motion planner for the microgripper, which enables time-efficient automatic grasping of microobjects, something that has not yet been achieved for untethered microrobots. In its locomotion, the microgripper repeatedly rotates about two principal axes to regulate its pose and move precisely on a surface. The motion planner can plan different motion primitives for grasping and compensate for uncertainties in the motion by learning them and planning accordingly. We experimentally demonstrated that, using the proposed method, the microgripper could align to the target pose with an error of less than 0.1 body length and grip objects within 40 seconds. Our method can significantly improve the time efficiency of micro-scale manipulation and has potential applications in microassembly and biomedical engineering.

pi

DOI Project Page [BibTex]


2016


Steering control of a water-running robot using an active tail

Kim, H., Jeong, K., Sitti, M., Seo, T.

In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pages: 4945-4950, October 2016 (inproceedings)

Abstract
Many highly dynamic mobile robots have been developed that are inspired by animals. In this study, we take inspiration from a basilisk lizard's ability to run and steer on the water surface to develop a hexapedal robot. The robot has an active tail with a circular plate, which the robot rotates to steer on water. We dynamically modeled the platform and conducted simulations and experiments on steering locomotion with a bang-bang controller. The robot can steer on water by rotating the tail, and the controlled steering locomotion is stable. The dynamic model approximates the robot's steering locomotion, and the trends of the simulations and experiments are similar, although there are errors between the desired and actual angles. The robot's maneuverability on water can be improved through further research.
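The bang-bang steering law mentioned above is a one-liner in essence; a minimal sketch (the tail-command and deadband names are assumed for illustration):

    def bang_bang_steering(heading_error, u_max, deadband=0.0):
        """Bang-bang heading control: saturate the tail command to +/- u_max
        depending on the sign of the heading error, with an optional deadband."""
        if heading_error > deadband:
            return u_max
        if heading_error < -deadband:
            return -u_max
        return 0.0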

pi

DOI [BibTex]

Targeting of cell mockups using sperm-shaped microrobots in vitro

Khalil, I. S., Tabak, A. F., Hosney, A., Klingner, A., Shalaby, M., Abdel-Kader, R. M., Serry, M., Sitti, M.

In Biomedical Robotics and Biomechatronics (BioRob), 2016 6th IEEE International Conference on, pages: 495-501, July 2016 (inproceedings)

Abstract
Sperm-shaped microrobots are controlled under the influence of weak oscillating magnetic fields (milliTesla range) to selectively target cell mockups (i.e., gas bubbles with average diameter of 200 μm). The sperm-shaped microrobots are fabricated by electrospinning using a solution of polystyrene, dimethylformamide, and iron oxide nanoparticles. These nanoparticles are concentrated within the head of the microrobot, and hence enable directional control along external magnetic fields. The magnetic dipole moment of the microrobot is characterized (using the flip-time technique) to be 1.4×10⁻¹¹ A·m², at a magnetic field of 28 mT. In addition, the morphology of the microrobot is characterized using Scanning Electron Microscopy images. The characterized parameters and morphology are used in the simulation of the locomotion mechanism of the microrobot to prove that its motion depends on breaking the time-reversal symmetry, rather than pulling with the magnetic field gradient. We experimentally demonstrate that the microrobot can controllably follow S-shaped, U-shaped, and square paths, and selectively target the cell mockups using image guidance and under the influence of the oscillating magnetic fields.

pi

DOI [BibTex]

Analysis of the magnetic torque on a tilted permanent magnet for drug delivery in capsule robots

Munoz, F., Alici, G., Zhou, H., Li, W., Sitti, M.

In Advanced Intelligent Mechatronics (AIM), 2016 IEEE International Conference on, pages: 1386-1391, July 2016 (inproceedings)

Abstract
In this paper, we present the analysis of the torque transmitted to a tilted permanent magnet that is to be embedded in a capsule robot to achieve targeted drug delivery. This analysis is carried out by using an analytical model and experimental results for a small cubic permanent magnet that is driven by an external magnetic system made of an array of arc-shaped permanent magnets (ASMs). Our experimental results, which are in agreement with the analytical results, show that the cubic permanent magnet can safely be actuated for inclinations lower than 75° without having to make positional adjustments in the external magnetic system. We have found that with further inclinations, the cubic permanent magnet to be embedded in a drug delivery mechanism may stall. When it stalls, the external magnetic system's position and orientation would have to be adjusted to actuate the cubic permanent magnet and the drug release mechanism. This analysis of the transmitted torque is helpful for the development of real-time control strategies for magnetically articulated devices.

pi

DOI [BibTex]

Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.

avg ps

YouTube pdf poster suppmat Project Page [BibTex]

Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.

avg ps

pdf suppmat Project Page Project Page [BibTex]

Sperm-shaped magnetic microrobots: Fabrication using electrospinning, modeling, and characterization

Khalil, I. S., Tabak, A. F., Hosney, A., Mohamed, A., Klingner, A., Ghoneima, M., Sitti, M.

In Robotics and Automation (ICRA), 2016 IEEE International Conference on, pages: 1939-1944, May 2016 (inproceedings)

Abstract
We use electrospinning to fabricate sperm-shaped magnetic microrobots with a range of diameters from 50 μm to 500 μm. The variables of the electrospinning operation (voltage, concentration of the solution, dynamic viscosity, and distance between the syringe needle and collector) to achieve beading effect are determined. This beading effect allows us to fabricate microrobots with similar morphology to that of sperm cells. The bead and the ultra-fine fiber resemble the morphology of the head and tail of the sperm cell, respectively. We incorporate iron oxide nanoparticles into the head of the sperm-shaped microrobot to provide a magnetic dipole moment. This dipole enables directional control under the influence of external magnetic fields. We also apply weak (less than 2 mT) oscillating magnetic fields to exert a magnetic torque on the magnetic head, and generate planar flagellar waves and flagellated swimming. The average speed of the sperm-shaped microrobot is calculated to be 0.5 body lengths per second and 1 body length per second at frequencies of 5 Hz and 10 Hz, respectively. We also develop a model of the microrobot using an elastohydrodynamics approach and Timoshenko-Rayleigh beam theory, and find good agreement with the experimental results.

pi

DOI [BibTex]

Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted

avg ps

pdf suppmat Project Page [BibTex]


2014


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

International Conference on Intelligent Robots and Systems, pages: 716 - 723, IEEE, Chicago, IL, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.

avg ps

pdf DOI [BibTex]

Geckogripper: A soft, inflatable robotic gripper using gecko-inspired elastomer micro-fiber adhesives

Song, S., Majidi, C., Sitti, M.

In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages: 4624-4629, September 2014 (inproceedings)

Abstract
This paper proposes GeckoGripper, a novel soft, inflatable gripper based on the controllable adhesion mechanism of gecko-inspired micro-fiber adhesives, to pick-and-place complex and fragile non-planar or planar parts serially or in parallel. Unlike previous fibrillar structures that use peel angle to control the manipulation of parts, we developed an elastomer micro-fiber adhesive that is fabricated on a soft, flexible membrane, increasing the adaptability to non-planar three-dimensional (3D) geometries and controllability in adhesion. The adhesive switching ratio (the ratio between the maximum and minimum adhesive forces) of the developed gripper was measured to be around 204, which is superior to previous works based on peel angle-based release control methods. Adhesion control mechanism based on the stretch of the membrane and superior adaptability to non-planar 3D geometries enable the micro-fibers to pick-and-place various 3D parts as shown in demonstrations.

pi

DOI [BibTex]

Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 3840 - 3847 , Hong Kong, China, IEEE International Conference on Robotics and Automation, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demands robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for the simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allow real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.
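The physical model underlying the enhancement step is the usual single-scattering attenuation equation; a numpy sketch of image formation and its inversion given a per-pixel distance estimate (per-channel attenuation coefficients and veiling light are assumed inputs for illustration):

    import numpy as np

    def underwater_formation(J, distance, beta, A):
        """Observed image I from scene radiance J (H, W, 3), per-pixel distance (H, W),
        per-channel attenuation coefficients beta (3,) and veiling light A (3,)."""
        t = np.exp(-distance[..., None] * beta)     # transmission per pixel and channel
        return J * t + A * (1.0 - t)

    def underwater_enhance(I, distance, beta, A, eps=1e-6):
        """Invert the model to recover a color-corrected estimate of the scene radiance J."""
        t = np.exp(-distance[..., None] * beta)
        return (I - A * (1.0 - t)) / np.maximum(t, eps)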

avg ps

pdf DOI [BibTex]

Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 4443 - 4450, Hong Kong, China, IEEE International Conference on Robotics and Automation, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.

avg ps

pdf DOI [BibTex]

Three-dimensional robotic manipulation and transport of micro-scale objects by a magnetically driven capillary micro-gripper

Giltinan, J., Diller, E., Mayda, C., Sitti, M.

In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pages: 2077-2082, 2014 (inproceedings)

pi

Project Page [BibTex]

Robotic assembly of hydrogels for tissue engineering and regenerative medicine

Tasoglu, S., Diller, E., Guven, S., Sitti, M., Demirci, U.

In Journal of Tissue Engineering and Regenerative Medicine, 8, pages: 181-182, 2014 (inproceedings)

pi

Project Page [BibTex]

Versatile non-contact micro-manipulation method using rotational flows locally induced by magnetic microrobots

Ye, Z., Edington, C., Russell, A. J., Sitti, M.

In Advanced Intelligent Mechatronics (AIM), 2014 IEEE/ASME International Conference on, pages: 26-31, 2014 (inproceedings)

pi

Project Page [BibTex]

Structural optimization method towards synthesis of small scale flexure-based mobile grippers

Lum, G. Z., Diller, E., Sitti, M.

In Robotics and Automation (ICRA), 2014 IEEE International Conference on, pages: 2339-2344, 2014 (inproceedings)

pi

[BibTex]

Six-Degrees-of-Freedom Remote Actuation of Magnetic Microrobots.

Diller, E. D., Giltinan, J., Lum, G. Z., Ye, Z., Sitti, M.

In Robotics: Science and Systems, 2014 (inproceedings)

pi

[BibTex]


2013


Understanding High-Level Semantics by Modeling Traffic Patterns

Zhang, H., Geiger, A., Urtasun, R.

In International Conference on Computer Vision, pages: 3056-3063, Sydney, Australia, December 2013 (inproceedings)

Abstract
In this paper, we are interested in understanding the semantics of outdoor scenes in the context of autonomous driving. Towards this goal, we propose a generative model of 3D urban scenes which is able to reason not only about the geometry and objects present in the scene, but also about the high-level semantics in the form of traffic patterns. We found that a small number of patterns is sufficient to model the vast majority of traffic scenes and show how these patterns can be learned. As evidenced by our experiments, this high-level reasoning significantly improves the overall scene estimation as well as the vehicle-to-lane association when compared to state-of-the-art approaches. All data and code will be made available upon publication.

avg ps

pdf [BibTex]

Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

(CVPR13 Best Paper Runner-Up)

Brubaker, M. A., Geiger, A., Urtasun, R.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2013), pages: 3057-3064, IEEE, Portland, OR, June 2013 (inproceedings)

Abstract
In this paper we propose an affordable solution to self-localization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic model as well as an efficient approximate inference algorithm, which is able to utilize distributed computation to meet the real-time requirements of autonomous systems. Because of the probabilistic nature of the model we are able to cope with uncertainty due to noisy visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community-developed maps and visual odometry measurements, we are able to localize a vehicle up to 3 m after only a few seconds of driving on maps which contain more than 2,150 km of drivable roads.

avg ps

pdf supplementary project page [BibTex]

Angular Motion Control Using a Closed-Loop CPG for a Water-Running Robot

Thatte, N., Khoramshahi, M., Ijspeert, A., Sitti, M.

In Dynamic Walking 2013, (EPFL-CONF-199763), 2013 (inproceedings)

pi

[BibTex]

A hybrid topological and structural optimization method to design a 3-DOF planar motion compliant mechanism

Lum, G. Z., Teo, T. J., Yang, G., Yeo, S. H., Sitti, M.

In Advanced Intelligent Mechatronics (AIM), 2013 IEEE/ASME International Conference on, pages: 247-254, 2013 (inproceedings)

pi

[BibTex]

Light-induced microbubble poration of localized cells

Fan, Q., Hu, W., Ohta, A. T.

In Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE, pages: 4482-4485, 2013 (inproceedings)

pi

[BibTex]

SoftCubes: towards a soft modular matter

Yim, S., Sitti, M.

In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages: 530-536, 2013 (inproceedings)

pi

Project Page [BibTex]

Flapping wings via direct-driving by DC motors

Azhar, M., Campolo, D., Lau, G., Hines, L., Sitti, M.

In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages: 1397-1402, 2013 (inproceedings)

pi

[BibTex]

Three dimensional independent control of multiple magnetic microrobots

Diller, E., Giltinan, J., Jena, P., Sitti, M.

In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages: 2576-2581, 2013 (inproceedings)

pi

[BibTex]

A Perching Mechanism for Flying Robots Using a Fibre-Based Adhesive

Daler, L., Klaptocz, A., Briod, A., Sitti, M., Floreano, D.

In Robotics and Automation (ICRA), 2013 IEEE International Conference on, 2013 (inproceedings)

pi

[BibTex]

Bonding methods for modular micro-robotic assemblies

Diller, E., Zhang, N., Sitti, M.

In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages: 2588-2593, 2013 (inproceedings)

pi

[BibTex]


2012


Topological optimization for continuum compliant mechanisms via morphological evolution of traditional mechanisms

Lum, G. Z., Yeo, S. H., Yang, G. L., Teo, T. J., Sitti, M.

In 4th International Conference on Computational Methods, pages: 8, 2012 (inproceedings)

pi

[BibTex]

Flapping Wings with DC-Motors via Direct, Elastic Transmissions

Azhar, M., Campolo, D., Lau, G., Sitti, M.

In Proceedings of International Conference on Intelligent Unmanned Systems, 8, 2012 (inproceedings)

pi

[BibTex]

Investigation of bioinspired gecko fibers to improve adhesion of HeartLander surgical robot

Tortora, G., Glass, P., Wood, N., Aksak, B., Menciassi, A., Sitti, M., Riviere, C.

In Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, pages: 908-911, 2012 (inproceedings)

pi

[BibTex]
