

2014


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 716-723, IEEE, Chicago, IL, USA, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works, we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method, we introduce a dataset captured with our autonomous driving platform AnnieWAY, which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground-truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method reduces noise, and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.
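
A minimal sketch of the geometric fact the abstract builds on (illustrative, not code from the paper): a 3D plane n . X = d induces a depth d / (n . r) along every unit viewing ray r, so plane hypotheses can be scored directly in the omnidirectional domain even though 3D planes do not project to planes in the image. The toy spherical camera and all names below are assumptions.

```python
import numpy as np

def plane_depth_along_rays(rays, n, d):
    """Depth t such that t * r lies on the plane n . X = d,
    for unit viewing rays r of a camera at the origin.

    rays : (H, W, 3) unit viewing-ray directions
    n    : (3,) unit plane normal
    d    : plane offset (n . X = d for points X on the plane)
    """
    denom = rays @ n                      # cosine between each ray and the normal
    with np.errstate(divide="ignore"):
        t = d / denom                     # ray-plane intersection distance
    t[denom <= 1e-6] = np.inf             # rays (near-)parallel to the plane never hit it
    return t

# Toy usage: a ground plane 1.5 m below a camera that sees the full sphere.
H, W = 64, 128
theta = np.linspace(0.0, np.pi, H)        # polar angle
phi = np.linspace(-np.pi, np.pi, W)       # azimuth
T, P = np.meshgrid(theta, phi, indexing="ij")
rays = np.stack([np.sin(T) * np.cos(P),
                 np.sin(T) * np.sin(P),
                 np.cos(T)], axis=-1)     # unit rays on the sphere
depth = plane_depth_along_rays(rays, n=np.array([0.0, 0.0, -1.0]), d=1.5)
```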


pdf DOI [BibTex]



Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation (ICRA), pages 3840-3847, Hong Kong, China, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demand robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement, and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color-corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show that the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allow real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.
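
The light attenuation model the abstract refers to is, in its common per-channel form, I = J e^(-beta d) + A (1 - e^(-beta d)), where J is the scene radiance, d the distance, beta the attenuation coefficient, and A the veiling (backscatter) light. The sketch below inverts only this single step to color-correct an image given a depth map; the paper's full pipeline additionally iterates contrast enhancement, disparity computation, and visibility estimation. The function name and the assumption that beta and A are known are illustrative, not the authors' code.

```python
import numpy as np

def correct_underwater_color(I, depth, beta, A):
    """Invert the standard underwater attenuation model
        I = J * exp(-beta * d) + A * (1 - exp(-beta * d))
    to recover an estimate of the scene radiance J per color channel.

    I     : (H, W, 3) observed image in [0, 1]
    depth : (H, W) scene distance per pixel (e.g. from stereo)
    beta  : (3,) per-channel attenuation coefficients (red decays fastest in water)
    A     : (3,) veiling-light color
    """
    t = np.exp(-depth[..., None] * beta)           # per-channel transmission
    J = (I - A * (1.0 - t)) / np.maximum(t, 1e-3)  # clamp t to avoid blow-up at long range
    return np.clip(J, 0.0, 1.0)
```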


pdf DOI [BibTex]



Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation (ICRA), pages 4443-4450, Hong Kong, China, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.
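
A rough illustration of the central-approximation idea, under the abstract's observation that scene distance is large relative to the deviation from a single viewpoint. This is a simplified far-point construction, not the paper's exact remapping; the assumed scene distance D and all names are illustrative.

```python
import numpy as np

def central_ray_approximation(origins, dirs, c, D):
    """Approximate a calibrated non-central camera by a single-viewpoint model.

    Each pixel's true ray is x(t) = o + t * v. Assuming scene points lie at
    roughly distance D (large compared to the spread of the ray origins),
    the far point p = o + D * v is re-viewed from the single center c,
    giving a remapped central ray whose orientation closely matches the
    true ray's orientation.

    origins : (N, 3) per-pixel ray origins from a non-central calibration
    dirs    : (N, 3) per-pixel unit ray directions
    c       : (3,) chosen single viewpoint (e.g. the mean of the origins)
    D       : assumed scene distance
    """
    p = origins + D * dirs                              # far point on each true ray
    r = p - c
    return r / np.linalg.norm(r, axis=-1, keepdims=True)
```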


pdf DOI [BibTex]



3D nanofabrication on complex seed shapes using glancing angle deposition

Jeong, H.-H., Mark, A. G., Gibbs, J. G., Reindl, T., Waizmann, U., Weis, J., Fischer, P.

In 2014 IEEE 27th International Conference on Micro Electro Mechanical Systems (MEMS), pages 437-440, January 2014 (inproceedings)

Abstract
Three-dimensional (3D) fabrication techniques promise new device architectures and enable the integration of more components, but fabricating 3D nanostructures for device applications remains challenging. Recently, we have performed glancing angle deposition (GLAD) upon a nanoscale hexagonal seed array to create a variety of 3D nanoscale objects including multicomponent rods, helices, and zigzags [1]. Here, in an effort to generalize our technique, we present a step-by-step approach to grow 3D nanostructures on more complex nanoseed shapes and configurations than before. This approach allows us to create 3D nanostructures on nanoseeds regardless of their size and shape.
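
As a point of reference for the growth geometry (a standard GLAD heuristic from the thin-film literature, not a result of this paper): the empirical "tangent rule" relates the vapor incidence angle alpha, measured from the substrate normal, to the resulting column tilt beta via tan(beta) = tan(alpha) / 2.

```python
import math

def column_tilt_tangent_rule(alpha_deg):
    """Empirical 'tangent rule' for glancing angle deposition:
    vapor flux arriving at angle alpha from the substrate normal
    grows columns tilted at beta, with tan(beta) = tan(alpha) / 2.
    A rough heuristic, most reliable at moderate incidence angles.
    """
    return math.degrees(math.atan(math.tan(math.radians(alpha_deg)) / 2.0))

print(column_tilt_tangent_rule(85.0))  # glancing incidence -> ~80 degree column tilt
```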


DOI [BibTex]



Active Microrheology of the Vitreous of the Eye applied to Nanorobot Propulsion

Qiu, T., Schamel, D., Mark, A. G., Fischer, P.

In IEEE International Conference on Robotics and Automation (ICRA), pages 3801-3806, 2014, Best Automation Paper Award – Finalist (inproceedings)

Abstract
Biomedical applications of micro- or nanorobots require active movement through complex biological fluids. These are generally non-Newtonian (viscoelastic) fluids characterized by complicated networks of macromolecules with size-dependent rheological properties. It has been suggested that an untethered microrobot could assist in retinal surgical procedures. To do this it must navigate the vitreous humor, a hydrated double network of collagen fibrils and high-molecular-weight, polyanionic hyaluronan macromolecules. Here, we examine the characteristic size that potential robots must have to traverse vitreous relatively unhindered. We have constructed magnetic tweezers that provide a large gradient of up to 320 T/m to pull sub-micron paramagnetic beads through biological fluids. A novel two-step electrical discharge machining (EDM) approach is used to construct the tips of the magnetic tweezers with a resolution of 30 µm and a high aspect ratio of ~17:1, which restricts the magnetic field gradient to the plane of observation. We report measurements on porcine vitreous. In agreement with structural data and passive Brownian diffusion studies, we find that unhindered active propulsion through the eye calls for nanorobots with cross-sections of less than 500 nm.
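
For scale, an order-of-magnitude sketch of the force such tweezers can exert and the resulting speed in a Newtonian fluid. Only the 320 T/m gradient is taken from the abstract; the bead size, susceptibility, field strength, and viscosity are illustrative assumptions, and vitreous drag is far higher and size-dependent, which is precisely what the measurement probes.

```python
import math

# Back-of-the-envelope for the pulling experiment described above.
# Only the 320 T/m gradient is from the abstract; all other values are
# illustrative assumptions, not measurements from the paper.
mu0   = 4e-7 * math.pi   # vacuum permeability [T m / A]
r     = 250e-9           # assumed bead radius [m] (sub-micron, as in the abstract)
chi   = 1.0              # assumed effective susceptibility of a paramagnetic bead
B     = 0.5              # assumed field at the bead [T]
gradB = 320.0            # field gradient from the abstract [T/m]
eta   = 1e-3             # viscosity of water [Pa s]; vitreous is far more resistive

V = 4.0 / 3.0 * math.pi * r**3      # bead volume
F = chi * V * B * gradB / mu0       # magnetic body force: F = (chi * V / mu0) * B * dB/dx
v = F / (6.0 * math.pi * eta * r)   # Stokes-drag speed in a Newtonian fluid

print(f"force ~ {F * 1e12:.1f} pN, speed in water ~ {v * 1e6:.0f} um/s")
```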



[BibTex]
