2014


Haptic Robotization of Human Body via Data-Driven Vibrotactile Feedback

Kurihara, Y., Takei, S., Nakai, Y., Hachisu, T., Kuchenbecker, K. J., Kajimoto, H.

Entertainment Computing, 5(4):485-494, December 2014 (article)

[BibTex]


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 716-723, IEEE, Chicago, IL, USA, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we captured using our autonomous driving platform AnnieWAY, which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.
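
A rough illustration of the voting idea in the abstract: in an augmented Manhattan world, candidate walls are vertical planes at arbitrary azimuth, so plane hypotheses can be accumulated Hough-style over orientation and offset. The sketch below is a hypothetical Python rendering of that idea, not the paper's implementation; all names, resolutions, and the vertical-planes-only restriction are assumptions.

```python
import numpy as np

def vote_vertical_planes(points, n_theta=180, d_res=0.1, d_max=50.0):
    """Hough-style voting for vertical planes n(theta) . p = d.

    points: (N, 3) array of 3D points (x, y, z), z pointing up.
    Returns the accumulator plus the orientation and offset bin centers.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # The normal of a vertical plane lies in the horizontal (x, y) plane.
    normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # (T, 2)
    d = points[:, :2] @ normals.T                                  # (N, T)
    d_bins = np.round(d / d_res).astype(int)
    offset = int(d_max / d_res)
    acc = np.zeros((n_theta, 2 * offset + 1), dtype=int)
    valid = np.abs(d_bins) <= offset
    for t in range(n_theta):
        np.add.at(acc[t], d_bins[valid[:, t], t] + offset, 1)
    return acc, thetas, (np.arange(acc.shape[1]) - offset) * d_res

# Local maxima of `acc` yield plane hypotheses that could seed the
# slanted-plane MRF described in the abstract.
```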

pdf DOI [BibTex]


Automatic Skill Evaluation for a Needle Passing Task in Robotic Surgery

Leung, S., Kuchenbecker, K. J.

In Proc. IROS Workshop on the Role of Human Sensorimotor Control in Robotic Surgery, Chicago, Illinois, September 2014, Poster presentation given by Kuchenbecker. Best Poster Award (inproceedings)

[BibTex]


Modeling and Rendering Realistic Textures from Unconstrained Tool-Surface Interactions

Culbertson, H., Unwin, J., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 7(3):381-392, July 2014 (article)

[BibTex]


Optimizing Average Precision using Weakly Supervised Data

Behl, A., Jawahar, C. V., Kumar, M. P.

IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2014 (conference)

[BibTex]


A Data-driven Approach to Remote Tactile Interaction: From a BioTac Sensor to Any Fingertip Cutaneous Device

Pacchierotti, C., Prattichizzo, D., Kuchenbecker, K. J.

In Haptics: Neuroscience, Devices, Modeling, and Applications, Proc. EuroHaptics, Part I, 8618, pages: 418-424, Lecture Notes in Computer Science, Springer-Verlag, Berlin Heidelberg, June 2014, Poster presentation given by Pacchierotti in Versailles, France (inproceedings)

[BibTex]


Evaluating the BioTac’s Ability to Detect and Characterize Lumps in Simulated Tissue

Hui, J. C. T., Kuchenbecker, K. J.

In Haptics: Neuroscience, Devices, Modeling, and Applications, Proc. EuroHaptics, Part II, 8619, pages: 295-302, Lecture Notes in Computer Science, Springer-Verlag, Berlin Heidelberg, June 2014, Poster presentation given by Hui in Versailles, France (inproceedings)

[BibTex]


Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 3840-3847, Hong Kong, China, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demand robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allows real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.
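
The physical light attenuation model mentioned above is commonly written per color channel as I_c = J_c exp(-eta_c d) + B_c (1 - exp(-eta_c d)), with scene radiance J, distance d, attenuation coefficient eta, and veiling light B. A minimal sketch of inverting such a model, assuming the coefficients are already known (the paper estimates them from a sparse 3D map; all names here are hypothetical):

```python
import numpy as np

def restore_underwater(image, depth, eta, backscatter):
    """Invert a simple underwater light-attenuation model per channel:
        I_c = J_c * exp(-eta_c * d) + B_c * (1 - exp(-eta_c * d))

    image:       (H, W, 3) observed colors in [0, 1]
    depth:       (H, W) scene distance in meters (e.g., from stereo)
    eta:         (3,) per-channel attenuation coefficients [1/m]
    backscatter: (3,) per-channel veiling light B
    """
    t = np.exp(-depth[..., None] * np.asarray(eta))   # transmission map
    J = (image - backscatter * (1.0 - t)) / np.clip(t, 1e-3, 1.0)
    return np.clip(J, 0.0, 1.0)
```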

pdf DOI [BibTex]


Teaching Forward and Inverse Kinematics of Robotic Manipulators Via MATLAB

Wong, D., Dames, P., Kuchenbecker, K. J.

June 2014, Presented at the ICRA Workshop on MATLAB/Simulink for Robotics Education and Research. Oral presentation given by Dames and Wong (misc)

[BibTex]


Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 4443-4450, Hong Kong, China, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.
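
One plausible reading of the remapping step, as a Python sketch: fix a typical scene distance, intersect each calibrated non-central ray at that distance, and re-express the resulting point as a viewing direction from one common center. The distance value and the choice of center below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def central_ray_remap(origins, dirs, d_typ=20.0):
    """Remap non-central viewing rays to a single-viewpoint model.

    origins: (N, 3) per-pixel ray origins from a non-central calibration
    dirs:    (N, 3) unit ray directions
    d_typ:   assumed typical scene distance [m]; because scene depth is
             large relative to the viewpoint deviations, the direction
             error of this approximation stays small.
    """
    center = origins.mean(axis=0)        # single effective viewpoint
    p = origins + d_typ * dirs           # points the true rays pass through
    v = p - center
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return center, v
```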

pdf DOI [BibTex]


3D Traffic Scene Understanding from Movable Platforms

Geiger, A., Lauer, M., Wojek, C., Stiller, C., Urtasun, R.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)

Abstract
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow and occupancy grids. For each of these cues we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.
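
Contrastive divergence itself is generic; purely as an illustration of the learning step named in the abstract, a CD-1 update for an energy-based model p(x) proportional to exp(-E_theta(x)) can be sketched as follows. This is not the paper's specific model; the callables are hypothetical hooks.

```python
import numpy as np

def cd1_step(theta, data, energy_grad, mcmc_step, lr=1e-3):
    """One contrastive-divergence (CD-1) parameter update.

    theta:       current parameter vector
    data:        batch of observed configurations
    energy_grad: callable(x, theta) -> dE/dtheta for one sample
    mcmc_step:   callable(x, theta) -> x', one MCMC transition that
                 leaves p(x) invariant (e.g., Metropolis-Hastings)
    """
    # Positive phase: energy gradient evaluated at the data.
    pos = np.mean([energy_grad(x, theta) for x in data], axis=0)
    # Negative phase: one MCMC step away from the data.
    samples = [mcmc_step(x, theta) for x in data]
    neg = np.mean([energy_grad(x, theta) for x in samples], axis=0)
    # Lower the energy at the data, raise it at the model samples.
    return theta - lr * (pos - neg)
```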

pdf link (url) [BibTex]


Analyzing Human High-Fives to Create an Effective High-Fiving Robot

Fitter, N. T., Kuchenbecker, K. J.

In Proc. ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 156-157, Bielefeld, Germany, March 2014, Poster presentation given by Fitter (inproceedings)

[BibTex]


Dynamic Modeling and Control of Voice-Coil Actuators for High-Fidelity Display of Haptic Vibrations

McMahan, W., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 115-122, Houston, Texas, USA, February 2014, Oral presentation given by Kuchenbecker (inproceedings)

[BibTex]


A Wearable Device for Controlling a Robot Gripper With Fingertip Contact, Pressure, Vibrotactile, and Grip Force Feedback

Pierce, R. M., Fedalei, E. A., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 19-25, Houston, Texas, USA, February 2014, Oral presentation given by Pierce (inproceedings)

[BibTex]


Methods for Robotic Tool-Mediated Haptic Surface Recognition

Romano, J. M., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 49-56, Houston, Texas, USA, February 2014, Oral presentation given by Kuchenbecker. Finalist for Best Paper Award (inproceedings)

[BibTex]


Control of a Virtual Robot with Fingertip Contact, Pressure, Vibrotactile, and Grip Force Feedback

Pierce, R. M., Fedalei, E. A., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Houston, Texas, USA, February 2014 (misc)

[BibTex]


One Hundred Data-Driven Haptic Texture Models and Open-Source Methods for Rendering on 3D Objects

Culbertson, H., Delgado, J. J. L., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 319-325, Houston, Texas, USA, February 2014, Poster presentation given by Culbertson. Finalist for Best Poster Award (inproceedings)

[BibTex]


A Modular Tactile Motion Guidance System

Kuchenbecker, K. J., Anon, A. M., Barkin, T., deVillafranca, K., Lo, M.

Hands-on demonstration presented at IEEE Haptics Symposium, Houston, Texas, USA, February 2014 (misc)

[BibTex]


The Penn Haptic Texture Toolkit

Culbertson, H., Delgado, J. J. L., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Houston, Texas, USA, February 2014 (misc)

[BibTex]


Rough Terrain Mapping and Navigation using a Continuously Rotating 2D Laser Scanner

Schadler, M., Stueckler, J., Behnke, S.

Künstliche Intelligenz (KI), 28(2):93-99, Springer, 2014 (article)

link (url) DOI [BibTex]


Adaptive Tool-Use Strategies for Anthropomorphic Service Robots

Stueckler, J., Behnke, S.

In Proc. of the 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2014 (inproceedings)

link (url) [BibTex]


Learning to Rank using High-Order Information

Dokania, P. K., Behl, A., Jawahar, C. V., Kumar, M. P.

International Conference on Computer Vision, 2014 (conference)

[BibTex]


Dense Real-Time Mapping of Object-Class Semantics from RGB-D Video

Stueckler, J., Waldvogel, B., Schulz, H., Behnke, S.

Journal of Real-Time Image Processing (JRTIP), 10(4):599-609, Springer, 2014 (article)

link (url) DOI [BibTex]


Local Multi-Resolution Surfel Grids for MAV Motion Estimation and 3D Mapping

Droeschel, D., Stueckler, J., Behnke, S.

In Proc. of the 13th International Conference on Intelligent Autonomous Systems (IAS), 2014 (inproceedings)

link (url) [BibTex]


Multi-Resolution Surfel Maps for Efficient Dense 3D Modeling and Tracking

Stueckler, J., Behnke, S.

Journal of Visual Communication and Image Representation (JVCI), 25(1):137-147, 2014 (article)

link (url) DOI [BibTex]


Active Recognition and Manipulation for Mobile Robot Bin Picking

Holz, D., Nieuwenhuisen, M., Droeschel, D., Stueckler, J., Berner, A., Li, J., Klein, R., Behnke, S.

In Gearing Up and Accelerating Cross-fertilization between Academic and Industrial Robotics Research in Europe: Technology Transfer Experiments from the ECHORD Project, pages: 133-153, Springer, 2014 (inbook)

link (url) DOI [BibTex]


Combining the Strengths of Sparse Interest Point and Dense Image Registration for RGB-D Odometry

Stueckler, J., Gutt, A., Behnke, S.

In Proc. of the Joint 45th International Symposium on Robotics (ISR) and 8th German Conference on Robotics (ROBOTIK), 2014 (inproceedings)

link (url) [BibTex]


Cutaneous Feedback of Planar Fingertip Deformation and Vibration on a da Vinci Surgical Robot

Pacchierotti, C., Shirsat, P., Koehn, J. K., Prattichizzo, D., Kuchenbecker, K. J.

In Proc. IROS Workshop on the Role of Human Sensorimotor Control in Robotic Surgery, Chicago, Illinois, 2014, Poster presentation given by Koehn (inproceedings)

[BibTex]


Increasing Flexibility of Mobile Manipulation and Intuitive Human-Robot Interaction in RoboCup@Home

Stueckler, J., Droeschel, D., Gräve, K., Holz, D., Schreiber, M., Topaldou-Kyniazopoulou, A., Schwarz, M., Behnke, S.

In RoboCup 2013, Robot Soccer World Cup XVII, pages: 135-146, Springer, 2014 (inbook)

link (url) DOI [BibTex]


Efficient Dense Registration, Segmentation, and Modeling Methods for RGB-D Environment Perception

Stueckler, J.

Faculty of Mathematics and Natural Sciences, University of Bonn, Germany, 2014 (phdthesis)

link (url) [BibTex]


Mobile Teleoperation Interfaces with Adjustable Autonomy for Personal Service Robots

Schwarz, M., Stueckler, J., Behnke, S.

In Proceedings of the 2014 ACM/IEEE International Conference on Human-robot Interaction, pages: 288-289, HRI ’14, ACM, 2014 (inproceedings)

link (url) DOI [BibTex]


Efficient deformable registration of multi-resolution surfel maps for object manipulation skill transfer

Stueckler, J., Behnke, S.

In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), pages: 994-1001, May 2014 (inproceedings)

link (url) DOI [BibTex]


Local multi-resolution representation for 6D motion estimation and mapping with a continuously rotating 3D laser scanner

Droeschel, D., Stueckler, J., Behnke, S.

In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pages: 5221-5226, May 2014 (inproceedings)

link (url) DOI [BibTex]

2013


A Practical System For Recording Instrument Interactions During Live Robotic Surgery

McMahan, W., Gomez, E. D., Chen, L., Bark, K., Nappo, J. C., Koch, E. I., Lee, D. I., Dumon, K., Williams, N., Kuchenbecker, K. J.

Journal of Robotic Surgery, 7(4):351-358, 2013 (article)

[BibTex]


Understanding High-Level Semantics by Modeling Traffic Patterns

Zhang, H., Geiger, A., Urtasun, R.

In International Conference on Computer Vision, pages: 3056-3063, Sydney, Australia, December 2013 (inproceedings)

Abstract
In this paper, we are interested in understanding the semantics of outdoor scenes in the context of autonomous driving. Towards this goal, we propose a generative model of 3D urban scenes which is able to reason not only about the geometry and objects present in the scene, but also about the high-level semantics in the form of traffic patterns. We found that a small number of patterns is sufficient to model the vast majority of traffic scenes and show how these patterns can be learned. As evidenced by our experiments, this high-level reasoning significantly improves the overall scene estimation as well as the vehicle-to-lane association when compared to state-of-the-art approaches. All data and code will be made available upon publication.
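
As a toy illustration of inference over a small set of learned patterns (the scoring function and constants below are invented for this sketch, not taken from the paper): score each candidate pattern by how well every observed vehicle tracklet fits its nearest lane in position and heading, then keep the best-scoring pattern.

```python
import numpy as np

def best_pattern(tracklets, patterns):
    """Pick the traffic pattern that best explains observed tracklets.

    tracklets: list of (pos, vel) pairs; 2D position and velocity
    patterns:  list of patterns; each is a list of lanes (center, dir)
               with unit direction vectors
    """
    def score(pattern):
        s = 0.0
        for pos, vel in tracklets:
            v = vel / (np.linalg.norm(vel) + 1e-9)
            # Best-fitting lane: close in position, aligned in heading.
            s += max(np.dot(v, d) - np.linalg.norm(pos - c) ** 2 / 8.0
                     for c, d in pattern)
        return s
    return int(np.argmax([score(p) for p in patterns]))
```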

pdf [BibTex]


Virtual Robotization of the Human Body via Data-Driven Vibrotactile Feedback

Kurihara, Y., Hachisu, T., Kuchenbecker, K. J., Kajimoto, H.

In Proc. International Conference on Advances in Computer Entertainment Technology (ACE), 8253, pages: 109-122, Lecture Notes in Computer Science, Springer, Enschede, Netherlands, 2013, Oral presentation given by Kurihara. Best Paper Silver Award (inproceedings)

[BibTex]


Jointonation: Robotization of the Human Body by Vibrotactile Feedback

Kurihara, Y., Hachisu, T., Kuchenbecker, K. J., Kajimoto, H.

Emerging Technologies Demonstration with Talk at ACM SIGGRAPH Asia, Hong Kong, November 2013, Hands-on demonstration given by Kurihara, Takei, and Nakai. Best Demonstration Award as voted by the Program Committee (misc)

[BibTex]


Vision meets Robotics: The KITTI Dataset

Geiger, A., Lenz, P., Stiller, C., Urtasun, R.

International Journal of Robotics Research, 32(11):1231-1237, Sage Publishing, September 2013 (article)

Abstract
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways through rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
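
A concrete usage note for the dataset: the raw Velodyne scans are distributed as flat binary float32 files holding four values (x, y, z, reflectance) per point, so a minimal Python loader is short; the projection comment follows the KITTI devkit's matrix naming.

```python
import numpy as np

def load_velodyne(path):
    """Load one KITTI Velodyne scan: float32 quadruples
    (x, y, z, reflectance) in the laser scanner's frame."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# To project points into the left color camera, the devkit chains the
# provided calibration matrices:
#   y = P_rect @ R_rect @ Tr_velo_to_cam @ [x, y, z, 1]^T
```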

pdf DOI [BibTex]


Vibrotactile Display: Perception, Technology, and Applications

Choi, S., Kuchenbecker, K. J.

Proceedings of the IEEE, 101(9):2093-2104, September 2013 (article)

[BibTex]


Virtual Robotization of the Human Body Using Vibration Recording, Modeling and Rendering

Kurihara, Y., Hachisu, T., Kuchenbecker, K. J., Kajimoto, H.

In Proc. Virtual Reality Society of Japan Annual Conference, Osaka, Japan, September 2013, Paper written in Japanese. Presentation given by Kurihara (inproceedings)

[BibTex]


Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization

(CVPR13 Best Paper Runner-Up)

Brubaker, M. A., Geiger, A., Urtasun, R.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2013), pages: 3057-3064, IEEE, Portland, OR, June 2013 (inproceedings)

Abstract
In this paper we propose an affordable solution to self-localization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic model as well as an efficient approximate inference algorithm, which is able to utilize distributed computation to meet the real-time requirements of autonomous systems. Because of the probabilistic nature of the model we are able to cope with uncertainty due to noisy visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community-developed maps and visual odometry measurements, we are able to localize a vehicle up to 3m after only a few seconds of driving on maps which contain more than 2,150km of drivable roads.
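
The paper derives its own efficient approximate inference, but the underlying recursion is a standard Bayes filter over vehicle pose given odometry and the map. A generic particle-filter sketch of that recursion (the noise magnitudes and the map-likelihood hook are assumptions, not the paper's algorithm):

```python
import numpy as np

def filter_step(particles, weights, odom, map_likelihood):
    """One predict-update step of a particle filter for map localization.

    particles:      (N, 3) pose hypotheses (x, y, heading) on the road map
    weights:        (N,) normalized posterior weights
    odom:           (dx, dy, dtheta) visual-odometry increment (ego frame)
    map_likelihood: callable(pose) -> how consistent the motion is with
                    the local road geometry at that pose
    """
    dx, dy, dtheta = odom
    n = len(particles)
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    # Predict: propagate every hypothesis by the odometry plus noise.
    particles[:, 0] += c * dx - s * dy + np.random.normal(0, 0.5, n)
    particles[:, 1] += s * dx + c * dy + np.random.normal(0, 0.5, n)
    particles[:, 2] += dtheta + np.random.normal(0, 0.01, n)
    # Update: reweight by agreement with the map and renormalize.
    weights = weights * np.array([map_likelihood(p) for p in particles])
    return particles, weights / weights.sum()
```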

pdf supplementary project page [BibTex]


Virtual Alteration of Body Material by Reality-Based Periodic Vibrotactile Feedback

Kurihara, Y., Hachisu, T., Sato, M., Fukushima, S., Kuchenbecker, K. J., Kajimoto, H.

In Proc. JSME Robotics and Mechatronics Conference (ROBOMEC), Tsukuba, Japan, May 2013, Paper written in Japanese. Poster presentation given by Kurihara (inproceedings)

[BibTex]


The Design and Field Observation of a Haptic Notification System for Oral Presentations

Tam, D., MacLean, K. E., McGrenere, J., Kuchenbecker, K. J.

In Proc. SIGCHI Conference on Human Factors in Computing Systems, pages: 1689-1698, Paris, France, May 2013, Oral presentation given by Tam (inproceedings)

[BibTex]


Using Robotic Exploratory Procedures to Learn the Meaning of Haptic Adjectives

Chu, V., McMahon, I., Riano, L., McDonald, C. G., He, Q., Perez-Tejada, J. M., Arrigo, M., Fitter, N., Nappo, J., Darrell, T., Kuchenbecker, K. J.

In Proc. IEEE International Conference on Robotics and Automation, pages: 3048-3055, Karlsruhe, Germany, May 2013, Oral presentation given by Chu. Best Cognitive Robotics Paper Award (inproceedings)

[BibTex]


Instrument contact vibrations are a construct-valid measure of technical skill in Fundamentals of Laparoscopic Surgery Training Tasks

Gomez, E. D., Aggarwal, R., McMahan, W., Koch, E., Hashimoto, D. A., Darzi, A., Murayama, K. M., Dumon, K. R., Williams, N. N., Kuchenbecker, K. J.

In Proc. Annual Meeting of the Association for Surgical Education, Orlando, Florida, USA, 2013, Oral presentation given by Gomez (inproceedings)

[BibTex]


Dynamic Simulation of Tool-Mediated Texture Interaction

McDonald, C. G., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, pages: 307-312, Daejeon, South Korea, April 2013, Oral presentation given by McDonald (inproceedings)

[BibTex]


ROS Open-source Audio Recognizer: ROAR Environmental Sound Detection Tools for Robot Programming

Romano, J. M., Brindza, J. P., Kuchenbecker, K. J.

Autonomous Robots, 34(3):207-215, April 2013 (article)

[BibTex]


Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms

Geiger, A.

Karlsruhe Institute of Technology, April 2013 (phdthesis)

Abstract
Visual 3D scene understanding is an important component in autonomous driving and robot navigation. Intelligent vehicles for example often base their decisions on observations obtained from video cameras as they are cheap and easy to employ. Inner-city intersections represent an interesting but also very challenging scenario in this context: The road layout may be very complex and observations are often noisy or even missing due to heavy occlusions. While highway navigation and autonomous driving on simple and annotated intersections have already been demonstrated successfully, understanding and navigating general inner-city crossings with little prior knowledge remains an unsolved problem. This thesis is a contribution to understanding multi-object traffic scenes from video sequences. All data is provided by a camera system which is mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout as well as the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry as well as traffic activities are inferred from short video sequences. The model takes advantage of monocular information in the form of vehicle tracklets, vanishing lines and semantic labels. Additionally, the benefit of stereo features such as 3D scene flow and occupancy grids is investigated. Motivated by the impressive driving capabilities of humans, no further information such as GPS, lidar, radar or map knowledge is required. Experiments conducted on 113 representative intersection sequences show that the developed approach successfully infers the correct layout in a variety of difficult scenarios. To evaluate the importance of each feature cue, experiments with different feature combinations are conducted. Additionally, the proposed method is shown to improve object detection and object orientation estimation performance.

pdf [BibTex]


Generating Haptic Texture Models From Unconstrained Tool-Surface Interactions

Culbertson, H., Unwin, J., Goodman, B. E., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, pages: 295-300, Daejeon, South Korea, April 2013, Oral presentation given by Culbertson. Finalist for Best Paper Award (inproceedings)

[BibTex]


Data-Driven Modeling and Rendering of Isotropic Textures

Culbertson, H., McDonald, C. G., Goodman, B. E., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE World Haptics Conference, Daejeon, South Korea, April 2013, Best Demonstration Award (by audience vote) (misc)

[BibTex]