Jim Mainprice has been a postdoctoral fellow in the Autonomous Motion Department at the Max Planck Institute for Intelligent Systems (Tübingen, Germany) since January 2015. He received his M.S. from Polytech' Montpellier, France, and his Ph.D. in robotics and computer science from the University of Toulouse, France, in 2009 and 2012, respectively. His research interests include motion planning, task planning, machine learning, human-robot collaboration, and human-robot interaction. While completing his Ph.D. at LAAS-CNRS, he took part in the European Community's 7th Framework Programme projects Dexmart (DEXterous and autonomous dual-arm/hand robotic manipulation with sMART sensory-motor skills: a bridge from natural to artificial cognition) and Saphari (Safe and Autonomous Physical Human-Aware Robot Interaction). He was a postdoctoral researcher at the Autonomous Robotic Collaboration Lab at the Worcester Polytechnic Institute (WPI) from Jan. 2013 to Dec. 2014, where he participated in the DARPA Robotics Challenge as a member of the DRC-Hubo team.
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty arises from noisy sensing, inaccurate models, and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system in which real-time object and robot tracking, together with ambient world modeling, provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, these perception modules provide attractive and repulsive potentials from which the controllers and motion optimizers compute movement policies online at different time scales. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. Across 333 experiments, we demonstrate the robustness and accuracy of the proposed system.
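The attractive and repulsive potentials mentioned above follow the classic potential-field idea: the negative gradient of a combined field yields a velocity command that pulls the end effector toward a goal while pushing it away from nearby obstacles. The sketch below is an illustrative, hypothetical implementation of that general technique, not the system described in the abstract; the gains `k_att`, `k_rep`, and the influence distance `d0` are assumptions chosen for demonstration.

```python
import numpy as np

def attractive_velocity(x, goal, k_att=1.0):
    """Negative gradient of the quadratic attractive potential 0.5*k_att*|x - goal|^2."""
    return -k_att * (x - goal)

def repulsive_velocity(x, obstacle, k_rep=0.5, d0=1.0):
    """Repulsive term, active only within the influence distance d0 of the obstacle.

    Negative gradient of 0.5*k_rep*(1/d - 1/d0)^2, which points away from the obstacle
    and grows as the distance d shrinks.
    """
    diff = x - obstacle
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(x)
    return k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)

def velocity_command(x, goal, obstacles):
    """Sum the attractive pull and all repulsive pushes into one velocity command."""
    v = attractive_velocity(x, goal)
    for obs in obstacles:
        v += repulsive_velocity(x, obs)
    return v
```

Because each term is cheap to evaluate, such a command can be recomputed at every control cycle from the latest tracked object poses, which is what makes the reactive, feedback-driven behavior described in the abstract possible.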
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.