A dual-arm robotic platform performing a tire change. The relevant objects in this scenario are detected and tracked using a head-mounted Kinect camera: the tire and the impact wrench (green), the stand of the tire (blue), and of course the hands and arms of the robot.
Cristina Garcia Cifuentes
For manipulation of objects it is crucial to know the locations of the manipulator and of the objects. These locations have to be inferred from sensory data. In this project we work with range sensors, which are widespread in robotics and provide dense depth images.
The objective is to continuously infer the 6-DoF poses of all objects involved, given the incoming depth images. This includes objects the robot is interacting with, as well as the links of its own manipulators. This problem poses a number of challenges that are difficult to address with standard Bayesian filtering methods:
- The measurement, i.e. the dense depth image, is high-dimensional. We therefore investigate how approximate inference can be performed efficiently, e.g. by factorizing the measurement model over pixels.
- The measurement process is very noisy. We are working on robustifying Kalman filtering methods against such noise.
- Occlusions of objects are pervasive in the context of manipulation. We developed a model of depth-image generation that explicitly takes occlusion into account, which proved to greatly improve robustness.
- The state is high-dimensional if many objects are involved, or if the robot has many joints. We have therefore worked on an extension of the particle filter that, for certain dynamical systems, scales better with the dimensionality of the state space.
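The occlusion-aware, pixel-factorized likelihood described above can be sketched as follows. This is a minimal illustration, not the project's exact model: each pixel is explained either by the tracked object (Gaussian noise around the predicted depth) or by an unknown occluder in front of it (roughly uniform over the sensor range), and factorizing over pixels turns the image likelihood into a sum of per-pixel log terms. All parameter values here are hypothetical.

```python
import numpy as np

def pixel_likelihood(observed, predicted, p_occluded=0.2,
                     sigma=0.01, z_min=0.3, z_max=5.0):
    # Visible case: Gaussian measurement noise around the predicted depth.
    visible = np.exp(-0.5 * ((observed - predicted) / sigma) ** 2) \
        / (np.sqrt(2.0 * np.pi) * sigma)
    # Occluded case: an occluder may intercept the ray anywhere in front of
    # the object, modeled here as uniform over the sensor's depth range.
    occluded = np.where(observed < predicted, 1.0 / (z_max - z_min), 0.0)
    # Mixture of the two explanations for each pixel.
    return (1.0 - p_occluded) * visible + p_occluded * occluded

def image_log_likelihood(observed_img, predicted_img, **kwargs):
    # Factorization over pixels: the image likelihood is a product of
    # per-pixel terms, i.e. a sum in log space.
    p = pixel_likelihood(observed_img, predicted_img, **kwargs)
    return float(np.sum(np.log(p + 1e-300)))
```

The benefit of the mixture is that a pixel hit by an occluder contributes a bounded penalty instead of driving the whole image likelihood to zero, so a partially occluded object can still be tracked.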
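One generic way to robustify a Kalman filter against noisy measurements is to down-weight outlier residuals, e.g. with Huber-style weights that inflate the noise covariance of suspicious measurement channels. The sketch below illustrates this idea only; it is not necessarily the robustification developed in the project, and it assumes a diagonal measurement covariance.

```python
import numpy as np

def robust_kalman_update(x, P, z, H, R, huber_k=1.345):
    """Kalman measurement update with Huber-style down-weighting of
    outliers. A generic robustification sketch; assumes diagonal R."""
    y = z - H @ x                        # innovation
    r = y / np.sqrt(np.diag(R))          # residual in units of noise std
    # Huber weights: 1 for inliers, shrinking as k/|r| for outliers.
    w = np.where(np.abs(r) <= huber_k, 1.0, huber_k / np.abs(r))
    R_eff = R / w                        # inflate noise of outlier channels
    S = H @ P @ H.T + R_eff
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

For inlier measurements the update reduces to the standard Kalman update; a gross outlier is not rejected outright but moves the estimate only slightly, since its effective noise grows with the residual.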
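The dimensionality problem for plain particle filters can be made concrete with a toy experiment (hypothetical numbers, not results from the project): particles are drawn from a broad prior and weighted by a narrow Gaussian likelihood, a stand-in for a precise depth sensor, and the effective sample size collapses as the state dimension grows. This degeneracy is what motivates extensions such as the Coordinate Particle Filter; the sketch below demonstrates only the problem, not that filter itself.

```python
import numpy as np

def effective_sample_size(log_w):
    # ESS = 1 / sum(w_i^2) for normalized weights, computed stably in log space.
    w = np.exp(log_w - np.max(log_w))
    w /= np.sum(w)
    return 1.0 / np.sum(w ** 2)

def ess_vs_dimension(dims=(1, 5, 20), n_particles=1000, sigma_obs=0.1, seed=0):
    """Weight degeneracy of a bootstrap particle filter step as the
    state dimension grows (toy linear-Gaussian setting)."""
    rng = np.random.default_rng(seed)
    ess = {}
    for d in dims:
        particles = rng.standard_normal((n_particles, d))  # prior N(0, I_d)
        # Log-likelihood of each particle under a narrow observation model.
        log_w = -0.5 * np.sum((particles / sigma_obs) ** 2, axis=1)
        ess[d] = effective_sample_size(log_w)
    return ess
```

In low dimensions many particles retain significant weight, while in high dimensions almost all weight concentrates on a single particle, so exponentially many particles would be needed to keep the approximation useful.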
Extending existing estimation methods to address these challenges is an interesting research topic in its own right. Furthermore, the algorithms developed provide a basis for research on robotic manipulation.
The Coordinate Particle Filter
Probabilistic Object Tracking using a Depth Camera