For human-machine interaction it is crucial to develop models of humans whose appearance and motion are indistinguishable from those of real humans. Such virtual humans will be key for application areas including computer vision, medicine and psychology, virtual and augmented reality, and special effects in movies.
Currently, digital models typically lack realistic soft tissue and clothing, or require time-consuming manual tuning of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images, and depth and inertial sensors. We combine statistical machine learning techniques and physics-based simulation to create realistic models from data. We then use such models to extract information from incomplete and noisy sensor data, such as monocular video, depth measurements, or IMUs.
I will give an overview of a selection of projects conducted in Perceiving Systems in which we build realistic models of human pose, shape, soft tissue, and clothing. I will also present some of our recent work on 3D reconstruction of human models from monocular video, real-time fusion and online human body shape estimation from depth data, and recovery of human pose in the wild from video and IMUs. I will conclude the talk by outlining the next challenges in building digital humans and perceiving them from sensory data.
Biography: Gerard Pons-Moll is a group leader at the Max Planck Institute for Informatics, heading the group "Real Virtual Humans". His research is at the intersection of computer vision, machine learning, and graphics. Gerard's group at MPI-I focuses on 3D vision and the capture and modelling of people in clothing. Before joining MPI-I, Gerard worked as a Research Scientist in the Perceiving Systems Department of the Max Planck Institute for Intelligent Systems, where he continues as an affiliated researcher.