The Autonomous Motion Department focuses on research into intelligent systems that can move, perceive, and learn from experience.
We are interested in understanding how autonomous movement systems can bootstrap themselves into competent behavior, starting from a relatively simple set of algorithms and pre-structuring and then learning by interacting with the environment. Using instructions from a teacher to get started can add useful prior information. Trial-and-error learning to improve movement and perceptual skills is another domain of our research. We investigate such perception-action-learning loops in both biological and robotic systems, which range in scale from nano systems (cells, nano-robots) to macro systems (humans and humanoid robots).
The problems studied in the department can be subsumed under the heading of empirical inference. This term refers to inference performed on the basis of empirical data.
The type of inference can vary, including, for instance, inductive learning (estimating models, such as functional dependencies, that generalize to novel data sampled from the same underlying distribution) or the inference of causal structures from statistical data (leading to models that provide insight into the underlying mechanisms and make predictions about the effects of interventions). Likewise, the type of empirical data can vary, ranging from sparse experimental measurements (e.g., microarray data) to visual patterns. Our department conducts theoretical, algorithmic, and experimental studies to understand the problem of empirical inference.
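As a toy illustration of inductive learning in the sense above (not any specific method used by the department), the sketch below estimates a functional dependency from a handful of samples and then generalizes to an input it never saw; the data and the underlying function are invented for this example.

```python
# Toy illustration of inductive inference: estimate a functional
# dependency from empirical samples, then generalize to novel data.
# The samples are synthetic, generated from y = 3x + 1.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b via the normal equations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]   # samples of y = 3x + 1
a, b = fit_line(xs, ys)

novel_x = 10.0               # an input not seen during fitting
print(a * novel_x + b)       # → 31.0
```

Causal inference goes a step further than such model fitting: it asks which mechanism generated the data, so that the model also predicts what happens under interventions.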
We study touch-based interaction, invent new haptic interfaces, and play with robots.
Have you noticed that computers can show beautiful images and play clear sounds, but they don't let you physically touch digital items? Similarly, most robots are surprisingly unskilled at physically interacting with the real world and with people.
Led by Katherine J. Kuchenbecker, the MPI-IS Haptic Intelligence department aims to elevate and formalize our understanding of touch cues while simultaneously discovering new opportunities for their use in interactions between humans, computers, and machines.
We leverage scientific knowledge about the sense of touch to create haptic interfaces that enable a user to interact with virtual objects and distant environments as though they were real and within reach. One key insight in this endeavor has been that tactile cues, such as high-frequency tool vibrations and the making and breaking of contact, convey rich mechanical information that is necessary to make the interaction feel real. This research led us to realize that autonomous robots can also benefit from attending to the dynamic tactile cues that occur as they manipulate objects in their environment and engage in social physical interaction with humans.
We seek mathematical and computational models that formalize the principles of vision.
Light reflected from surfaces and arriving at the imaging plane of a camera must be interpreted to be useful to a perceiving system. This interpretation is a process of inference from ambiguous and incomplete measurements, drawing on experience and knowledge. The Perceiving Systems Department focuses on uncovering the mathematical and computational principles underlying this process. This means understanding the statistics of the world (its shape, motion, material properties, etc.), modeling the imaging process (including optical blur, motion blur, noise, and discretization), and devising algorithms that convert light measurements into information about the 3D structure and motion of the world.
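The idea of inference from ambiguous, noisy measurements can be sketched with a minimal Bayesian example (purely illustrative; the quantities and numbers are invented, not the department's models): prior knowledge about the world is combined with a noisy observation, and each source is weighted by its reliability.

```python
# Minimal Bayesian inference sketch: estimate an unknown scene
# quantity (say, the depth of a surface point) from one noisy
# measurement, using prior knowledge about the world.
# All values are invented for illustration.

def posterior(prior_mean, prior_var, measurement, noise_var):
    """Posterior mean and variance for a Gaussian prior and a
    Gaussian measurement likelihood (precision-weighted fusion)."""
    w = prior_var / (prior_var + noise_var)   # trust in the measurement
    mean = prior_mean + w * (measurement - prior_mean)
    var = prior_var * noise_var / (prior_var + noise_var)
    return mean, var

# Prior belief: depth ~ N(2.0 m, 0.5); observation: 3.0 m with noise variance 0.5.
mean, var = posterior(2.0, 0.5, 3.0, 0.5)
print(mean, var)  # → 2.5 0.25
```

With equally reliable prior and measurement, the estimate lands halfway between them, and the posterior variance shrinks: the ambiguity of a single measurement is reduced by knowledge of the world's statistics.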
The researchers working in the Physical Intelligence Department aim to understand the principles of design, locomotion, control, perception, and learning of small-scale mobile robots.
The Physical Intelligence Department, founded by Prof. Metin Sitti, began its research activities in the fall of 2014 at the Max Planck Institute for Intelligent Systems. Our department aims to understand the principles of design, locomotion, control, perception, and learning of single and large numbers of small-scale mobile robots made of smart and soft materials, which serve as our physical intelligence platforms. The intelligence of such robots comes predominantly from their physical design, materials, and control, more than, or in addition to, their computational perception, learning, and control. Such physical intelligence methods are indispensable at the small scale, since small-scale mobile robots are inherently limited in computation, actuation, power, perception, and control capabilities.