The Autonomous Motion Department focuses on research into intelligent systems that can move, perceive, and learn from experience.
We are interested in understanding how autonomous movement systems can bootstrap themselves into competent behavior by starting from a relatively simple set of algorithms and pre-structuring, and then learning from interaction with the environment. Using instructions from a teacher to get started can add useful prior information. Trial-and-error learning to improve movement and perceptual skills is another domain of our research. We are interested in investigating such perception-action-learning loops in biological and robotic systems, which can range in scale from nano systems (cells, nano-robots) to macro systems (humans and humanoid robots).
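The idea of improving a skill through trial and error can be illustrated with a minimal sketch. This is a toy example, not any system from the department: the reward function, parameter vector, and hill-climbing scheme below are all assumptions chosen for brevity.

```python
import random

def reward(params):
    """Toy reward (hypothetical): highest when the movement parameters
    hit an unknown target configuration."""
    target = [0.5, -0.3]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def trial_and_error(steps=2000, noise=0.1, seed=0):
    """Simple stochastic hill climbing: perturb the parameters (a trial),
    keep the change only if the reward improves (discard the error)."""
    rng = random.Random(seed)
    params = [0.0, 0.0]
    best = reward(params)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, noise) for p in params]  # the "trial"
        r = reward(candidate)
        if r > best:  # keep improvements, discard failed trials
            params, best = candidate, r
    return params, best
```

Even this crude loop converges toward the target from scratch, which is the essence of bootstrapping competence from interaction rather than from a detailed pre-programmed model.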
The problems studied in the department can be subsumed under the heading of empirical inference. This term refers to inference performed on the basis of empirical data.
The type of inference can vary, including for instance inductive learning (estimating models, such as functional dependencies, that generalize to novel data sampled from the same underlying distribution) or the inference of causal structures from statistical data (leading to models that provide insight into the underlying mechanisms and make predictions about the effects of interventions). Likewise, the type of empirical data can vary, ranging from sparse experimental measurements (e.g., microarray data) to visual patterns. Our department conducts theoretical, algorithmic, and experimental studies to understand the problem of empirical inference.
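As a minimal sketch of inductive inference, consider estimating a functional dependency from noisy samples and checking that it generalizes to fresh data from the same distribution. The linear model, noise level, and sample sizes below are illustrative assumptions, not the department's methods.

```python
import random

def sample(n, rng, a=2.0, b=-1.0, noise=0.1):
    """Draw n noisy samples of the (assumed) dependency y = a*x + b."""
    xs = [rng.uniform(-1, 1) for _ in range(n)]
    ys = [a * x + b + rng.gauss(0, noise) for x in xs]
    return xs, ys

def fit_line(xs, ys):
    """Ordinary least squares for a one-dimensional linear model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

rng = random.Random(0)
train_x, train_y = sample(100, rng)
a_hat, b_hat = fit_line(train_x, train_y)

# Generalization: evaluate on novel data from the same distribution.
test_x, test_y = sample(100, rng)
mse = sum((a_hat * x + b_hat - y) ** 2
          for x, y in zip(test_x, test_y)) / len(test_x)
```

The test error here approaches the noise floor, which is what "generalizing to novel data sampled from the same underlying distribution" means in this simple setting.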
We seek mathematical and computational models that formalize the principles of vision.
Light reflected from surfaces and arriving at the imaging plane of a camera must be interpreted to be useful to a perceiving system. This interpretation is a process of inference from ambiguous and incomplete measurements, using experience and knowledge. The Perceiving Systems Department is focused on uncovering the mathematical and computational principles underlying this process. This means understanding the statistics of the world (its shape, motion, material properties, etc.), modeling the imaging process (including optical blur, motion blur, noise, discretization), and devising algorithms to convert light measurements into information about the 3D structure and motion of the world.
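The role of experience in interpreting ambiguous measurements can be sketched with a toy Gaussian model: a noisy depth measurement is combined with prior knowledge of typical scene depths, and the posterior is the "interpretation". All quantities and numbers below are illustrative assumptions, not a model used by the department.

```python
def posterior_depth(z, sigma_meas, mu_prior, sigma_prior):
    """Posterior mean and variance for a scalar depth with a Gaussian
    prior (experience) and a Gaussian measurement model (noisy data).
    For Gaussians the combination has a closed form."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_meas ** 2)  # trust in data
    mean = w * z + (1 - w) * mu_prior
    var = (sigma_prior ** 2 * sigma_meas ** 2) / (sigma_prior ** 2 + sigma_meas ** 2)
    return mean, var

# A very uncertain measurement (sigma_meas=4.0) is pulled strongly toward
# prior experience (mu_prior=3.0); the posterior is also less uncertain
# than either source alone.
est, var = posterior_depth(z=10.0, sigma_meas=4.0, mu_prior=3.0, sigma_prior=1.0)
```

The same principle, applied to rich models of the world's shape, motion, and materials, is what turns ambiguous light measurements into usable 3D interpretations.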