I am currently building up my research group on Autonomous Learning. For that, I am looking for motivated PhD students and Postdocs. Details can be found on my website.
I am interested in autonomous learning, that is, how an embodied agent can determine what to learn, how to learn, and how to judge its learning success. I use information theory and dynamical systems theory to formulate generic intrinsic motivations that lead to coherent behavioral exploration, much like playful behavior. Furthermore, I develop methods to quantify autonomous behavior in robots and animals using information theory and manifold learning. More recently, I have also been working on machine learning methods particularly suited to internal models and to learning from sequences.
Before joining the MPI in Tübingen, I was a postdoctoral fellow at IST Austria in the groups of Christoph Lampert and Gašper Tkačik, after a previous postdoc at the Max Planck Institute for Mathematics in the Sciences in Leipzig.
Proceedings of the National Academy of Sciences, 112(45):E6224-E6232, 2015 (article)
Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive rhythmic behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule. Rather, they arise from the underlying mechanism of spontaneous symmetry breaking, which is due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.
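The exact DEP rule is specified in the paper; as a rough, hypothetical illustration of a derivative-based Hebbian update of this flavor, the sketch below correlates changes of sensor values over time and feeds the result back into a one-layer tanh controller. All constants, the toy "world", and the variable names are assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                             # number of sensors = number of motors (illustrative)
C = rng.normal(0, 0.1, (n, n))    # controller weight matrix
h = np.zeros(n)                   # controller bias
tau = 50.0                        # time scale of the plasticity (assumed)

def controller(x):
    """One-layer tanh controller: motor command from sensor values."""
    return np.tanh(C @ x + h)

x_prev = rng.normal(0, 0.1, n)
x = rng.normal(0, 0.1, n)

for step in range(1000):
    y = controller(x)
    # stand-in for the physical system: sensors echo motors with noise
    x_new = 0.9 * y + 0.05 * rng.normal(size=n)
    # time derivatives of the sensor values (finite differences)
    dx = x_new - x
    dx_prev = x - x_prev
    # derivative-based Hebbian update (DEP-like, schematic): correlate the
    # current sensor change with the previous one, with a decay term that
    # keeps the weights bounded
    C += (np.outer(dx, dx_prev) - C) / tau
    x_prev, x = x, x_new
```

Because the update averages outer products of bounded sensor derivatives, the weights stay finite while continually tracking the correlations induced by the body and its environment.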
One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviours. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information of the past and future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviours, because a maximisation of the PI corresponds to an exploration of morphology- and environment-dependent behavioural regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a large speed-up be achieved, at the cost of a loss in asymptotic performance.
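The one-step predictive information here is the mutual information between consecutive sensor values. A minimal plug-in estimate for a scalar stream might look as follows; the bin count, the histogram estimator, and the example streams are illustrative choices, not the paper's estimator:

```python
import numpy as np

def predictive_information(stream, n_bins=8):
    """Plug-in estimate of the one-step predictive information
    I(s_t; s_{t+1}) of a scalar sensor stream, in bits."""
    s = np.asarray(stream, dtype=float)
    # discretize the stream into equal-width bins
    edges = np.linspace(s.min(), s.max(), n_bins + 1)
    d = np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)
    past, future = d[:-1], d[1:]
    # joint distribution of (s_t, s_{t+1}) and its marginals
    joint = np.zeros((n_bins, n_bins))
    for p, f in zip(past, future):
        joint[p, f] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] *
                        np.log2(joint[nz] / (px[:, None] * py[None, :])[nz])))

# a predictable sine stream carries more PI than white noise
t = np.linspace(0, 20 * np.pi, 5000)
rng = np.random.default_rng(1)
pi_sine = predictive_information(np.sin(t))
pi_noise = predictive_information(rng.normal(size=5000))
```

A smooth, predictable stream yields a high PI, while an i.i.d. noise stream yields a value near zero, which is the intuition behind using PI maximisation as an exploration drive.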
Advances in Complex Systems, 16(02n03):1350001, 2013 (article)
Self-organizing processes are crucial for the development of living beings. Practical applications in robots may benefit from the self-organization of behavior, e.g., to increase fault tolerance and enhance flexibility, provided that external goals can also be achieved. We present results on the guidance of self-organizing control by visual target stimuli and show a remarkable robustness to sensorimotor disruptions. In a proof-of-concept study, an autonomous wheeled robot learns an object-finding and ball-pushing task from scratch within a few minutes in continuous domains. The robustness is demonstrated by the rapid recovery of the performance after severe changes of the sensor configuration.
Autonomous robots may become our closest companions in the near future. While the technology for physically building such machines is already available today, a problem lies in the generation of the behavior for such complex machines. Nature proposes a solution: young children and higher animals learn to master their complex brain-body systems by playing. Can this be an option for robots? How can a machine be playful? The book provides answers by developing a general principle, homeokinesis (the dynamical symbiosis between brain, body, and environment), which is shown to drive robots to self-determined, individual development in a playful and obviously embodiment-related way: a dog-like robot starts playing with a barrier, eventually jumping or climbing over it; a snakebot develops coiling and jumping modes; humanoids develop climbing behaviors when fallen into a pit, or engage in wrestling-like scenarios when encountering an opponent. The book also develops guided self-organization, a new method that helps to make the playful machines fit for fulfilling tasks in the real world.
In Advances in Artificial Life, ECAL 2011, pages: 506-513, (Editors: Tom Lenaerts and Mario Giacobini and Hugues Bersini and Paul Bourgine and Marco Dorigo and René Doursat), MIT Press, 2011 (incollection)
Ideally, sensory information is a robot's only source of information about the world. We consider an algorithm for the self-organization of a controller. At short timescales the controller is merely reactive, but the parameter dynamics and the acquisition of knowledge by an internal model lead to seemingly purposeful behavior on longer timescales. As a paradigmatic example, we study the simulation of an underactuated snake-like robot. By interacting with the real physical system formed by the robotic hardware and the environment, the controller achieves a sensitive and body-specific actuation of the robot.
Dynamical systems offer intriguing possibilities as a substrate for the generation of behavior because of their rich behavioral complexity. However, this complexity, together with the largely covert relation between the parameters and the behavior of the agent, is also the main hindrance to the goal-oriented design of a behavior system. This paper presents a general approach to the self-regulation of dynamical systems so that the design problem is circumvented. We consider the controller (a neural network) as the mediator for changes in the sensor values over time and define a dynamics for the parameters of the controller by maximizing the dynamical complexity of the sensorimotor loop under the condition that the consequences of the actions taken are still predictable. This very general principle is given a concrete mathematical formulation and is implemented in an extremely robust and versatile algorithm for the parameter dynamics of the controller. We consider two different applications: a mechanical device called the rocking stamper and ODE simulations of a "snake" with five degrees of freedom. In these and many other examples studied, we observed various behavioral modes of high dynamical complexity.
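The interplay described above can be caricatured in a scalar sensorimotor loop: an internal forward model learns to predict the sensor consequences of the motor command, while the controller parameter is driven toward higher loop sensitivity, damped by the prediction error so that the consequences stay predictable. This is a schematic sketch, not the paper's actual parameter dynamics; all constants and the toy world model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# scalar loop (illustrative): sensor x, motor y = tanh(c * x),
# world response x' = a * y + noise; the forward model predicts x' = A * y
a, A, c = 0.8, 0.5, 0.2
eps_A, eps_c = 0.05, 0.005
x = 0.1

for _ in range(2000):
    y = np.tanh(c * x)
    x_next = a * y + 0.01 * rng.normal()
    err = x_next - A * y          # prediction error of the internal model
    A += eps_A * err * y          # model learns to predict the consequences
    # loop sensitivity: derivative of the predicted next sensor w.r.t. c
    dx_dc = A * (1.0 - y * y) * x
    # parameter dynamics (schematic): push toward higher sensitivity, with a
    # decay weighted by the prediction error to keep the loop predictable
    c += eps_c * (dx_dc - err * err * c)
    x = x_next
```

The tanh saturation makes the sensitivity term self-limiting, so the parameter dynamics remain bounded while the model converges toward the true loop gain.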
In Proc. From Animals to Animats 9, SAB 2006, 4095, pages: 406-421, LNCS, Springer, 2006 (inproceedings)
Self-organization and the phenomenon of emergence play an essential role in living systems and pose a challenge to artificial life systems. This is not only because such systems become more lifelike, but also because self-organization may help reduce the design effort in creating complex behavior systems. The present paper studies self-exploration based on a general approach to the self-organization of behavior, which has been developed and tested on various examples in recent years. This is a step towards autonomous early robot development. We consider agents under the close sensorimotor coupling paradigm with a certain cognitive ability realized by an internal forward model. Starting from tabula rasa initial conditions, we overcome the bootstrapping problem and show emerging self-exploration. In addition, we analyze the effect of limited actions, which lead to deprivation of the world model. We show that our paradigm explicitly avoids this by producing purposive actions in a natural way. Examples are given using a simulated simple wheeled robot and a spherical robot driven by shifting internal masses.
In Computational Intelligence for Modelling, Control and Automation, CIMCA 2005, 2, pages: 252-257, Washington, DC, USA, 2005 (inproceedings)
Despite the tremendous progress in robotic hardware and in both sensing and computing capabilities, the performance of contemporary autonomous robots still falls far below that of simple animals. This has triggered an intensive search for alternative approaches to the control of robots. The present paper exemplifies a general approach to the self-organization of behavior that has been developed and tested on various examples in recent years. We apply this approach to an underactuated snake-like artifact with a complex physical behavior that is not known to the controller. Due to the weak forces available, the controller has, so to speak, to develop a kind of feeling for the body, which is seen to emerge from our approach in a natural way, with meandering and rotational collective modes observed in computer simulation experiments.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.