After receiving my diploma in electrical engineering from TU Dresden, I joined the Max Planck Institute for Intelligent Systems in 2017. My research focuses on the control of cyber-physical systems. Cyber-physical systems are envisioned to integrate physical processes with computing and communication and to act autonomously in the real world, collaborating with each other and with humans. To realize this vision, I work on enabling provably stable and resource-efficient control of systems that are connected over (typically wireless) communication networks. Since it is impossible to foresee at design time all situations that systems acting autonomously in the real world may face, I also investigate how machine learning techniques can be leveraged to enhance their flexibility.
At the MPI, I am part of the Intelligent Control Systems Group and am supervised by Sebastian Trimpe. In addition, I am part of the Division of Decision and Control Systems at KTH Royal Institute of Technology in Stockholm, where I work with Karl Henrik Johansson, who is the co-supervisor of my thesis.
Talks and Poster Presentations
Poster: "Distributed and event-based wireless control of cyber-physical systems", PhD School on Cyber-Physical Systems, Lucca, Italy, Jun. 2017.
Talk: "Distributed and event-based wireless control of cyber-physical systems", Seminar at KTH Royal Institute of Technology, Stockholm, Sweden, Aug. 2017.
Poster: "Learning to save communication", Max Planck ETH Workshop on Learning Control, Zürich, Switzerland, Feb. 2018.
Talk: "Fast and resource-efficient control of wireless cyber-physical systems", Reglermöte (Swedish Control Conference), Stockholm, Sweden, Jun. 2018.
Poster: "Deep reinforcement learning for resource-aware control", Bosch Conference on Artificial Intelligence, Renningen, Germany, Nov. 2018.
Talk: "Feedback control goes wireless", GMA Meeting, Günzburg, Germany, Mar. 2019.
Talk: "Fast and resource-efficient control of wireless cyber-physical systems", GMA Meeting, Anif, Austria, Sep. 2019.
Poster: "Feedback control goes wireless", Digitalize in Stockholm, Stockholm, Sweden, Nov. 2019.
Talk: "Feedback control and causal identification for cyber-physical systems", Seminar at Uppsala University, Uppsala, Sweden, Dec. 2019.
Talk: "Control-guided communication: Efficient resource arbitration and allocation in multi-hop wireless control systems", IEEE Conference on Decision and Control, Nice, France, Dec. 2019.
Best paper award at the ACM/IEEE International Conference on Cyber-Physical Systems 2019.
Best demo award at the ACM/IEEE International Conference on Information Processing in Sensor Networks 2019.
Supervised Student Projects
Oleksandr Zlatov, "Deep reinforcement learning for resource-aware control", University of Tübingen.
The ability to learn is an essential aspect of future intelligent systems that are facing uncertain environments. However, the process of learning a new model or behavior often does not come for free, but involves a certain cost. For example, gathering informative data can be challenging due to physical limitations, or updating mode...
Future intelligent systems such as autonomous robots, self-driving cars, or manufacturing systems will be connected over communication networks. Facilitated by the network, the individual agents can coordinate their actions and thus achieve functionality exceeding the individual unit (for example, driving in formation or collaborati...
Cyber-physical systems (CPS) tightly integrate physical processes with computing and communication, thus, enabling emerging applications such as coordinated flight of autonomous vehicles or controlling factory automation machinery over wireless networks. The adoption of wireless technology offers unprecedented flexibility in sharing...
When learning to ride a bike, a child falls down a number of times before achieving the first success. As falling down usually has only mild consequences, it can be seen as a tolerable failure in exchange for a faster learning process, as it provides rich information about an undesired behavior. In the context of Bayesian optimization under unknown constraints (BOC), typical strategies for safe learning explore conservatively and avoid failures at all costs. On the other side of the spectrum, non-conservative BOC algorithms that allow failing may fail an unbounded number of times before reaching the optimum. In this work, we propose a novel decision maker grounded in control theory that controls the amount of risk allowed in the search as a function of a given budget of failures. Empirical validation shows that our algorithm uses the budget of failures more efficiently than state-of-the-art methods in a variety of optimization experiments, and generally achieves lower regret. In addition, we propose an original algorithm for unconstrained Bayesian optimization inspired by the notion of excursion sets in stochastic processes, upon which the failures-aware algorithm is built.
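The core idea of tying the tolerated risk to the remaining budget of failures can be illustrated with a small sketch. This is not the paper's decision maker; the linear risk schedule, function names, and thresholds below are illustrative assumptions only.

```python
# Hypothetical sketch of failure-budget-aware exploration: the tolerated
# probability of constraint violation shrinks as the failure budget is
# spent. The linear schedule and all names here are illustrative
# assumptions, not the controller proposed in the paper.

def allowed_risk(failures_so_far, budget, max_risk=0.5, min_risk=0.01):
    """Interpolate the tolerated failure probability from the remaining budget."""
    remaining = max(budget - failures_so_far, 0)
    frac = remaining / budget
    return min_risk + (max_risk - min_risk) * frac

def accept_candidate(p_failure, failures_so_far, budget):
    """Evaluate a candidate only if its predicted failure probability
    fits within the current risk allowance."""
    return p_failure <= allowed_risk(failures_so_far, budget)

# With a full budget, risky candidates are allowed; once the budget is
# nearly spent, only near-certainly-safe candidates pass.
print(accept_candidate(0.4, failures_so_far=0, budget=10))  # True
print(accept_candidate(0.4, failures_so_far=9, budget=10))  # False
```

The point of the sketch is the interface, not the schedule: early on, informative but risky evaluations are admitted (like the child's falls), while late in the run the search is forced back toward conservative, safe exploration.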
In Proceedings of the 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys), pages: 79-84, September 2019 (inproceedings)
IEEE Internet of Things Journal, 6(3):5013-5028, June 2019 (article)
The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for the design of IoT control systems because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether communication with other agents is needed or not. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions, which leave no time to reallocate freed resources (e.g., communication slots), which hence remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self-triggers that predict, at the current triggering instant, the next one; and predictive triggers that check at every time step whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole, and scalability is discussed with a multi-vehicle simulation.
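The distinction between the two trigger types can be sketched for a scalar system x_{k+1} = a·x_k plus process noise of variance q. This is an illustrative toy model, not the paper's Bayesian framework: without communication the remote estimator propagates the last sent value open-loop, so its error variance grows step by step, and a trigger fires once that variance would exceed a bound.

```python
# Toy sketch (illustrative assumption, not the paper's exact scheme):
# scalar system x_{k+1} = a*x_k + noise of variance q. Between
# communications, the remote estimate runs open loop and its error
# variance accumulates.

def error_variance(a, q, steps):
    """Variance of the open-loop prediction error after `steps` steps."""
    var = 0.0
    for _ in range(steps):
        var = a * a * var + q
    return var

def predictive_trigger(a, q, horizon, bound):
    """Checked at every step: fire now if the error variance at
    `horizon` steps ahead would exceed `bound`."""
    return error_variance(a, q, horizon) > bound

def self_trigger(a, q, bound, max_steps=100):
    """Computed once, at the current communication instant: predict the
    next instant as the first step whose error variance exceeds `bound`."""
    for k in range(1, max_steps + 1):
        if error_variance(a, q, k) > bound:
            return k
    return max_steps

print(self_trigger(a=1.0, q=0.1, bound=0.35))  # → 4
```

Both rules announce communication needs ahead of time, which is exactly what lets freed slots be reallocated instead of going unused.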
In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, pages: 97-108, April 2019 (inproceedings)
Closing feedback loops fast and over long distances is key to emerging applications; for example, robot motion control and swarm coordination require update intervals below 100 ms. Low-power wireless is preferred for its flexibility, low cost, and small form factor, especially if the devices support multi-hop communication. Thus far, however, closed-loop control over multi-hop low-power wireless has only been demonstrated for update intervals on the order of multiple seconds. This paper presents a wireless embedded system that tames imperfections impairing control performance such as jitter or packet loss, and a control design that exploits the essential properties of this system to provably guarantee closed-loop stability for linear dynamic systems. Using experiments on a testbed with multiple cart-pole systems, we are the first to demonstrate the feasibility and to assess the performance of closed-loop control and coordination over multi-hop low-power wireless for update intervals from 20 ms to 50 ms.
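Why packet loss need not destroy stability can be seen in a minimal simulation. The sketch below is an illustrative assumption, not the paper's controller or system: a scalar unstable plant with a deadbeat controller, where the actuator simply holds its previous input whenever a packet is dropped.

```python
# Minimal sketch (illustrative, not the paper's design): scalar plant
# x_{k+1} = a*x_k + u_k with deadbeat control u = -a*x. On a packet
# loss, the actuator holds the stale input; with isolated losses the
# loop still converges.

def simulate(a, x0, drops, steps):
    """Run the closed loop; `drops` is the set of time steps whose
    control packet is lost."""
    x, u = x0, 0.0
    for k in range(steps):
        if k not in drops:      # packet delivered: fresh control input
            u = -a * x
        x = a * x + u           # on a drop, the stale input is held
    return x

# Unstable plant (a = 1.2), packets lost at steps 0 and 3:
print(simulate(1.2, 1.0, drops={0, 3}, steps=6))  # → 0.0
```

Real multi-hop wireless control replaces this trivial hold strategy with a system design that bounds jitter and loss, plus a control design proven stable under those bounds; the toy only shows that occasional losses are survivable in principle.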
Proceedings of the 18th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), pages: 340-341, April 2019 (poster)
In Proceedings of the IEEE Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench), pages: 13-18, April 2018 (inproceedings)
Learning robot controllers by minimizing a black-box objective cost using Bayesian optimization (BO) can be time-consuming and challenging. It is very often the case that some roll-outs result in failure behaviors, causing premature experiment termination. In such cases, the designer is forced to decide on heuristic cost penalties because the acquired data is often scarce or not comparable with that of the stable policies. To overcome this, we propose a Bayesian model that captures exactly what we know about the cost of unstable controllers prior to data collection: nothing, except that it should be a somewhat large number. The resulting Bayesian model, approximated with a Gaussian process, predicts high cost values in regions where failures are likely to occur. In this way, the model guides the BO exploration toward regions of stability. We demonstrate the benefits of the proposed model in several illustrative and statistical synthetic benchmarks, and also in experiments on a real robotic platform. In addition, we propose and experimentally validate a new BO method to account for unknown constraints. This method extends Max-Value Entropy Search, a recent information-theoretic approach for unconstrained global optimization.
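The "nothing, except that it should be a somewhat large number" idea can be sketched with a toy Gaussian process. All modeling choices below (RBF kernel, constant high prior mean, two observations, hand-rolled 2x2 solve) are illustrative assumptions, not the paper's model: stable roll-outs contribute low-cost observations that pull the posterior down locally, while a failed roll-out contributes no observation, so the prediction near it reverts to the high prior mean.

```python
import math

# Toy GP sketch (assumptions: RBF kernel, constant high prior mean,
# exactly two stable observations). Failures add no data, so the
# posterior mean near them stays at the high prior, steering BO back
# toward stable regions.

def rbf(x1, x2, ls=0.5):
    """Squared-exponential kernel with length scale `ls`."""
    return math.exp(-0.5 * ((x1 - x2) / ls) ** 2)

def gp_mean(x_star, xs, ys, prior_mean=10.0, noise=1e-6):
    """Posterior mean for two observations, via a direct 2x2 solve."""
    k11 = rbf(xs[0], xs[0]) + noise
    k22 = rbf(xs[1], xs[1]) + noise
    k12 = rbf(xs[0], xs[1])
    det = k11 * k22 - k12 * k12
    r0, r1 = ys[0] - prior_mean, ys[1] - prior_mean
    a0 = (k22 * r0 - k12 * r1) / det
    a1 = (k11 * r1 - k12 * r0) / det
    return prior_mean + rbf(x_star, xs[0]) * a0 + rbf(x_star, xs[1]) * a1

# Two stable roll-outs with low cost near x = 0; a failure occurred at
# x = 3 and contributed no observation.
xs, ys = [0.0, 0.5], [1.0, 1.2]
print(gp_mean(0.25, xs, ys))  # low: inside the stable region
print(gp_mean(3.0, xs, ys))   # high: reverts to the prior mean
```

This removes the need for hand-tuned failure penalties: the prior mean itself encodes "unstable controllers are expensive" without fabricating a numeric cost for each crash.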
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.