What do forest fires, disease epidemics, robot swarms, and opinion dynamics have in common? The same modeling framework can describe all of these spatial processes, along with many other applications. Developing appropriate estimation and control methods for this framework is therefore critical to addressing natural disasters and other important phenomena in the future.
In his PhD work, Ravi Haksar is developing state estimation and control methods for this modeling framework, called graph-based Markov decision processes (GMDPs). These methods use either a single autonomous agent making decisions (centralized) or a group of cooperative autonomous agents working together to complete a task (distributed). A major focus of this work is creating methods that do not depend on specific environment parameters or on the total number of agents available, so that the solution is scalable. In other words, changing the problem definition does not require re-computing an expensive solution.
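To make the framework concrete, the following is a minimal, illustrative sketch of the kind of local dynamics a GMDP captures, using forest fire spread on a grid as the example. All state names, parameter values, and the spread rule here are assumptions chosen for illustration, not the model used in the actual work.

```python
import random

# Toy per-cell states for a spatial Markov process on a grid graph.
HEALTHY, BURNING, BURNT = 0, 1, 2

def neighbors(i, j, n):
    """4-connected neighbors of cell (i, j) on an n x n grid."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield i + di, j + dj

def step(grid, p_spread=0.3, p_burnout=0.1, rng=random):
    """One Markov transition: each cell's next state depends only on
    its own state and its graph neighbors' states (local dynamics)."""
    n = len(grid)
    nxt = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == HEALTHY:
                # Each burning neighbor can independently ignite this cell.
                for ni, nj in neighbors(i, j, n):
                    if grid[ni][nj] == BURNING and rng.random() < p_spread:
                        nxt[i][j] = BURNING
                        break
            elif grid[i][j] == BURNING and rng.random() < p_burnout:
                nxt[i][j] = BURNT
    return nxt

# Start a fire in the center of a 5x5 forest and simulate a few steps.
grid = [[HEALTHY] * 5 for _ in range(5)]
grid[2][2] = BURNING
for _ in range(10):
    grid = step(grid)
```

The key structural feature, which estimation and control methods can exploit, is that each node's transition probability depends only on its local neighborhood, not on the global state, so the dynamics are defined independently of the grid size or the number of agents.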