
Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We create a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies parameterized by a goal. We validate the model on a toy example of navigating in a grid world with different target positions and on a block stacking task with different target structures for the final tower. In contrast to prior work, our policies show better generalization across different goals.
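To make the goal parameterization concrete, the sketch below shows one plausible way such a policy could be set up: the goal (e.g., a target position in the grid world) is fed to the network alongside the state observation, so a single set of parameters serves all goals. This is an illustrative PyTorch example under assumed input dimensions and names (GoalParameterizedPolicy, state_dim, goal_dim), not the architecture published in the paper.

```python
# Illustrative sketch of a goal-parameterized policy (not the paper's exact model).
# Core idea: concatenate the goal with the state so one policy generalizes across goals.
import torch
import torch.nn as nn


class GoalParameterizedPolicy(nn.Module):
    def __init__(self, state_dim: int, goal_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        # State and goal are concatenated into a single input vector.
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # Returns action scores (logits or Q-values, depending on the learning rule used).
        return self.net(torch.cat([state, goal], dim=-1))


# Example: grid-world navigation where the target cell changes every episode.
policy = GoalParameterizedPolicy(state_dim=2, goal_dim=2, num_actions=4)
state = torch.tensor([[0.0, 0.0]])   # agent position
goal = torch.tensor([[4.0, 3.0]])    # target position, provided per trial
action = policy(state, goal).argmax(dim=-1)
```

Because the goal enters the policy as an input rather than being baked into the reward alone, a trained network can be queried with a new target at test time, which is the generalization across goals the abstract refers to.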

Author(s): Wenbin Li and Jeannette Bohg and Mario Fritz
Journal: arXiv
Year: 2017
Month: November

Department(s): Autonomous Motion
Bibtex Type: Article (article)

State: Submitted

Links: arXiv

BibTeX

@article{2018_ICML_lbf,
  title = {Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning},
  author = {Li, Wenbin and Bohg, Jeannette and Fritz, Mario},
  journal = {arXiv},
  month = nov,
  year = {2017},
  month_numeric = {11}
}