TR2024-158

Memory-Based Learning of Global Control Policies from Local Controllers


    •  Nikovski, D.N., Zhong, J., Yerazunis, W.S., "Memory-Based Learning of Global Control Policies from Local Controllers", 21st International Conference on Informatics in Control, Automation and Robotics (ICINCO'24), November 2024.
      @inproceedings{Nikovski2024nov,
        author = {Nikovski, Daniel N. and Zhong, Junmin and Yerazunis, William S.},
        title = {Memory-Based Learning of Global Control Policies from Local Controllers},
        booktitle = {21st International Conference on Informatics in Control, Automation and Robotics (ICINCO'24)},
        year = 2024,
        month = nov,
        url = {https://www.merl.com/publications/TR2024-158}
      }
  • Research Areas:

    Control, Dynamical Systems, Robotics

Abstract:

The paper proposes a novel method for constructing a global control policy, valid everywhere in the state space of a dynamical system, from a set of solutions computed by differential dynamic programming for specific initial states in that space. The global controller chooses controls based on elements of the pre-computed solutions, exploiting the fact that these solutions provide not only nominal state and control trajectories from the initial states, but also a set of linear controllers that can stabilize the system around the nominal trajectories and a set of localized estimators of the optimal cost-to-go for states near those trajectories. An empirical evaluation of three variants of the algorithm on two benchmark problems demonstrates that using the cost-to-go estimators yields the best performance (lowest average cost) and often dramatically reduces the number of pre-computed solutions that must be stored in memory, which in turn speeds up control computation in real time.
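
To make the selection mechanism concrete, the sketch below illustrates one plausible reading of the cost-to-go-based variant described in the abstract: each stored DDP solution carries a nominal trajectory, time-varying feedback gains, and a local quadratic model of the optimal cost-to-go, and the global policy picks the stored point whose quadratic model predicts the lowest cost-to-go at the query state, then applies the corresponding linear controller. The names (LocalSolution, select_control) and the quadratic-model representation are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch (not the paper's code): choosing controls from a memory
    # of pre-computed DDP solutions by minimizing local quadratic cost-to-go estimates.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class LocalSolution:
        """One pre-computed DDP solution: nominal trajectory plus local models."""
        x_nom: np.ndarray   # (T, n) nominal states
        u_nom: np.ndarray   # (T, m) nominal controls
        K: np.ndarray       # (T, m, n) time-varying linear feedback gains
        V: np.ndarray       # (T,) cost-to-go values at the nominal states
        V_x: np.ndarray     # (T, n) cost-to-go gradients
        V_xx: np.ndarray    # (T, n, n) cost-to-go Hessians

    def select_control(x, memory):
        """Pick the stored (solution, time step) whose local quadratic model
        predicts the lowest cost-to-go at x, then apply its linear controller."""
        best_cost, best_u = np.inf, None
        for sol in memory:
            for t in range(sol.x_nom.shape[0]):
                dx = x - sol.x_nom[t]
                # Second-order Taylor estimate of the optimal cost-to-go around x_nom[t]
                cost = sol.V[t] + sol.V_x[t] @ dx + 0.5 * dx @ sol.V_xx[t] @ dx
                if cost < best_cost:
                    best_cost = cost
                    best_u = sol.u_nom[t] + sol.K[t] @ dx  # stabilizing local controller
        return best_u

In this reading, the memory lookup replaces any online trajectory optimization: real-time control reduces to evaluating the stored quadratic models and one matrix-vector product, which is consistent with the abstract's point that a smaller memory directly speeds up control computation.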