TR2024-178

A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations


    •  Ozcan, E.C., Giammarino, V., Queeney, J., Paschalidis, I.C., "A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations", IEEE Conference on Decision and Control (CDC), December 2024.
      @inproceedings{Ozcan2024dec,
        author = {Ozcan, Erhan Can and Giammarino, Vittorio and Queeney, James and Paschalidis, Ioannis Ch.},
        title = {A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations},
        booktitle = {IEEE Conference on Decision and Control (CDC)},
        year = {2024},
        month = dec,
        url = {https://www.merl.com/publications/TR2024-178}
      }
Research Areas:

    Dynamical Systems, Machine Learning, Signal Processing

Abstract:

This paper investigates how to incorporate expert observations (without explicit information on expert actions) into a deep reinforcement learning setting to improve sample efficiency. First, we formulate an augmented policy loss combining a maximum entropy reinforcement learning objective with a behavioral cloning loss that leverages a forward dynamics model. Then, we propose an algorithm that automatically adjusts the weights of each component in the augmented loss function. Experiments on a variety of continuous control tasks demonstrate that the proposed algorithm outperforms various benchmarks by effectively utilizing available expert observations.
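
As a rough illustration (not the authors' code), the sketch below shows one plausible instantiation of the augmented policy loss described in the abstract, written in PyTorch: a SAC-style maximum entropy term computed on states from the agent's own data, plus a behavioral cloning term that, lacking expert actions, pushes the policy's action at an expert state through a learned forward dynamics model and matches the predicted next state to the observed expert next state. All class names (GaussianPolicy, QNetwork, ForwardDynamics) and the fixed bc_weight coefficient are illustrative assumptions; in the paper, the weights of the two components are adjusted automatically.

    # Hypothetical sketch of the augmented policy loss; names and
    # architectures are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn

    class GaussianPolicy(nn.Module):
        """Simple Gaussian policy with reparameterized sampling."""
        def __init__(self, state_dim, action_dim, hidden=256):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, action_dim)
            self.log_std = nn.Linear(hidden, action_dim)

        def sample(self, state):
            h = self.trunk(state)
            std = self.log_std(h).clamp(-5.0, 2.0).exp()
            dist = torch.distributions.Normal(self.mu(h), std)
            action = dist.rsample()  # reparameterized for pathwise gradients
            log_prob = dist.log_prob(action).sum(-1, keepdim=True)
            return action, log_prob

    class QNetwork(nn.Module):
        """State-action value estimate Q(s, a)."""
        def __init__(self, state_dim, action_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    class ForwardDynamics(nn.Module):
        """Learned forward model f(s, a) -> predicted next state."""
        def __init__(self, state_dim, action_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, state_dim))

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def augmented_policy_loss(policy, q_net, dynamics, alpha, bc_weight,
                              rl_states, expert_states, expert_next_states):
        # Maximum entropy RL term (SAC-style): E[alpha * log pi(a|s) - Q(s, a)].
        actions, log_probs = policy.sample(rl_states)
        rl_loss = (alpha * log_probs - q_net(rl_states, actions)).mean()

        # Behavioral cloning term from observations only: no expert actions
        # are available, so the policy's action at an expert state is passed
        # through the dynamics model and the prediction is matched to the
        # observed expert next state.
        expert_actions, _ = policy.sample(expert_states)
        predicted_next = dynamics(expert_states, expert_actions)
        bc_loss = nn.functional.mse_loss(predicted_next, expert_next_states)

        # The paper adjusts the weights of the two components automatically
        # during training; here bc_weight is simply a supplied coefficient.
        return rl_loss + bc_weight * bc_loss

For example, with state_dim = 8 and action_dim = 2, calling augmented_policy_loss(policy, q_net, dynamics, alpha=0.2, bc_weight=1.0, rl_states=torch.randn(32, 8), expert_states=torch.randn(32, 8), expert_next_states=torch.randn(32, 8)).backward() produces policy gradients; in practice the dynamics model would be fit on agent transitions before (or alongside) optimizing this loss.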