TR2020-063
Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements
- "Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements", Robotics and Automation Letters, DOI: 10.1109/LRA.2020.2977255, Vol. 5, No. 2, pp. 3548-3555, May 2020.
@article{Romeres2020may,
  author  = {Romeres, Diego and Dalla Libera, Alberto and Jha, Devesh K. and Yerazunis, William S. and Nikovski, Daniel N.},
  title   = {Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements},
  journal = {Robotics and Automation Letters},
  year    = 2020,
  volume  = 5,
  number  = 2,
  pages   = {3548--3555},
  month   = may,
  doi     = {10.1109/LRA.2020.2977255},
  issn    = {2377-3766},
  url     = {https://www.merl.com/publications/TR2020-063}
}
Abstract:
In this paper, we propose a derivative-free model learning framework for Reinforcement Learning (RL) algorithms based on Gaussian Process Regression (GPR). In many mechanical systems, only positions can be measured by the sensing instruments. Therefore, instead of representing the system state, as physics suggests, by a collection of positions, velocities, and accelerations, we define the state as a set of past position measurements. However, the equations of motion derived from physical first principles cannot be applied directly in this framework, since they are functions of velocities and accelerations. For this reason, we introduce a novel derivative-free physically inspired kernel, which can easily be combined with nonparametric derivative-free Gaussian Process models. Tests performed on two real platforms show that the considered state definition, combined with the proposed model, improves estimation performance and data efficiency with respect to traditional models based on GPR. Finally, we validate the proposed framework by solving two RL control problems on two real robotic systems.
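The derivative-free state definition described in the abstract can be illustrated with a minimal sketch: the state is a window of the k most recent position measurements, and a GP is trained to predict the next position directly, with no velocity or acceleration estimates. This is a simplified illustration only, assuming scikit-learn; a plain RBF kernel stands in for the paper's physically-inspired derivative-free kernel, and the 1-DoF sinusoidal system and all parameter values are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical 1-DoF system: noisy position measurements of a sinusoidal trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
q = np.sin(t) + 0.01 * rng.standard_normal(t.size)  # positions only, no velocities

# Derivative-free state: the k most recent position measurements.
k = 3
N = q.size
X = np.column_stack([q[i : N - k + i] for i in range(k)])  # row j = (q_j, ..., q_{j+k-1})
y = q[k:]                                                   # target: next position q_{j+k}

# Plain RBF kernel here; the paper instead uses a physically-inspired
# derivative-free kernel combined with nonparametric GP models.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=1.0),
    alpha=1e-4,
    normalize_y=True,
)
gp.fit(X[:400], y[:400])                           # train on the first 400 transitions
pred, std = gp.predict(X[400:], return_std=True)   # one-step-ahead position predictions
```

In this formulation the model never needs velocities or accelerations obtained by numerical differentiation of position signals, which would amplify measurement noise.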
Related News & Events
-
NEWS Diego Romeres gave an invited talk at the Autonomy Talks at ETH, Zurich. Date: February 15, 2021
Where: Virtual
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Machine Learning, Robotics
Brief: Diego Romeres, a Principal Research Scientist in MERL's Data Analytics group, gave the invited talk "Reinforcement Learning for Robotics" at the Autonomy Talks series organized at ETH, Zurich. The presentation discussed directions for applying model-based reinforcement learning (MBRL) algorithms to real-world applications, together with a novel MBRL algorithm called MC-PILCO. The presentation is available at https://www.youtube.com/watch?v=wYgbgMa4j-s.
-
NEWS Diego Romeres gave an invited talk on modeling and control of physical systems at the MIT workshop "ICRAxMIT". Date: June 9, 2020
Where: ICRAxMIT
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Data Analytics, Dynamical Systems, Machine Learning, Robotics
Brief: Diego Romeres, a Principal Research Scientist in MERL's Data Analytics group, gave an invited talk at the ICRAxMIT workshop organized at MIT. The talk described a derivative-free framework for modeling and controlling robotic systems that does not require velocity or acceleration measurements. The proposed approach is validated on two real robotic systems.
Related Publication
@article{Romeres2020feb,
  author  = {Romeres, Diego and Dalla Libera, Alberto and Jha, Devesh K. and Yerazunis, William S. and Nikovski, Daniel N.},
  title   = {Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements},
  journal = {arXiv},
  year    = 2020,
  month   = feb,
  doi     = {10.1109/LRA.2020.2977255},
  issn    = {2377-3766},
  url     = {https://arxiv.org/abs/2002.10621}
}