TR2019-146
Inverse Learning for Human-Adaptive Motion Planning
- Menner, M., Berntorp, K., Di Cairano, S., "Inverse Learning for Human-Adaptive Motion Planning", IEEE Conference on Decision and Control (CDC), DOI: 10.1109/CDC40024.2019.9030020, December 2019.
@inproceedings{Menner2019dec,
  author = {Menner, Marcel and Berntorp, Karl and Di Cairano, Stefano},
  title = {Inverse Learning for Human-Adaptive Motion Planning},
  booktitle = {IEEE Conference on Decision and Control (CDC)},
  year = 2019,
  month = dec,
  doi = {10.1109/CDC40024.2019.9030020},
  url = {https://www.merl.com/publications/TR2019-146}
}
- "Inverse Learning for Human-Adaptive Motion Planning", IEEE Conference on Decision and Control (CDC), DOI: 10.1109/CDC40024.2019.9030020, December 2019.
-
MERL Contact: Stefano Di Cairano

Research Areas:
Abstract:
This paper presents a method for inverse learning of a control objective defined in terms of requirements and their probability distribution. The probability distribution characterizes tolerated deviations from the deterministic requirements; it is modeled as Gaussian and is learned from data using likelihood maximization. Further, this paper introduces both parametrized requirements for motion planning in autonomous driving applications and methods for their estimation from demonstrations. Human-in-the-loop simulations with four drivers suggest that human motion planning can be modeled with the considered probabilistic control objective, and that the inverse learning methods in this paper enable more natural and personalized automated driving.
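The paper's estimation procedure for parametrized motion-planning requirements is more elaborate than what follows; as a minimal illustrative sketch of the likelihood-maximization idea in the abstract, the snippet below fits a Gaussian model of tolerated deviations from deterministic requirements to demonstration data. For an unconstrained Gaussian the maximum-likelihood estimates are available in closed form (sample mean and covariance). All variable names and the synthetic data are hypothetical, not taken from the paper.

import numpy as np

# Hypothetical demonstration data: deviations of observed trajectories from
# deterministic requirements (e.g., lane-center offset, speed error),
# one row per time step. Purely synthetic for illustration.
rng = np.random.default_rng(0)
deviations = rng.normal(loc=[0.1, -0.2], scale=[0.3, 0.5], size=(500, 2))

# Closed-form maximum-likelihood estimates for a Gaussian deviation model:
# sample mean and (biased) sample covariance.
mu_hat = deviations.mean(axis=0)
sigma_hat = np.cov(deviations, rowvar=False, bias=True)

# Average log-likelihood of the demonstrations under the fitted Gaussian,
# as a sanity check on the fit.
d = deviations.shape[1]
centered = deviations - mu_hat
inv_sigma = np.linalg.inv(sigma_hat)
log_det = np.linalg.slogdet(sigma_hat)[1]
quad = np.einsum("ni,ij,nj->n", centered, inv_sigma, centered)
avg_ll = -0.5 * (d * np.log(2 * np.pi) + log_det + quad).mean()

print("mu_hat:", mu_hat)
print("sigma_hat:\n", sigma_hat)
print("avg log-likelihood:", avg_ll)

In this reading, mu_hat and sigma_hat characterize how far a given driver tolerates deviating from each requirement, which is what would let a planner personalize its behavior to that driver.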
Related News & Events
- NEWS: MERL researchers presented 8 papers at the Conference on Decision and Control (CDC)
  Date: December 11, 2019 - December 13, 2019
  Where: Nice, France
  MERL Contacts: Mouhacine Benosman; Scott A. Bortoff; Ankush Chakrabarty; Stefano Di Cairano
  Research Areas: Control, Machine Learning, Optimization
  Brief: At the Conference on Decision and Control, MERL presented 8 papers on subjects including estimation for thermal-fluid models and transportation networks, analysis of HVAC systems, extremum seeking for multi-agent systems, reinforcement learning for vehicle platoons, and learning with applications to autonomous vehicles.