TR2023-118
EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation

Baichuan Huang, Jingjin Yu, Siddarth Jain, "EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), DOI: 10.1109/IROS55552.2023.10341988, October 2023, pp. 2963-2970.

BibTeX:
@inproceedings{Huang2023oct,
  author = {Huang, Baichuan and Yu, Jingjin and Jain, Siddarth},
  title = {EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year = 2023,
  pages = {2963--2970},
  month = oct,
  publisher = {IEEE},
  doi = {10.1109/IROS55552.2023.10341988},
  issn = {2153-0866},
  isbn = {978-1-6654-9190-7},
  url = {https://www.merl.com/publications/TR2023-118}
}
Abstract:
In this paper, we explore the dynamic grasping of moving objects through active pose tracking and reinforcement learning for hand-eye coordination systems. Most existing vision-based robotic grasping methods implicitly assume target objects are stationary or moving predictably. Performing grasping of unpredictably moving objects presents a unique set of challenges. For example, a pre-computed robust grasp can become unreachable or unstable as the target object moves, and motion planning must also be adaptive. In this work, we present a new approach, Eye-on-hAnd Reinforcement Learner (EARL), for enabling coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time active pose tracking and dynamic grasping of novel objects without explicit motion prediction. EARL readily addresses many thorny issues in automated hand-eye coordination, including fast-tracking of 6D object pose from vision, learning control policy for a robotic arm to track a moving object while keeping the object in the camera's field of view, and performing dynamic grasping. We demonstrate the effectiveness of our approach in extensive experiments validated on multiple commercial robotic arms in both simulations and complex real-world tasks.
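
For readers who want a concrete picture of the loop the abstract describes, the sketch below (Python, illustrative only and not code from the paper) shows one plausible structure: a wrist-mounted camera feeds a 6D pose tracker, a learned policy converts the object's pose relative to the end effector into a velocity command that keeps the object in view, and a grasp is triggered once the object is within reach. The tracker, policy, and arm interfaces and the grasp_dist threshold are hypothetical stand-ins, not names or values from EARL.

# Illustrative sketch only -- not code released with the paper. The tracker,
# policy, and arm objects are hypothetical stand-ins for the three components
# the abstract names: 6D pose tracking from the wrist camera, a learned
# tracking policy, and the arm/gripper controller.
import numpy as np


class DynamicGraspLoop:
    """Closed-loop tracking and grasping for an eye-on-hand (wrist-camera) arm."""

    def __init__(self, tracker, policy, arm, grasp_dist=0.05):
        self.tracker = tracker        # estimates the object's 6D pose from camera frames
        self.policy = policy          # learned policy: relative pose -> end-effector velocity
        self.arm = arm                # arm interface: EE pose, velocity control, gripper
        self.grasp_dist = grasp_dist  # distance (m) at which a grasp is attempted

    def step(self, camera_frame):
        """Run one control cycle; returns True once a grasp has been attempted."""
        # 1. Active pose estimation: update the object's 6D pose (a 4x4 homogeneous
        #    transform in the robot base frame) from the current wrist-camera frame.
        obj_pose = self.tracker.update(camera_frame)

        # 2. Build the policy observation as the object pose relative to the end
        #    effector, so the policy can keep the object inside the camera's view.
        ee_pose = self.arm.get_ee_pose()
        rel_pose = np.linalg.inv(ee_pose) @ obj_pose

        # 3. Grasp trigger: close the gripper once the object is within reach;
        #    otherwise keep tracking with the velocity command from the policy.
        if np.linalg.norm(rel_pose[:3, 3]) < self.grasp_dist:
            self.arm.close_gripper()
            return True
        self.arm.set_ee_velocity(self.policy.act(rel_pose))
        return False

Using the object pose relative to the end effector as the observation is one natural choice for an eye-on-hand setup, since it couples object tracking and field-of-view keeping in a single signal; the observation and action spaces actually used by EARL may differ.
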
Related News & Events
NEWS: Diego Romeres gave an invited talk in the University of Padua's seminar series on "AI in Action"
Date: April 9, 2024
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Dynamical Systems, Machine Learning, Optimization, Robotics
Brief: Diego Romeres, Principal Research Scientist and Team Leader in the Optimization and Robotics Team, was invited to speak as a guest lecturer in the seminar series on "AI in Action" in the Department of Management and Engineering at the University of Padua.
The talk, entitled "Machine Learning for Robotics and Automation," described MERL's recent research on machine learning and model-based reinforcement learning applied to robotics and automation.