TR2020-068
MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps
Wu, P., Chen, S., Metaxas, D.N., "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), DOI: 10.1109/CVPR42600.2020.01140, June 2020, pp. 11382-11392.
@inproceedings{Wu2020jun,
  author = {Wu, Pengxiang and Chen, Siheng and Metaxas, Dimitris N.},
  title = {MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = 2020,
  pages = {11382--11392},
  month = jun,
  doi = {10.1109/CVPR42600.2020.01140},
  url = {https://www.merl.com/publications/TR2020-068}
}
- "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), DOI: 10.1109/CVPR42600.2020.01140, June 2020, pp. 11382-11392.
-
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Abstract:
The ability to reliably perceive the environmental states, particularly the existence of objects and their motion behavior, is crucial for autonomous driving. In this work, we propose an efficient deep model, called MotionNet, to jointly perform perception and motion prediction from 3D point clouds. MotionNet takes a sequence of LiDAR sweeps as input and outputs a bird's eye view (BEV) map, which encodes the object category and motion information in each grid cell. The backbone of MotionNet is a novel spatio-temporal pyramid network, which extracts deep spatial and temporal features in a hierarchical fashion. To enforce the smoothness of predictions over both space and time, the training of MotionNet is further regularized with novel spatial and temporal consistency losses. Extensive experiments show that the proposed method overall outperforms the state of the art, including the latest scene-flow- and 3D-object-detection-based methods. This indicates the potential of the proposed method to serve as a backup to bounding-box-based systems and to provide complementary information to the motion planner in autonomous driving.
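For orientation, below is a minimal NumPy sketch of two ingredients the abstract names: discretizing a sequence of LiDAR sweeps into binary BEV occupancy grids (with the height axis mapped to channels), and a simple spatial smoothness penalty in the spirit of the consistency losses. The function names, grid ranges, voxel sizes, and exact loss form are illustrative assumptions, not the paper's implementation.

import numpy as np

def points_to_bev(points, x_range=(-32.0, 32.0), y_range=(-32.0, 32.0),
                  z_range=(-3.0, 2.0), voxel=(0.25, 0.25, 0.5)):
    """Voxelize one LiDAR sweep (an (N, 3) array) into a binary BEV grid.

    The height axis becomes the channel dimension, so the output has shape
    (H, W, C) and sweeps can be stacked along a new time axis. Ranges and
    voxel sizes here are illustrative, not the paper's configuration.
    """
    # Keep only points inside the region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[mask]

    # Map metric coordinates to integer voxel indices.
    ix = ((pts[:, 0] - x_range[0]) / voxel[0]).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / voxel[1]).astype(np.int64)
    iz = ((pts[:, 2] - z_range[0]) / voxel[2]).astype(np.int64)

    H = int((x_range[1] - x_range[0]) / voxel[0])
    W = int((y_range[1] - y_range[0]) / voxel[1])
    C = int((z_range[1] - z_range[0]) / voxel[2])
    grid = np.zeros((H, W, C), dtype=np.float32)
    grid[ix, iy, iz] = 1.0  # binary occupancy: a cell is on if any point falls in it
    return grid

def sweeps_to_input(sweeps):
    """Stack T synchronized sweeps into a (T, H, W, C) input tensor."""
    return np.stack([points_to_bev(s) for s in sweeps], axis=0)

def spatial_smoothness(motion):
    """Total-variation-style penalty on a per-cell motion field.

    motion: (H, W, 2) array of predicted per-cell displacements. This is
    a generic smoothness regularizer in the spirit of the paper's spatial
    consistency loss, not its exact formulation.
    """
    dx = np.abs(motion[1:, :, :] - motion[:-1, :, :]).mean()
    dy = np.abs(motion[:, 1:, :] - motion[:, :-1, :]).mean()
    return dx + dy

With these settings, a five-sweep input yields a (5, 256, 256, 10) tensor; a spatio-temporal backbone such as the paper's pyramid network would consume a stack of this kind.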
Software & Data Downloads
Related News & Events
NEWS: MERL researchers presenting four papers and organizing two workshops at CVPR 2020 conference
Date: June 14, 2020 - June 19, 2020
MERL Contacts: Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Tim K. Marks; Kuan-Chuan Peng; Ye Wang
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief: MERL researchers are presenting four papers (two oral papers and two posters) and organizing two workshops at the IEEE/CVF Computer Vision and Pattern Recognition (CVPR 2020) conference.
CVPR 2020 Orals with MERL authors:
1. "Dynamic Multiscale Graph Neural Networks for 3D Skeleton Based Human Motion Prediction," by Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, Qi Tian
2. "Collaborative Motion Prediction via Neural Motion Message Passing," by Yue Hu, Siheng Chen, Ya Zhang, Xiao Gu
CVPR 2020 Posters with MERL authors:
3. "LUVLi Face Alignment: Estimating Landmarks’ Location, Uncertainty, and Visibility Likelihood," by Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Ye Wang, Michael Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng
4. "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps," by Pengxiang Wu, Siheng Chen, Dimitris N. Metaxas
CVPR 2020 Workshops co-organized by MERL researchers:
1. Fair, Data-Efficient and Trusted Computer Vision
2. Deep Declarative Networks.