TR2024-130

Equivariant Spatio-Temporal Self-Supervision for LiDAR Object Detection


    •  Hegde, D., Lohit, S., Peng, K.-C., Jones, M.J., Patel, V.M., "Equivariant Spatio-Temporal Self-Supervision for LiDAR Object Detection", European Conference on Computer Vision (ECCV), Leonardis, A. and Ricci, E. and Roth, S. and Russakovsky, O. and Sattler, T. and Varol, G., Eds., DOI: 10.1007/978-3-031-73347-5_27, September 2024, pp. 475-491.
      @inproceedings{Hegde2024sep,
        author = {Hegde, Deepti and Lohit, Suhas and Peng, Kuan-Chuan and Jones, Michael J. and Patel, Vishal M.},
        title = {Equivariant Spatio-Temporal Self-Supervision for LiDAR Object Detection},
        booktitle = {European Conference on Computer Vision (ECCV)},
        year = 2024,
        editor = {Leonardis, A. and Ricci, E. and Roth, S. and Russakovsky, O. and Sattler, T. and Varol, G.},
        pages = {475--491},
        month = sep,
        publisher = {Springer},
        doi = {10.1007/978-3-031-73347-5_27},
        issn = {0302-9743},
        isbn = {978-3-031-73346-8},
        url = {https://www.merl.com/publications/TR2024-130}
      }
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

Popular representation learning methods encourage feature invariance under transformations applied at the input. However, in 3D perception tasks like object localization and segmentation, outputs are naturally equivariant to some transformations, such as rotation. Using pre-training loss functions that encourage equivariance of features under certain transformations provides a strong self-supervision signal while also retaining information about the geometric relationships between transformed feature representations. This can enable improved performance in downstream tasks that are equivariant to such transformations. In this paper, we propose a spatio-temporal equivariant learning framework that considers spatial and temporal augmentations jointly. Our experiments show that the best performance arises with a pre-training approach that encourages equivariance to translation, scaling, flipping, rotation, and scene flow. For spatial augmentations, we find that, depending on the transformation, either a contrastive objective or an equivariance-by-classification objective yields the best results. To leverage real-world object deformations and motion, we consider sequential LiDAR scene pairs and develop a novel 3D scene flow-based equivariance objective that leads to improved performance overall. We show that our pre-training method for 3D object detection outperforms existing equivariant and invariant approaches in many settings.
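The core idea of the abstract can be illustrated with a minimal sketch (this is not the authors' code; the encoder, the 3-D feature space, and the loss form are illustrative assumptions): a rotation-equivariant feature map f satisfies f(Rx) ≈ R f(x), and a pre-training loss can penalize deviation from that identity.

```python
# Minimal sketch of a rotation-equivariance pre-training objective
# for point clouds, using NumPy only. Illustrative, not the paper's
# actual implementation: features are kept 3-D so the rotation can
# act on them directly.
import numpy as np

def rotation_z(theta):
    """3x3 rotation about the vertical (z) axis, a common LiDAR augmentation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def equivariance_loss(encode, points, R):
    """Mean squared distance between f(R x) and R f(x).

    encode: maps an (N, 3) point cloud to (N, 3) per-point features.
    Rows are points, so applying R is a right-multiplication by R.T.
    """
    feat_of_rotated = encode(points @ R.T)   # f(R x)
    rotated_feat = encode(points) @ R.T      # R f(x)
    return float(np.mean((feat_of_rotated - rotated_feat) ** 2))

# Toy check: a linear encoder is rotation-equivariant exactly when its
# weight matrix commutes with R; a scaled identity commutes with any R.
rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))
R = rotation_z(np.pi / 4)

equivariant_encoder = lambda x: 2.0 * x          # commutes with any R
loss_good = equivariance_loss(equivariant_encoder, pts, R)

W = rng.normal(size=(3, 3))                      # arbitrary linear map
generic_encoder = lambda x: x @ W.T
loss_bad = equivariance_loss(generic_encoder, pts, R)
```

In the paper's setting the encoder is a deep 3D backbone and the equivariance constraint is enforced through contrastive or classification objectives rather than this direct regression form, but the loss above captures the relationship being supervised.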

 

  • Related Video

  • Related Publication

  •  Hegde, D., Lohit, S., Peng, K.-C., Jones, M.J., Patel, V.M., "Equivariant Spatio-Temporal Self-Supervision for LiDAR Object Detection", arXiv, April 2024.
    @article{Hegde2024apr2,
      author = {Hegde, Deepti and Lohit, Suhas and Peng, Kuan-Chuan and Jones, Michael J. and Patel, Vishal M.},
      title = {Equivariant Spatio-Temporal Self-Supervision for LiDAR Object Detection},
      journal = {arXiv},
      year = 2024,
      month = apr,
      url = {https://arxiv.org/abs/2404.11737}
    }