Software & Data Downloads — SOCKET

SOurce-free Cross-modal KnowledgE Transfer: a method for transferring knowledge from neural networks trained on a source sensor modality to an unannotated target modality.

SOCKET transfers knowledge from neural networks trained on a source sensor modality (such as RGB), for one or more domains where large amounts of annotated data are available, to an unannotated target dataset from a different sensor modality (such as infrared or depth). It uses task-irrelevant paired source-target images to promote feature alignment between the two modalities, and it matches the distribution of the target features to the source batch-norm statistics (mean and variance), as sketched below.
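The batch-norm statistics matching idea can be illustrated with a short sketch. The code below is not taken from the SOCKET repository; it is a minimal PyTorch approximation that assumes the source network uses standard nn.BatchNorm2d layers, and the function name and the target_features dictionary are hypothetical.

    import torch
    import torch.nn as nn

    def bn_statistics_matching_loss(source_model: nn.Module, target_features: dict):
        """Penalize the gap between the per-channel mean/variance of the
        target-modality features and the running statistics stored in the
        frozen source model's BatchNorm2d layers.

        target_features maps a BatchNorm layer name in source_model to the
        (N, C, H, W) feature map produced by the target encoder at the
        corresponding depth.
        """
        total = 0.0
        for name, module in source_model.named_modules():
            if isinstance(module, nn.BatchNorm2d) and name in target_features:
                feat = target_features[name]
                # Per-channel statistics of the current target batch.
                tgt_mean = feat.mean(dim=(0, 2, 3))
                tgt_var = feat.var(dim=(0, 2, 3), unbiased=False)
                # Statistics saved in the source model during source training;
                # the source data itself is never accessed.
                src_mean = module.running_mean.detach()
                src_var = module.running_var.detach()
                total = total + torch.norm(tgt_mean - src_mean, p=2) \
                              + torch.norm(tgt_var - src_var, p=2)
        return total

In the full method, a term of this kind would be combined with the feature-alignment loss computed on the task-irrelevant paired source-target images; refer to the repository linked below for the authors' actual implementation.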

    •  Ahmed, S.M., Lohit, S., Peng, K.-C., Jones, M.J., Roy Chowdhury, A.K., "Cross-Modal Knowledge Transfer Without Task-Relevant Source Data", European Conference on Computer Vision (ECCV), Avidan, S., Brostow, G., Cisse, M., Farinella, G.M., Hassner, T., Eds., DOI: 10.1007/978-3-031-19830-4_7, October 2022, pp. 111-127.
      @inproceedings{Ahmed2022oct,
        author = {Ahmed, Sk Miraj and Lohit, Suhas and Peng, Kuan-Chuan and Jones, Michael J. and Roy Chowdhury, Amit K.},
        title = {Cross-Modal Knowledge Transfer Without Task-Relevant Source Data},
        booktitle = {European Conference on Computer Vision (ECCV)},
        year = 2022,
        editor = {Avidan, S. and Brostow, G. and Cisse, M. and Farinella, G. M. and Hassner, T.},
        pages = {111--127},
        month = oct,
        publisher = {Springer},
        doi = {10.1007/978-3-031-19830-4_7},
        isbn = {978-3-031-19830-4},
        url = {https://www.merl.com/publications/TR2022-135}
      }

    Access the software at https://github.com/merlresearch/SOCKET.