Software & Data Downloads — PixPNet
Software for reproducing the results presented in the paper "Pixel-Grounded Prototypical Part Networks", accepted to WACV 2024.
This repository contains the code for the paper "Pixel-Grounded Prototypical Part Networks" by Zachariah Carmichael, Suhas Lohit, Anoop Cherian, Michael Jones, and Walter J. Scheirer. PixPNet (Pixel-Grounded Prototypical Part Network) improves upon existing prototypical part neural networks (ProtoPartNNs): it truly localizes to object parts (unlike other approaches, including ProtoPNet), has quantitatively better interpretability, and is competitive on image classification benchmarks.
Prototypical part neural networks (ProtoPartNNs), namely ProtoPNet and its derivatives, are an intrinsically interpretable approach to machine learning. Their prototype learning scheme enables intuitive explanations of the form, this (prototype) looks like that (testing image patch). But, does this actually look like that? In this work, we delve into why object part localization and associated heat maps in past work are misleading. Rather than localizing to object parts, existing ProtoPartNNs localize to the entire image, contrary to generated explanatory visualizations. We argue that detraction from these underlying issues is due to the alluring nature of visualizations and an over-reliance on intuition. To alleviate these issues, we devise new receptive field-based architectural constraints for meaningful localization and a principled pixel space mapping for ProtoPartNNs. To improve interpretability, we propose additional architectural improvements, including a simplified classification head. We also make additional corrections to ProtoPNet and its derivatives, such as the use of a validation set, rather than a test set, to evaluate generalization during training. Our approach, PixPNet (Pixel-grounded Prototypical part Network), is the only ProtoPartNN that truly learns and localizes to prototypical object parts. We demonstrate that PixPNet achieves quantifiably improved interpretability without sacrificing accuracy.
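The receptive field-based constraints and pixel space mapping rest on standard receptive-field arithmetic: each prototype matches a location in the backbone's feature map, and that location corresponds to a bounded region of input pixels. The sketch below is illustrative only; the function name, layer specification, and values are assumptions and not the repository's API, which should be consulted for the actual architectures and mapping code.

```python
def receptive_field_box(i, j, layers, img_size=224):
    """Map feature-map coordinates (i, j) to an input-pixel bounding box.

    `layers` is a list of (kernel_size, stride, padding) tuples for the
    convolution/pooling layers applied before the feature map.
    """
    # Standard receptive-field arithmetic: track the cumulative stride
    # (jump), the receptive-field size, and the center of the first feature.
    jump, rf, start = 1, 1, 0.5
    for k, s, p in layers:
        start += ((k - 1) / 2 - p) * jump
        rf += (k - 1) * jump
        jump *= s

    def box_1d(idx):
        # Center of this feature in input pixels, clipped to the image.
        center = start + idx * jump
        lo = max(0, int(round(center - rf / 2)))
        hi = min(img_size, int(round(center + rf / 2)))
        return lo, hi

    y0, y1 = box_1d(i)
    x0, x1 = box_1d(j)
    return (x0, y0, x1, y1)


if __name__ == "__main__":
    # Illustrative truncated backbone: three 3x3, stride-2, padding-1 convs.
    layers = [(3, 2, 1), (3, 2, 1), (3, 2, 1)]
    print(receptive_field_box(4, 7, layers))  # pixel box for feature cell (4, 7)
```

A constrained (truncated) backbone keeps this box smaller than the full image, which is what allows a prototype to be grounded to an object part rather than to the entire input.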
Related Publications
- "Pixel-Grounded Prototypical Part Networks", IEEE Winter Conference on Applications of Computer Vision (WACV), DOI: 10.1109/WACV57701.2024.00470, January 2024.
  BibTeX:
  @inproceedings{Carmichael2024jan,
    author = {Carmichael, Zachariah and Lohit, Suhas and Cherian, Anoop and Jones, Michael J. and Scheirer, Walter J.},
    title = {Pixel-Grounded Prototypical Part Networks},
    booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
    year = 2024,
    month = jan,
    doi = {10.1109/WACV57701.2024.00470},
    url = {https://www.merl.com/publications/TR2024-002}
  }
- "Pixel-Grounded Prototypical Part Networks", IEEE Winter Conference on Applications of Computer Vision (WACV), DOI: 10.1109/WACV57701.2024.00470, January 2024.
Software & Data Downloads
Access software at https://github.com/merlresearch/PixPNet.