TR2025-051
Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning
- "Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning", International Conference on Learning Representations (ICLR), April 2025.BibTeX TR2025-051 PDF
@inproceedings{Koike-Akino2025apr,
  author = {Koike-Akino, Toshiaki and Tonin, Francesco and Wu, Yongtao and Wu, Frank Zhengqing and Candogan, Leyla Naz and Cevher, Volkan},
  title = {{Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning}},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year = 2025,
  month = apr,
  url = {https://www.merl.com/publications/TR2025-051}
}
Abstract:
This paper introduces Quantum-PEFT, which leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient quantum unitary parameterization. With the use of Pauli parameterization, the number of trainable parameters grows only logarithmically with the ambient dimension, as opposed to linearly as in LoRA-based PEFT methods. Quantum-PEFT achieves a vanishingly smaller number of trainable parameters than the lowest-rank LoRA as dimensions grow, enhancing parameter efficiency while maintaining competitive performance. We apply Quantum-PEFT to several transfer learning benchmarks in language and vision, demonstrating significant advantages in parameter efficiency.
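To make the scaling claim in the abstract concrete, the sketch below contrasts the linear parameter count of a rank-r LoRA adapter with a hypothetical logarithmic count for a Pauli-parameterized unitary acting on roughly log2(d) qubits. The function quantum_peft_params_estimate and its per-qubit constant are illustrative assumptions for intuition only, not the paper's exact parameterization.

```python
import math

def lora_params(d_in: int, d_out: int, rank: int = 1) -> int:
    """Trainable parameters of a rank-r LoRA adapter: r * (d_in + d_out)."""
    return rank * (d_in + d_out)

def quantum_peft_params_estimate(d: int, angles_per_qubit: int = 3) -> int:
    """Hypothetical O(log d) count: a few rotation angles per qubit for a
    unitary on ceil(log2 d) qubits (an assumption, not the paper's formula)."""
    n_qubits = math.ceil(math.log2(d))
    return angles_per_qubit * n_qubits

# Compare how the two counts grow with the ambient dimension d.
for d in (768, 4096, 16384):
    print(f"d={d:>6}  LoRA(r=1)={lora_params(d, d):>6}  "
          f"log-scaling estimate={quantum_peft_params_estimate(d):>3}")
```

Even at the lowest LoRA rank, the adapter's parameter count scales with d, whereas a logarithmic parameterization stays nearly flat as the dimension grows, which is the source of the efficiency gap described in the abstract.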
Related Publication
@article{Koike-Akino2025mar,
  author = {Koike-Akino, Toshiaki and Tonin, Francesco and Wu, Yongtao and Wu, Frank Zhengqing and Candogan, Leyla Naz and Cevher, Volkan},
  title = {{Quantum-PEFT: Ultra parameter-efficient fine-tuning}},
  journal = {arXiv},
  year = 2025,
  month = mar,
  url = {https://arxiv.org/abs/2503.05431}
}