TR2024-101
Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning
Koike-Akino, T., Cevher, V., "Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning", International Conference on Machine Learning (ICML), July 2024.

BibTeX:
@inproceedings{Koike-Akino2024jul,
  author = {Koike-Akino, Toshiaki and Cevher, Volkan},
  title = {Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = 2024,
  month = jul,
  url = {https://www.merl.com/publications/TR2024-101}
}
Abstract:
This paper introduces Quantum-PEFT, which leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient quantum unitary parameterization with alternating entanglement. With the use of Pauli parameterization, the number of trainable parameters grows only logarithmically with the ambient dimension, as opposed to linearly as in LoRA-based PEFT methods. Consequently, Quantum-PEFT achieves a vanishingly smaller number of trainable parameters than the lowest-rank LoRA as dimensions grow, enhancing parameter efficiency while maintaining competitive performance. We apply Quantum-PEFT to several transfer learning benchmarks in language and vision, demonstrating significant advantages in parameter efficiency.
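
To make the scaling claim concrete, the following minimal Python sketch compares trainable-parameter counts. It assumes a rank-1 LoRA adapter (2*d parameters for a d x d weight) and, as a hypothetical stand-in for the paper's circuit, a single layer of per-qubit Pauli rotations on ceil(log2(d)) qubits with 3 angles per qubit; the exact circuit structure and constants in Quantum-PEFT may differ, but the linear-versus-logarithmic growth is the point being illustrated.

# Illustrative parameter-count comparison (a sketch, not the paper's exact circuit):
# rank-r LoRA uses 2*d*r parameters for a d x d weight, while a hypothetical
# layer of per-qubit Pauli rotations on ceil(log2(d)) qubits uses 3 angles per
# qubit, i.e. O(log d) trainable parameters.
import math


def lora_params(d: int, rank: int = 1) -> int:
    """Trainable parameters of a rank-`rank` LoRA adapter for a d x d weight."""
    return 2 * d * rank


def pauli_layer_params(d: int, layers: int = 1) -> int:
    """Illustrative count for `layers` layers of per-qubit Pauli rotations."""
    n_qubits = math.ceil(math.log2(d))
    return 3 * n_qubits * layers


if __name__ == "__main__":
    for d in (256, 1024, 4096):
        print(f"d={d:5d}  LoRA(r=1): {lora_params(d):6d}  "
              f"Pauli (1 layer): {pauli_layer_params(d):3d}")

Under these assumptions, at d = 4096 the rank-1 LoRA adapter already requires 8,192 trainable parameters, whereas the logarithmic Pauli parameterization in this sketch uses only 36, and the gap widens as the ambient dimension grows.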