TR2024-104
Efficient Differentially Private Fine-Tuning of Diffusion Models
- "Efficient Differentially Private Fine-Tuning of Diffusion Models", International Conference on Machine Learning (ICML) workshop (Next Generation of AI Safety), July 2024.
- @inproceedings{Liu2024jul,
- author = {Liu, Jing and Lowy, Andrew and Koike-Akino, Toshiaki and Parsons, Kieran and Wang, Ye},
- title = {Efficient Differentially Private Fine-Tuning of Diffusion Models},
- booktitle = {International Conference on Machine Learning (ICML) workshop (Next Generation of AI Safety)},
- year = 2024,
- month = jul,
- url = {https://www.merl.com/publications/TR2024-104}
- }
Abstract:
Recent developments in Diffusion Models (DMs) enable the generation of astonishingly high-quality synthetic samples. Recent work showed that synthetic samples generated by a diffusion model that is pre-trained on public data and fully fine-tuned with differential privacy on private data can train a downstream classifier while achieving a good privacy-utility trade-off. However, fully fine-tuning such large diffusion models with DP-SGD can be very resource-demanding in terms of memory usage and computation. In this work, we investigate Parameter-Efficient Fine-Tuning (PEFT) of diffusion models using Low-Dimensional Adaptation (LoDA) with Differential Privacy. We evaluate the proposed method on the MNIST and CIFAR-10 datasets and demonstrate that such efficient fine-tuning can also generate useful synthetic samples for training downstream classifiers, with guaranteed privacy protection of the fine-tuning data. Our source code will be made available on GitHub.
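The core mechanism the abstract refers to, DP-SGD, combines per-example gradient clipping with calibrated Gaussian noise; in the PEFT setting it is applied only to the small set of adapter parameters while the pre-trained backbone stays frozen. The following is a minimal NumPy toy sketch of one such update step; the function name, shapes, and hyperparameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One hypothetical DP-SGD step on adapter parameters `w`.

    per_example_grads: shape (batch, *w.shape), one gradient per example.
    """
    batch = per_example_grads.shape[0]
    flat = per_example_grads.reshape(batch, -1)
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    clipped = flat * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum clipped gradients, add Gaussian noise scaled to the clipping
    # bound (sigma = noise_mult * clip_norm), then average over the batch.
    noisy = clipped.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=clipped.shape[1])
    grad = (noisy / batch).reshape(w.shape)
    return w - lr * grad

rng = np.random.default_rng(0)
adapter = np.zeros((4, 2))             # toy low-dimensional adapter weights
grads = rng.normal(size=(8, 4, 2))     # toy per-example gradients, batch of 8
adapter = dp_sgd_step(adapter, grads, clip_norm=1.0,
                      noise_mult=1.0, lr=0.1, rng=rng)
print(adapter.shape)
```

Because only the adapter's few parameters require per-example gradients and noise, the memory and compute overhead of DP-SGD is far smaller than when fully fine-tuning the whole diffusion model.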