TR2024-104

Efficient Differentially Private Fine-Tuning of Diffusion Models


Abstract:

Recent developments in Diffusion Models (DMs) enable the generation of astonishingly high-quality synthetic samples. Recent work showed that synthetic samples generated by a diffusion model that is pre-trained on public data and fully fine-tuned with differential privacy on private data can train a downstream classifier while achieving a good privacy-utility trade-off. However, fully fine-tuning such large diffusion models with DP-SGD can be very resource-demanding in terms of memory usage and computation. In this work, we investigate Parameter-Efficient Fine-Tuning (PEFT) of diffusion models using Low-Dimensional Adaptation (LoDA) with Differential Privacy. We evaluate the proposed method on the MNIST and CIFAR-10 datasets and demonstrate that such efficient fine-tuning can also generate useful synthetic samples for training downstream classifiers, with guaranteed privacy protection of the fine-tuning data. Our source code will be made available on GitHub.
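
To illustrate the general idea of differentially private, parameter-efficient fine-tuning, the following is a minimal, hypothetical Python/PyTorch sketch. It is not the paper's implementation: the LowRankAdapter below is a generic low-rank (LoRA-style) adapter standing in for LoDA, the toy MLP "pre-trained denoiser", the simplified denoising loss, and all hyperparameters are placeholders, and the per-example clipping loop is a textbook DP-SGD step rather than the authors' code.

import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    # Low-rank residual adapter: y = x + up(down(x)); only these weights are trained.
    def __init__(self, dim, rank=4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # initialize so the adapter starts as the identity map

    def forward(self, x):
        return x + self.up(self.down(x))

def dp_sgd_step(model, batch, optimizer, clip_norm=1.0, noise_multiplier=1.0):
    # One DP-SGD step on the trainable (adapter) parameters only:
    # clip each per-example gradient, sum, then add Gaussian noise before the update.
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x in batch:                                      # process examples one at a time
        optimizer.zero_grad()
        x = x.unsqueeze(0)
        noise = torch.randn_like(x)
        loss = ((model(x + noise) - noise) ** 2).mean()  # simplified denoising objective (placeholder)
        loss.backward()
        grads = [p.grad.detach().clone() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (norm.item() + 1e-6))  # clip per-example gradient norm
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    optimizer.zero_grad()
    for p, s in zip(params, summed):
        s.add_(torch.randn_like(s), alpha=noise_multiplier * clip_norm)  # add calibrated Gaussian noise
        p.grad = s / len(batch)
    optimizer.step()

# Toy usage: freeze a "pre-trained" denoiser and privately fine-tune only the adapter.
dim = 32
pretrained = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
for p in pretrained.parameters():
    p.requires_grad_(False)                  # pre-trained weights stay fixed
model = nn.Sequential(pretrained, LowRankAdapter(dim))
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
private_batch = torch.randn(8, dim)          # stand-in for private fine-tuning data
dp_sgd_step(model, private_batch, optimizer)

Because only the adapter parameters receive gradients, the per-example clipping and noise addition operate on a far smaller parameter set than full fine-tuning, which is what makes this style of DP fine-tuning memory- and compute-efficient.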

 

  • Related Publication

  •  Liu, J., Lowy, A., Koike-Akino, T., Parsons, K., Wang, Y., "Efficient Differentially Private Fine-Tuning of Diffusion Models", arXiv, June 2024.
    BibTeX:
    @article{Liu2024jun,
      author  = {Liu, Jing and Lowy, Andrew and Koike-Akino, Toshiaki and Parsons, Kieran and Wang, Ye},
      title   = {Efficient Differentially Private Fine-Tuning of Diffusion Models},
      journal = {arXiv},
      year    = {2024},
      month   = jun,
      url     = {https://arxiv.org/abs/2406.05257}
    }