TR2024-170
Smoothed Embeddings for Robust Language Models
- "Smoothed Embeddings for Robust Language Models", Safe Generative AI Workshop at Advances in Neural Information Processing Systems (NeurIPS), December 2024.
@inproceedings{Ryo2024dec,
  author = {Hase, Ryo and Rashid, Md Rafi Ur and Lewis, Ashley and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Wang, Ye},
  title = {Smoothed Embeddings for Robust Language Models},
  booktitle = {Safe Generative AI Workshop at Advances in Neural Information Processing Systems (NeurIPS)},
  year = 2024,
  month = dec,
  publisher = {OpenReview},
  url = {https://www.merl.com/publications/TR2024-170}
}
Abstract:
Improving the safety and reliability of large language models (LLMs) is a crucial aspect of realizing trustworthy AI systems. Although alignment methods aim to suppress harmful content generation, LLMs often remain vulnerable to jailbreaking attacks that employ adversarial inputs to subvert alignment and induce harmful outputs. We propose the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense, which adds random noise to the embedding vectors and performs aggregation during the generation of each output token, with the aim of better preserving semantic information. Our experiments demonstrate that our approach achieves superior robustness-utility tradeoffs compared to baseline defenses.
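The abstract only sketches the mechanism, so the following toy example is a minimal illustration of randomized embedding smoothing with per-token aggregation, not the authors' implementation. The stand-in embedding table and decoder, the Gaussian noise scale sigma, the sample count num_samples, and the majority-vote aggregation rule are all assumptions made here for illustration.

import torch

torch.manual_seed(0)

# Toy stand-in for an LLM: an embedding table plus a linear "decoder"
# mapping an averaged input embedding to next-token logits. The real
# defense operates inside a full transformer's generation loop.
VOCAB, DIM = 100, 16
embed = torch.nn.Embedding(VOCAB, DIM)
decoder = torch.nn.Linear(DIM, VOCAB)

def smoothed_next_token(input_ids, sigma=0.1, num_samples=8):
    """Sample noisy copies of the input embeddings, decode each copy,
    and aggregate the per-sample token predictions by majority vote.
    (The noise distribution and aggregation rule are assumptions.)"""
    base = embed(input_ids)                            # (seq_len, DIM)
    votes = []
    for _ in range(num_samples):
        noisy = base + sigma * torch.randn_like(base)  # randomized smoothing
        logits = decoder(noisy.mean(dim=0))            # toy decoding step
        votes.append(int(logits.argmax()))
    # Token aggregation: return the most frequently predicted token.
    return max(set(votes), key=votes.count)

ids = torch.tensor([3, 14, 15, 92])
print(smoothed_next_token(ids))

In this sketch, the vote over noisy decodes plays the role of the paper's per-token aggregation: an adversarial perturbation of the input must flip the majority of sampled predictions, not just one, to change the output.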