PhD Qualifying Examination


Title: "Privacy in Large Language Models: Attacks, Defenses and Future 
Directions"

by

Mr. Haoran LI


Abstract:

The advancement of large language models (LLMs) has significantly enhanced the
ability to tackle various downstream NLP tasks and to unify these tasks into
generative pipelines. On the one hand, powerful language models, trained on
massive textual data, have brought unparalleled accessibility and usability to
users. On the other hand, unrestricted access to these models can also
introduce privacy risks, both malicious and unintentional. Despite ongoing
efforts to address the safety and privacy concerns associated with LLMs, the
problem remains unresolved. In this paper, we provide a comprehensive analysis
of current privacy attacks targeting LLMs and categorize them according to the
adversary's assumed capabilities, shedding light on the potential
vulnerabilities present in LLMs. We then present a detailed overview of
prominent defense strategies developed to counter these attacks. Beyond
existing work, we identify emerging privacy concerns that arise as LLMs evolve.
Lastly, we point out several promising avenues for future exploration.


Date:			Wednesday, 18 October 2023

Time:			4:00pm - 6:00pm

Venue:			Room 5510 (lifts 25/26)

Committee Members:	Dr. Yangqiu Song (Supervisor)
			Dr. Dongdong She (Chairperson)
			Dr. Junxian He
			Dr. Binhang Yuan


**** ALL are Welcome ****