Pre-training, Fine-tuning and Adaptation for Language Models
PhD Qualifying Examination

Title: "Pre-training, Fine-tuning and Adaptation for Language Models"

by

Mr. Shizhe DIAO

Abstract:

Self-supervised learning has reshaped the landscape of natural language processing (NLP) research and pushed the state of the art on numerous tasks, such as text understanding and text generation. Large pre-trained models such as BERT are known to improve a wide range of downstream NLP tasks, even when the model is trained on a generic domain. Moreover, recent studies have shown that when large domain-specific corpora are available, continued pre-training on domain-specific data can further improve performance on in-domain tasks. However, this practice requires substantial domain-specific data and computational resources, which may not always be available. This survey aims to provide a systematic study of language model pre-training, fine-tuning, and adaptation techniques. For each part, we first review existing approaches and then discuss a specific example from our proposed methods to illustrate the model details and potential directions for improvement. Finally, we summarize new trends and potential future work to guide our research.

Date: Wednesday, 12 October 2022
Time: 10:00am - 12:00noon
Venue: Room 5501, lifts 25/26

Committee Members:
Prof. Tong Zhang (Supervisor)
Prof. Raymond Wong (Chairperson)
Prof. Nevin Zhang
Prof. Xiaofang Zhou

**** ALL are Welcome ****
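As a loose illustration of the continued pre-training idea mentioned in the abstract (further training a generic model such as BERT on an in-domain corpus before fine-tuning), the sketch below keeps optimizing the masked language modeling objective on domain text with Hugging Face Transformers. It is a minimal sketch, not material from the talk: the checkpoint name, the "domain_corpus.txt" file, and the hyperparameters are placeholders.

    # Sketch: domain-adaptive continued pre-training of BERT with masked LM.
    # Assumes `transformers` and `datasets` are installed and that
    # "domain_corpus.txt" is a plain-text in-domain corpus (placeholder name).
    from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)
    from datasets import load_dataset

    model_name = "bert-base-uncased"  # generic-domain checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    # Load and tokenize the in-domain corpus.
    dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True, remove_columns=["text"])

    # Randomly mask 15% of tokens, as in standard BERT pre-training.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    args = TrainingArguments(output_dir="bert-domain-adapted",
                             per_device_train_batch_size=16,
                             num_train_epochs=1)

    # Continue pre-training; the saved checkpoint can then be fine-tuned
    # on an in-domain downstream task.
    Trainer(model=model, args=args, train_dataset=dataset,
            data_collator=collator).train()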