Adversarial Robustness and Generalization for Natural Language Processing
Speaker: Dr. Di Jin
Amazon Alexa AI
U.S.A.
Title: "Adversarial Robustness and Generalization for
Natural Language Processing"
Date: Friday, 22 January 2021
Time: 10:00 am - 11:00 am
Zoom Meeting:
https://hkust.zoom.us/j/465698645?pwd=c2E4VTE3b2lEYnBXcyt4VXJITXRIdz09
Meeting ID: 465 698 645
Passcode: 20202021
Abstract:
Deep learning and large-scale unsupervised pre-training have remarkably
accelerated the development of natural language processing (NLP). The best
models can now achieve performance comparable to, or even better than,
humans, which may give the impression that NLP problems have been solved.
However, when we deploy these models in real-world applications, much
evidence shows that they are still not robust to real data, which may
contain some level of noise. This highlights the importance of examining
and enhancing model robustness. In this presentation, we will introduce
approaches to evaluating and improving the robustness of NLP models based
on adversarial attacks and adversarial learning. We will see that exposing
these models to adversarial samples can make them more robust and help
them generalize better to unseen data.
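As a rough illustration of the data-augmentation view of adversarial
learning mentioned above, the sketch below pairs each training example
with a synonym-swapped variant before retraining. The synonym table, swap
probability, and toy dataset are hypothetical placeholders, not the
speaker's actual method or code.

# Minimal sketch of adversarial data augmentation (pure Python).
# SYNONYMS, swap_prob, and the toy dataset are illustrative assumptions only.
import random

SYNONYMS = {"good": ["great", "fine"], "bad": ["poor", "awful"]}  # toy table

def perturb(sentence, swap_prob=0.3):
    # Create an adversarial-style variant by swapping known words for synonyms.
    words = []
    for w in sentence.split():
        if w.lower() in SYNONYMS and random.random() < swap_prob:
            words.append(random.choice(SYNONYMS[w.lower()]))
        else:
            words.append(w)
    return " ".join(words)

def adversarial_augment(dataset):
    # Pair every (text, label) example with a perturbed copy carrying the
    # same label, so a retrained model is exposed to adversarial samples.
    augmented = []
    for text, label in dataset:
        augmented.append((text, label))
        augmented.append((perturb(text), label))
    return augmented

if __name__ == "__main__":
    train = [("the movie was good", 1), ("the plot was bad", 0)]
    print(adversarial_augment(train))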
*********************
Biography:
Di Jin is a research scientist at Amazon Alexa AI, USA, working on
conversational modeling. He received his PhD from MIT in September 2020,
supervised by Prof. Peter Szolovits. His research focuses on Natural
Language Processing (NLP) and its applications in the healthcare domain.
His previous work has focused on sequential sentence classification,
transfer learning for low-resource data, adversarial attack and defense,
and text editing/rewriting.