PhD Qualifying Examination

Title: "A Survey of Safety of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defense, and Interpretability"

by

Mr. Jaewoo SONG

Abstract:

Deep neural networks (DNNs) have been achieving levels of performance previously unreachable by other machine learning techniques. Powered by today's abundant computational resources and data, DNNs are used in many different areas. Nonetheless, their black-box nature makes it hard for a human to understand why and how they work so well, and many questions have consequently arisen regarding their safety. The safety of DNNs can be analyzed from several angles, four of which are central: verification, testing, adversarial attack and defense, and interpretability. Verification checks whether certain properties hold for a DNN, with provable guarantees. Testing generates and runs test cases on DNNs under suitable coverage criteria. Adversarial attacks create perturbed inputs to expose a DNN's lack of robustness, while adversarial defenses aim to detect such attacks and make DNNs more robust. Interpretability makes DNNs more understandable to humans. This survey describes and summarizes research on these four aspects.

Date: Tuesday, 26 November 2019
Time: 10:00am - 12:00noon
Venue: Room 4475 (Lifts 25/26)

Committee Members:
Prof. Fangzhen Lin (Supervisor)
Dr. Qiong Luo (Chairperson)
Prof. Shing-Chi Cheung
Dr. Shuai Wang

**** ALL are Welcome ****
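For readers unfamiliar with the adversarial attacks mentioned in the abstract, a minimal sketch of one well-known attack, the fast gradient sign method (FGSM), is given below. It is an illustration only, not taken from the survey; model, x, y, and epsilon are placeholder names, and the sketch assumes a PyTorch classifier that outputs logits for image inputs in the range [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft adversarial examples with the fast gradient sign method."""
        x = x.clone().detach().requires_grad_(True)
        # Compute the classification loss on the clean input batch.
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Perturb each pixel by epsilon in the direction that increases
        # the loss, then clip back to the valid image range.
        x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
        return x_adv.detach()

A robust model should classify x_adv the same way as x; a successful attack flips the prediction with a perturbation too small for a human to notice.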