A Survey on Vision-based Sign Language Recognition and Translation via Deep Learning
PhD Qualifying Examination

Title: "A Survey on Vision-based Sign Language Recognition and Translation via Deep Learning"

by

Mr. Zhe NIU

Abstract:

Sign language recognition and translation (SLR and SLT) aim to bridge the communication gap between deaf and hearing people by transcribing or translating sign videos into text, a challenging task that requires expertise in both computer vision and natural language processing. Over the past decades, hand-crafted features combined with statistical sequence modeling methods have been widely used in SLR and SLT. With the rapid growth of deep learning techniques, researchers have been moving from the legacy recognition and translation pipeline to neural network-based end-to-end systems, which achieve performance superior to that of the legacy methods. Despite this progress, current end-to-end SLR and SLT systems suffer from limited generalizability and are not yet suitable for realistic scenarios. In this survey, we give a comprehensive introduction to neural network-based SLR and SLT systems. Several spatial and sequential feature extraction networks and sequence modeling techniques are introduced, together with recent related work. Potential research directions are pointed out at the end.

Date: Thursday, 9 January 2020
Time: 3:00pm - 5:00pm
Venue: Room 3494, Lifts 25/26

Committee Members:
Dr. Brian Mak (Supervisor)
Prof. Dit-Yan Yeung (Chairperson)
Dr. Qifeng Chen
Dr. Yangqiu Song

**** ALL are Welcome ****