Accelerating Large-Scale Deep Learning Tasks
PhD Thesis Proposal Defence

Title: "Accelerating Large-Scale Deep Learning Tasks"

by

Mr. Lipeng WANG

Abstract:

Large-scale deep learning tasks usually run on parallel and distributed frameworks such as TensorFlow and PyTorch, and take hours to days to produce training results. These frameworks use hardware accelerators, especially GPUs, to speed up computation; however, data access and processing still take a significant share of the overall time. We therefore propose to accelerate these tasks by improving dataset storage and processing. Specifically, we develop DIESEL, a scalable dataset storage and caching system that runs between a training framework and the underlying distributed file system. The main features of DIESEL include metadata snapshots, a per-task distributed cache, and chunk-based storage and shuffle. Furthermore, we optimize a GPU-assisted image decoding method for training tasks on image datasets. Our experiments on real-world training tasks show that (1) DIESEL halves the data access time and reduces the training time by 15%-27%, and (2) our optimized image decoding method is 30%-50% faster than existing GPU-accelerated image decoding libraries.

Date: Friday, 4 September 2020
Time: 10:00am - 12:00noon
Zoom Meeting: https://hkust.zoom.us/j/99015360724

Committee Members:
Dr. Qiong Luo (Supervisor)
Dr. Kai Chen (Chairperson)
Dr. Qifeng Chen
Dr. Wei Wang

**** ALL are Welcome ****
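The abstract mentions chunk-based storage and shuffle. As a rough illustration only (the announcement does not describe DIESEL's actual algorithm), a common two-level scheme shuffles the order of chunks and then the samples within each chunk, so that random access stays local to large sequential chunks; the function name and parameters below are hypothetical:

```python
import random

def chunk_shuffle(samples, chunk_size, seed=None):
    """Hypothetical two-level shuffle: randomize chunk order, then
    randomize samples within each chunk. A sketch of the general
    chunk-based shuffle idea, not DIESEL's actual implementation."""
    rng = random.Random(seed)
    # Partition the sample list into fixed-size chunks.
    chunks = [samples[i:i + chunk_size]
              for i in range(0, len(samples), chunk_size)]
    # Shuffle the order of chunks (cheap: only chunk references move,
    # and each chunk can still be read sequentially from storage).
    rng.shuffle(chunks)
    # Shuffle samples inside each chunk (local, cache-friendly).
    for chunk in chunks:
        rng.shuffle(chunk)
    # Flatten back into a single epoch-ordering of samples.
    return [s for chunk in chunks for s in chunk]
```

The appeal of such a scheme is that it trades a small loss of global randomness for sequential chunk reads, which suits distributed file systems that perform poorly on many small random reads.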