Optimization Approaches for Learning with Low-rank Regularization
PhD Qualifying Examination

Title: "Optimization Approaches for Learning with Low-rank Regularization"

by

Mr. Quanming YAO

Abstract:

Low-rank modeling has many important applications in machine learning, computer vision, and social network analysis. As direct rank minimization is NP-hard, many alternatives have been proposed. In this survey, we first introduce optimization approaches for two popular formulations of rank minimization: nuclear norm regularization and the rank constraint. The nuclear norm is the tightest convex envelope of the rank function, so low-rank optimization with nuclear norm regularization is a convex problem to which many convex optimization approaches can be applied. When a rank constraint is used instead, the resulting optimization problems become simpler but are generally non-convex; algorithms for these problems therefore lack global optimality guarantees and may suffer from slow convergence. Beyond these two common approaches, adaptive non-convex regularizers have recently been proposed, which can better fit the singular values. The key idea behind these regularizers is that large singular values are more informative and thus should be penalized less. The resulting optimization problems are neither smooth nor convex, and are thus harder than those with nuclear norm regularization or a rank constraint. Several recently developed algorithms applicable to these problems are introduced in this survey. Helpful remarks are given on algorithms working with the same type of regularizer, and experiments are then performed on both synthetic and real data sets to compare the above three types of regularizers. Finally, we discuss some possible research issues.

Date: Wednesday, 7 September 2016
Time: 2:00pm - 4:00pm
Venue: Room 4475 (Lifts 25/26)

Committee Members: Prof. James Kwok (Supervisor)
                   Prof. Nevin Zhang (Chairperson)
                   Dr. Yangqiu Song
                   Prof. Dit-Yan Yeung

**** ALL are Welcome ****
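For readers unfamiliar with nuclear norm regularization, the key building block of many of the convex solvers it admits is singular value thresholding, the proximal operator of the nuclear norm. A minimal NumPy sketch follows; the function name, threshold value, and test matrix are illustrative choices, not material from the talk:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * (nuclear norm).

    Shrinks every singular value of X by tau, zeroing those below tau,
    which encourages a low-rank result.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Illustration: thresholding a noisy, approximately rank-2 matrix
# suppresses the small noise singular values.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
noisy = low_rank + 0.01 * rng.standard_normal((20, 20))
denoised = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(denoised))
```

Repeating this step inside a proximal gradient loop gives the standard convex approach to nuclear-norm-regularized problems that the abstract alludes to.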