Deep and Adversarial Knowledge Transfer in Recommendation

PhD Thesis Proposal Defence


Title: "Deep and Adversarial Knowledge Transfer in Recommendation"

by

Mr. Guangneng HU


Abstract:

Recommendation is a basic service that filters information and guides users 
through the large item spaces of various online systems, improving user 
satisfaction and increasing corporate revenue. It works by learning user 
preferences over items from their historical interactions. Recent deep 
learning techniques have brought advances in recommender models. In 
real-world scenarios, however, interactions may well be sparse in the target 
domain of interest, which undermines the success of deep models that depend 
on large-scale labeled data. Transfer learning addresses this data sparsity 
by transferring knowledge from auxiliary source domains.

In this proposal, we investigate deep knowledge transfer in recommendation, 
whose core idea is to answer what to transfer between domains. Specifically, 
we propose three models following different transfer learning approaches, 
i.e., deep model-based transfer (DMT), deep instance-based transfer (DIT), 
and deep feature-based transfer (DFT). Firstly, in DMT, we transfer 
parameters in the lower layers and learn the source and target networks in a 
multi-task way. The CoNet model is introduced to learn dual knowledge 
transfer across domains; it can select which knowledge to transfer via a 
sparsity-inducing technique. Next, in DIT, we transfer selected 
source-domain instances by adaptively re-weighting them for use in the 
target domain. The TransNet model is introduced to learn an adaptive 
transfer vector that captures relations between the target item and source 
items. Finally, in DFT, we transfer a "good" feature representation that 
captures what is invariant while reducing the difference between domains. 
The TrNews model is introduced to transfer heterogeneous user interests 
across domains and to transfer item representations selectively. Our models 
can handle relational data (e.g., clicks), content data (e.g., news), and 
their combinations (hybrid data).
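To make the DMT idea concrete, below is a minimal numpy sketch of dual networks with a cross-connection matrix that carries source hidden units into the target network. All names (W_src, H, etc.), sizes, and the single-hidden-layer shape are illustrative assumptions for exposition, not the actual CoNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

d_in, d_hid, d_out = 8, 16, 1

# Separate lower-layer weights for the source and target networks.
W_src = rng.normal(scale=0.1, size=(d_in, d_hid))
W_tgt = rng.normal(scale=0.1, size=(d_in, d_hid))
# Cross-connection (transfer) matrix H: injects source hidden units
# into the target network. An L1 penalty on H lets training zero out
# rows/columns, i.e., select which knowledge to transfer (the
# sparsity-inducing idea mentioned above).
H = rng.normal(scale=0.1, size=(d_hid, d_hid))
W_out = rng.normal(scale=0.1, size=(d_hid, d_out))

x_src = rng.normal(size=(4, d_in))  # mini-batch from the source domain
x_tgt = rng.normal(size=(4, d_in))  # mini-batch from the target domain

h_src = relu(x_src @ W_src)                     # source hidden layer
h_tgt = relu(x_tgt @ W_tgt + h_src @ H)         # target hidden + transferred units
y_tgt = 1.0 / (1.0 + np.exp(-(h_tgt @ W_out)))  # target click probability

l1_penalty = np.abs(H).sum()  # added to the joint multi-task loss
```

In a full model, both networks would be trained jointly on their own objectives, with the L1 term on H included in the combined loss.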

Furthermore, as transfer learning relies on auxiliary data from other 
sources, it raises privacy concerns when knowledge is transferred from 
source parties to the target party. To address this problem and make 
transfer learning more widely applicable, we will design a new 
privacy-preserving learning algorithm based on adversarial knowledge 
transfer. In this algorithm, we will show how to learn a privacy-aware 
neural representation that improves target performance while protecting 
source privacy.
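The adversarial objective can be sketched as a min-max game: a shared encoder produces a representation that a task head uses to predict target-domain labels, while an adversary tries to recover a private source attribute from the same representation. The numpy sketch below only evaluates the two losses and the encoder's combined objective; all names and shapes are hypothetical, and the proposed algorithm itself is future work in this proposal.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
bce = lambda p, y: -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Shared encoder maps raw interaction features to a representation z.
W_enc = rng.normal(scale=0.1, size=(10, 6))
w_task = rng.normal(scale=0.1, size=(6,))  # task head: target-domain clicks
w_adv = rng.normal(scale=0.1, size=(6,))   # adversary: private source attribute

x = rng.normal(size=(32, 10))
y_click = rng.integers(0, 2, size=32)    # target labels (observed)
y_private = rng.integers(0, 2, size=32)  # private attribute to protect

z = np.tanh(x @ W_enc)
loss_task = bce(sigmoid(z @ w_task), y_click)
loss_adv = bce(sigmoid(z @ w_adv), y_private)

lam = 0.5
# The encoder minimizes the task loss while MAXIMIZING the adversary's
# loss, so z stays useful for recommendation but uninformative about
# the private attribute; the adversary separately minimizes loss_adv.
encoder_objective = loss_task - lam * loss_adv
```

In practice the two players would be updated alternately (or via a gradient-reversal layer), with lam trading off target accuracy against source privacy.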


Date:			Wednesday, 17 March 2021

Time:                  	10:00am - 12:00noon

Zoom Meeting: 		https://hkust.zoom.us/j/5394566475

Committee Members:	Prof. Qiang Yang (Supervisor)
  			Dr. Kai Chen (Chairperson)
 			Prof. Huamin Qu
 			Dr. Yangqiu Song


**** ALL are Welcome ****