Improving Meta-Continual Learning Representations with Representation Replay

MPhil Thesis Defence

By

Mr. Lawrence Ki-on CHAN


Abstract

Continual learning often suffers from catastrophic forgetting. Recently, 
meta-continual learning algorithms have been proposed that use meta-learning 
to learn how to continually learn. A recent state-of-the-art is online-aware 
meta-learning (OML), which can be further improved by incorporating experience 
replay (ER) into its meta-testing. However, using ER only in meta-testing but 
not in meta-training suggests that the model may not be optimally 
meta-trained. In this thesis, we remove this inconsistency in the use of ER 
and improve continual learning representations by integrating ER into 
meta-training as well. We propose to store the samples' representations, 
instead of the samples themselves, in the replay buffer. This ensures that the 
batch nature of ER does not conflict with the online-aware nature of OML. 
Moreover, we introduce a meta-learned sample selection scheme to replace the 
widely used reservoir sampling for populating the replay buffer. This allows 
the most significant samples to be stored, rather than relying on randomness. 
Class-balanced modifiers are further added to the sample selection scheme to 
ensure that each class has sufficient samples stored in the replay buffer. 
Experimental results on a number of real-world meta-continual learning 
benchmark data sets demonstrate that the proposed method outperforms the 
state-of-the-art. Moreover, the learned representations have better clustering 
structure and are more discriminative.
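As background, the reservoir sampling baseline that the proposed sample 
selection scheme replaces can be sketched as follows. This is a generic 
illustration of the standard algorithm (the function name and interface are 
mine, not from the thesis): each incoming item is kept with probability 
capacity / n_seen, so the buffer stays a uniform random sample of the stream.

```python
import random

def reservoir_update(buffer, capacity, item, n_seen, rng=random):
    # Fill the buffer until it reaches capacity; afterwards, replace a
    # random slot with probability capacity / n_seen. This keeps the
    # buffer a uniform sample of all n_seen items observed so far.
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = rng.randrange(n_seen)
        if j < capacity:
            buffer[j] = item

# Stream 1000 items through a buffer of size 10.
buffer = []
for i in range(1, 1001):
    reservoir_update(buffer, capacity=10, item=i, n_seen=i)
```

Because replacement is purely random, rare or late-arriving classes may end up 
under-represented, which is the motivation for the learned, class-balanced 
selection scheme described in the abstract.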


Date:  			Monday, 9 August 2021

Time:			10:30 am - 12:30 pm

Zoom meeting: 
https://hkust.zoom.us/j/93290668906?pwd=UG9kNUcwRG4rUEIrUFBOOWNRWmdxdz09

Committee Members:	Prof. James Kwok (Supervisor)
 			Prof. Raymond Wong (Chairperson)
 			Dr. Brian Mak


**** ALL are Welcome ****