Continual Self-Supervised Learning through Distillation and Replay

Published:

Self-supervised learning aims to learn useful representations of input data without relying on human annotations. When trained offline on large amounts of unlabeled data, self-supervised models have been shown to produce visual representations on par with, or better than, those of supervised models. However, their efficacy is drastically diminished in continual learning (CL) settings, where data is fed to the model sequentially. Numerous continual learning techniques have recently been proposed for a variety of computer vision problems. In this work, we tackle the highly challenging problem of continually learning a useful representation when input data arrives sequentially, by combining distillation and replay. By adding a prediction layer that forces the current representation vectors to precisely match the frozen, previously learned representations, together with an effective selection memory for replaying previous data, we can prevent catastrophic forgetting and keep training our models.

Language: Python
Tasks: Continual Learning, Class-IL, Task-IL, Data-IL, Self-Supervised Learning

Code: cssl-distill-replay
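
To make the distillation-plus-replay idea described above concrete, below is a minimal PyTorch sketch of one continual-learning task: a self-supervised loss on the new data, a predictor head that maps current representations onto those of a frozen copy of the encoder, and a small reservoir-style memory for replaying past samples. The names `Predictor`, `ReplayBuffer`, `train_task`, the `encoder.out_dim` attribute, and the `ssl_loss(encoder, batch)` callable are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the distillation + replay training step (assumptions noted above).
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class Predictor(nn.Module):
    """Small MLP head that maps current features onto the frozen (past) feature space."""
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(inplace=True),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)


class ReplayBuffer:
    """Reservoir-sampling memory that keeps a small selection of past inputs for replay."""
    def __init__(self, capacity: int = 2000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, batch: torch.Tensor):
        for x in batch:
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(x.detach().cpu())
            else:
                j = random.randint(0, self.seen - 1)
                if j < self.capacity:
                    self.data[j] = x.detach().cpu()

    def sample(self, n: int, device):
        idx = random.sample(range(len(self.data)), min(n, len(self.data)))
        return torch.stack([self.data[i] for i in idx]).to(device)


def train_task(encoder, ssl_loss, loader, buffer, device, distill_weight=1.0, epochs=1):
    """One CL task: SSL objective on new data plus predictor-based distillation
    against a frozen snapshot of the encoder, on both new and replayed samples."""
    frozen = copy.deepcopy(encoder).eval()          # snapshot of past knowledge
    for p in frozen.parameters():
        p.requires_grad_(False)

    predictor = Predictor(dim=encoder.out_dim).to(device)   # out_dim is an assumed attribute
    opt = torch.optim.SGD(list(encoder.parameters()) + list(predictor.parameters()), lr=0.05)

    for _ in range(epochs):
        for x in loader:                            # x: batch of unlabeled images
            x = x.to(device)
            replay = buffer.sample(x.size(0), device) if buffer.data else None
            x_all = torch.cat([x, replay]) if replay is not None else x

            z_new = encoder(x_all)                  # current representations
            with torch.no_grad():
                z_old = frozen(x_all)               # frozen, previously learned representations

            # The predictor forces current features to stay mappable onto past ones.
            distill = F.mse_loss(F.normalize(predictor(z_new), dim=1),
                                 F.normalize(z_old, dim=1))
            loss = ssl_loss(encoder, x) + distill_weight * distill

            opt.zero_grad()
            loss.backward()
            opt.step()
            buffer.add(x)                           # keep a selection of current data for later replay
    return encoder, buffer
```

In this sketch, the frozen copy is re-created at the start of each task, so the predictor only has to keep current features consistent with the most recent snapshot rather than with every past state; the replay buffer then covers earlier tasks through the stored samples.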