Contrastive Curriculum Augmentation Framework for Self-Supervised Learning
Abstract - With the amount of unstructured data growing every day, learning the underlying structure of data has become more important than the costly alternative of manually labelling it. The primary goal of self-supervised learning methods is to capture fundamental representations of data without relying on labels. In a contrastive learning setting, we have created a curriculum augmentation framework and trained a dual network with it. Our framework gradually updates augmentation parameters within a set limit, progressively making the images harder to classify with every successive iteration. We divided our framework into static composure and dynamic composure sub-parts and found that static composure works better because it suffers comparatively less catastrophic forgetting than dynamic composure. Our experiments show that our curriculum augmentation framework indeed outperforms standard augmentations. We also developed ProAug, which supports our novel curriculum augmentation framework in both supervised and self-supervised training paradigms.
Keywords - Self-Supervised Learning, Contrastive Learning, Augmentation, Curriculum Learning
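To make the idea of gradually updating augmentation parameters within a set limit concrete, the following is a minimal illustrative sketch (not the authors' ProAug implementation; the parameter names and linear schedule are assumptions): each augmentation parameter ramps from an easy starting value toward a hard cap as training proceeds.

```python
# Hypothetical curriculum augmentation schedule: parameters move linearly
# from an easy starting value toward a hard cap ("set limit"), so each
# successive iteration yields harder-to-classify images.

def curriculum_param(step, total_steps, start, cap):
    """Linearly interpolate a parameter from `start` to `cap`,
    clamping at the cap once `total_steps` is reached."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + frac * (cap - start)

# Assumed example parameters: the crop-scale lower bound shrinks
# (more aggressive crops) and color-jitter strength grows over training.
schedule = {
    "min_crop_scale": (0.9, 0.2),   # (easy start, hard cap)
    "jitter_strength": (0.1, 0.8),
}

def params_at(step, total_steps=1000):
    """Return the augmentation parameters in effect at a given step."""
    return {name: curriculum_param(step, total_steps, start, cap)
            for name, (start, cap) in schedule.items()}
```

A "static composure" variant would fix this schedule ahead of training, while a "dynamic composure" variant would adjust it on the fly; the abstract reports the static variant forgetting less.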