This repository implements 'SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning' (AAAI 2021). The paper is available at https://arxiv.org/abs/2008.00975.
Dependencies:
- torch
- torchvision
- liblinear
This implementation only supports multi-GPU DistributedDataParallel training, which is faster and simpler; single-GPU and DataParallel training are not supported.
To run unsupervised pre-training of a MoCo-initialized ResNet-50 model, download the MoCo v2 (200 epochs) weights into the pretrain folder, and run:
bash main_train.sh
With a pre-trained model, to train a supervised linear SVM classifier on frozen features, copy the Python interface of liblinear into the liblinear folder, and run:
bash main_val.sh
If you find this code useful for your research, please cite our paper:
@inproceedings{yao2021seco,
  title={SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning},
  author={Yao, Ting and Zhang, Yiheng and Qiu, Zhaofan and Pan, Yingwei and Mei, Tao},
  booktitle={35th AAAI Conference on Artificial Intelligence},
  year={2021}
}