This repository contains the author's implementation in PyTorch for the paper "Adaptive Label-aware Graph Convolutional Networks for Cross-Modal Retrieval".
- Python (>=3.7)
- PyTorch (>=1.2.0)
- Scipy (>=1.3.2)
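A quick way to confirm the interpreter meets the stated requirement is the check below; it only verifies the Python version (the PyTorch and Scipy versions can be inspected similarly via `torch.__version__` and `scipy.__version__`):

```python
import sys

# Sanity check against the minimum Python version listed above.
assert sys.version_info >= (3, 7), "ALGCN requires Python >= 3.7"
print(sys.version_info[:2])
```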
You can download the features of the datasets from:
- MIRFlickr
- NUS-WIDE (top-21 concepts)
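Precomputed features for such benchmarks are commonly packaged as MATLAB `.mat` files, which is one reason Scipy is listed as a dependency. A minimal round-trip sketch is shown below; the file name and the `image_feats` key are illustrative assumptions, not the repository's actual format:

```python
import numpy as np
import scipy.io as sio

# Hypothetical example: write and read a feature matrix the way dataset
# files are often packaged. "image_feats" is an illustrative key only.
sio.savemat("demo_feats.mat", {"image_feats": np.zeros((4, 512), dtype=np.float32)})
feats = sio.loadmat("demo_feats.mat")["image_feats"]
print(feats.shape)  # (4, 512)
```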
Here we provide the implementation of ALGCN, along with datasets. The repository is organised as follows:
- `data/` contains the necessary dataset files for NUS-WIDE and MIRFlickr;
- `models.py` contains the implementation of the ALGCN;
- `main.py` puts all of the above together and can be used to execute a full training run on MIRFlickr or NUS-WIDE.
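For orientation, graph convolutional models such as ALGCN build on the standard GCN propagation rule H' = D^{-1/2}(A + I)D^{-1/2} H W. The sketch below implements that generic rule in NumPy; it is background only, not the adaptive label-aware variant found in `models.py`:

```python
import numpy as np

# Generic GCN propagation step (standard rule, NOT the ALGCN variant):
#   H' = D^{-1/2} (A + I) D^{-1/2} H W
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # inverse sqrt degrees
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return A_norm @ H @ W                          # propagate, then project

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # toy 2-node graph
H = np.ones((2, 3))                      # node features
W = np.ones((3, 2))                      # layer weights
print(gcn_layer(A, H, W).shape)          # (2, 2)
```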
- Place the datasets in `data/`.
- Set `dataset` in `main.py` to `mirflickr` or `NUS-WIDE-TC21`.
- Train a model: `python main.py`
- For evaluation, set `EVAL = True` in `main.py` and run: `python main.py`

If you find our work or the code useful, please consider citing our paper:
@article{qian2021adaptive,
  title={Adaptive Label-aware Graph Convolutional Networks for Cross-Modal Retrieval},
  author={Qian, Shengsheng and Xue, Dizhan and Fang, Quan and Xu, Changsheng},
  journal={IEEE Transactions on Multimedia},
  year={2021},
  publisher={IEEE},
  pages={1-1},
  doi={10.1109/TMM.2021.3101642}
}