The official implementation of the ICASSP 2019 paper "A Simple Way of Multimodal and Arbitrary Style Transfer".
Download the MS COCO 2017 train and validation sets here. Put the downloaded folders into a single root folder.
Next, download the WikiArt train/test dataset. Put the train/test split CSV files into the downloaded folder as well.
First, download the trained normalized VGG weight here.
We bootstrap the network from a trained AdaIN model. To train an AdaIN model in Theano, please see here. We also provide a trained AdaIN weight here. Put all the weight files into the project root.
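Since the network is bootstrapped from AdaIN, the sketch below illustrates the AdaIN operation itself (following Huang et al.). This is an illustration only, not code from this repository; the function name, NumPy backend, and `(C, H, W)` feature shape are assumptions.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization (illustrative sketch, not repo code).

    Re-normalizes the content features so that each channel's mean and
    standard deviation match those of the style features.
    content_feat, style_feat: arrays of shape (C, H, W).
    """
    # Per-channel statistics over the spatial dimensions.
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps  # eps avoids division by zero
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    # Whiten with content statistics, then re-color with style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

After this operation, the output features carry the content layout but the style's channel-wise statistics, which is what the decoder is trained to invert into a stylized image.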
To train a network using the default settings, run
python train.py path/to/COCO-train-valid-root path/to/WikiArt
To test the network, prepare a folder of input images, a folder of style images, and a pretrained model. A trained model can be downloaded here. Then run
python test.py path/to/input/images path/to/style/images path/to/a/trained/weight/file
By default, the script generates 5 images per style. Use --help
to see more options.
If you use this implementation in your paper, please cite the following paper:
@INPROCEEDINGS{multar,
author={A. {Nguyen} and S. {Choi} and W. {Kim} and S. {Lee}},
booktitle={ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={A Simple Way of Multimodal and Arbitrary Style Transfer},
year={2019},
volume={},
number={},
pages={1752-1756},
keywords={Image style transfer;convolutional neural network;deep learning},
doi={10.1109/ICASSP.2019.8683493},
ISSN={2379-190X},
month={May},
}
- "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" by Huang et al.
For more information, check out my implementation of AdaIN.