STGAN (CVPR 2019)

Tensorflow implementation of STGAN: A Unified Selective Transfer Network for Arbitrary Image Attribute Editing


Overall architecture of our STGAN. Taking the image above as an example, in the difference attribute vector $\mathbf{att}_\mathit{diff}$, $Young$ is set to 1, $Mouth\ Open$ is set to -1, and the others are set to zero. The outputs of $\mathit{D_{adv}}$ and $\mathit{D_{att}}$ are the scalar $\mathit{D_{adv}}(\mathit{G}(\mathbf{x}, \mathbf{att}_\mathit{diff}))$ and the vector $\mathit{D_{att}}(\mathit{G}(\mathbf{x}, \mathbf{att}_\mathit{diff}))$, respectively.
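
As a minimal illustration of this difference attribute vector (a sketch, not the repository's code, and assuming binary 0/1 attribute labels), it can be formed by subtracting the source attributes from the target ones:

    import numpy as np

    # Hypothetical two-attribute example matching the figure caption:
    # the input is not Young and has Mouth Open; the edit requests the opposite.
    atts = ["Young", "Mouth_Open"]
    att_source = np.array([0, 1], dtype=np.float32)  # attributes of the input image
    att_target = np.array([1, 0], dtype=np.float32)  # desired attributes after editing

    att_diff = att_target - att_source               # -> [ 1., -1.]; unedited attributes stay 0
    print(dict(zip(atts, att_diff)))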

Exemplar Results

  • See results.md for more results

  • Facial attribute editing results


    Facial attribute editing results on the CelebA dataset. The rows from top to bottom are the results of IcGAN, FaderNet, AttGAN, StarGAN and STGAN.


    High resolution ($384\times384$) results of STGAN for facial attribute editing.
  • Image translation results


    Results of season translation: the top two rows are $summer \rightarrow winter$, and the bottom two rows are $winter \rightarrow summer$.

Preparation

  • Prerequisites

    • Tensorflow (r1.4 - r1.12 should work fine)
    • Python 3.x with matplotlib, numpy and scipy
  • Dataset

    • CelebA dataset (Find more details from the project page)
      • Images should be placed in DATAROOT/img_align_celeba/*.jpg
      • Attribute labels should be placed in DATAROOT/list_attr_celeba.txt (a minimal parsing sketch is given after this list)
      • If Google Drive is unreachable, you can get the data from Baidu Cloud
    • We follow the settings of AttGAN; kindly refer to AttGAN for more details on dataset preparation
  • Pre-trained model
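
For reference, a minimal parsing sketch for the attribute file is given below. It assumes the standard list_attr_celeba.txt layout (an image count on the first line, the attribute names on the second, then one line per image with ±1 labels); it is an illustration, not the loader used by train.py.

    import numpy as np

    # Minimal parsing sketch (an assumption about the standard CelebA file layout):
    # first line = image count, second line = attribute names,
    # remaining lines = "<filename> <±1 labels ...>".
    def load_celeba_attrs(path):
        with open(path) as f:
            lines = f.read().splitlines()
        att_names = lines[1].split()                    # attribute names
        files, labels = [], []
        for line in lines[2:]:
            parts = line.split()
            if not parts:
                continue
            files.append(parts[0])                      # e.g. 000001.jpg
            labels.append([(int(v) + 1) // 2 for v in parts[1:]])  # map -1/1 to 0/1
        return files, att_names, np.array(labels, dtype=np.int64)

    # Example (DATAROOT is the directory passed via --dataroot):
    # files, att_names, labels = load_celeba_attrs("DATAROOT/list_attr_celeba.txt")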

Quick Start

Exemplar commands are listed here for a quick start.

Training

  • for 128x128 images

    python train.py --experiment_name 128
  • for 384x384 images (please prepare data according to HD-CelebA)

    python train.py --experiment_name 384 --img_size 384 --enc_dim 48 --dec_dim 48 --dis_dim 48 --dis_fc_dim 512 --n_sample 24 --use_cropped_img

Testing

  • Example of testing single attribute

    python test.py --experiment_name 128 [--test_int 1.0]
  • Example of testing multiple attributes

    python test.py --experiment_name 128 --test_atts Pale_Skin Male [--test_ints 1.0 1.0]
  • Example of attribute intensity control (a sketch of the implied intensity sweep follows these examples)

    python test.py --experiment_name 128 --test_slide --test_att Male [--test_int_min -1.0 --test_int_max 1.0 --n_slide 10]

The arguments in [] are optional with a default value.
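
For the intensity-control mode above, the sketch below shows the kind of sweep that --test_slide implies; it assumes intensities evenly spaced between --test_int_min and --test_int_max, which is an interpretation of the flags rather than a transcript of test.py (the attribute count and index are illustrative).

    import numpy as np

    # Hypothetical sweep for --test_slide --test_att Male: n_slide evenly spaced
    # intensities scale the edited entry of the difference attribute vector,
    # while all other entries stay zero.
    test_int_min, test_int_max, n_slide = -1.0, 1.0, 10
    n_atts, male_idx = 13, 7  # illustrative values, not taken from the repository

    for intensity in np.linspace(test_int_min, test_int_max, n_slide):
        att_diff = np.zeros(n_atts, dtype=np.float32)
        att_diff[male_idx] = intensity
        # feed (image, att_diff) to the trained generator here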

View Images

You can use show_image.py to view the generated images; the code has been tested on Windows 10 and Ubuntu 16.04 (Python 3.6). If you want to change the width of the buttons at the bottom, you can change the width parameter in the 160th line. The '+++' and '---' on a button indicate that the image above it is modified to 'add' or 'remove' the corresponding attribute. Note that you should specify the path of the CelebA attribute file (list_attr_celeba.txt) in the 82nd line.

NOTE:

  • You should give the path of the data by adding --dataroot DATAROOT;
  • You can specify which GPU to use by adding --gpu GPU, e.g., --gpu 0;
  • You can specify which image(s) to test by adding --img num (e.g., --img 182638, --img 200000 200001 200002), where the number should be no larger than 202599; it is suggested to be no smaller than 182638, as our test set starts at image 182638.
  • You can modify the model by using the following arguments (an example command combining them is given after this list)
    • --label: 'diff' (default) for the difference attribute vector, 'target' for the target attribute vector
    • --stu_norm: 'none' (default), 'bn' or 'in' for no/batch/instance normalization in the STUs
    • --mode: 'wgan' (default), 'lsgan' or 'dcgan' for different GAN losses
    • For more arguments, please refer to train.py
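
For example, a run combining these options (the experiment name below is arbitrary) that trains with the LSGAN loss and instance normalization in the STUs:

    python train.py --experiment_name 128_lsgan_in --mode lsgan --stu_norm in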

AttGAN

  • Train with the AttGAN model by

    python train.py --experiment_name attgan_128 --use_stu false --shortcut_layers 1 --inject_layers 1

Citation

If you find STGAN useful in your research work, please consider citing:

@InProceedings{liu2019stgan,
  title={STGAN: A Unified Selective Transfer Network for Arbitrary Image Attribute Editing},
  author={Liu, Ming and Ding, Yukang and Xia, Min and Liu, Xiao and Ding, Errui and Zuo, Wangmeng and Wen, Shilei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

Acknowledgement

The code is built upon AttGAN; thanks for their excellent work!

MIT License

Copyright (c) 2019 Vision Perception and Cognition Group

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
