PyTorch implementation of the FERAtt neural net. The Facial Expression Recognition with Attention Net (FERAtt) is based on a dual-branch architecture and consists of four major modules: (i) an attention module $G_{att}$ to extract the attention feature map, (ii) a feature extraction module $G_{ft}$ to obtain essential features from the input image $I$, (iii) a reconstruction module $G_{rec}$ to estimate a good attention image $I_{att}$, and (iv) a representation module $G_{rep}$ responsible for the representation and classification of the facial expression image.
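A minimal sketch of how the four modules above could fit together in PyTorch; the class name, layer sizes, and the way the attention map gates the features are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical sketch of the FERAtt dual-branch layout. Layer widths and the
# gating of features by the attention map are illustrative, not the repo's code.
import torch
import torch.nn as nn

class FERAttSketch(nn.Module):
    def __init__(self, num_classes=8, dim_rep=64):
        super().__init__()
        # (i) attention module G_att: attention feature map in [0, 1]
        self.g_att = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
        # (ii) feature extraction module G_ft: features from the input image I
        self.g_ft = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        # (iii) reconstruction module G_rec: estimates the attention image I_att
        self.g_rec = nn.Conv2d(8, 3, 3, padding=1)
        # (iv) representation module G_rep: representation + classification
        self.g_rep = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(3, dim_rep), nn.ReLU(),
            nn.Linear(dim_rep, num_classes))

    def forward(self, img):
        att_map = self.g_att(img)            # attention feature map
        feats = self.g_ft(img)               # features extracted from I
        i_att = self.g_rec(feats * att_map)  # attention image I_att
        logits = self.g_rep(i_att)           # facial expression classification
        return i_att, logits
```

The key structural point is that classification is performed on the reconstructed attention image $I_{att}$ rather than directly on $I$.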
$git clone https://github.com/pedrodiamel/pytorchvision.git
$cd pytorchvision
$python setup.py install
$pip install -r installation.txt
We now support Visdom for real-time loss visualization during training!
To use Visdom in the browser:
# First install Python server and client
pip install visdom
# Start the server (probably in a screen or tmux)
python -m visdom.server -env_path runs/visdom/
# http://localhost:8097/
./train_bu3dfe.sh
./train_ck.sh
If you find this useful for your research, please cite the following paper.
@article{fernandez2019feratt,
title={FERAtt: Facial Expression Recognition with Attention Net},
author={Fernandez, Pedro D Marrero and Pe{\~n}a, Fidel A Guerrero and Ren, Tsang Ing and Cunha, Alexandre},
journal={arXiv preprint arXiv:1902.03284},
year={2019}
}
We gratefully acknowledge the financial support of the Brazilian government agency FACEPE.