This sample demonstrates DL model compression capabilities for Instance Segmentation task. The sample consists of basic steps such as DL model initialization, dataset preparation, training loop over epochs and validation steps. The sample receives a configuration file where the training schedule, hyper-parameters, and compression settings are defined.
At this point it is assumed that you have already installed NNCF. You can find information on installing NNCF here.
To work with the sample you should install the corresponding Python package dependencies:
pip install -r examples/tensorflow/requirements.txt
This scenario demonstrates quantization with fine-tuning of Mask R-CNN with ResNet-50 backbone on the COCO2017 dataset.
The instance segmentation sample supports the dataset only in the TFRecords format.
To download the COCO2017 dataset and convert it to the TFRecord format, please use the download_and_preprocess_coco.sh script from the official TensorFlow TPU repository.
bash <path_to_tensorflow_tpu_repo>/tools/datasets/download_and_preprocess_coco.sh <path_to_coco_data_dir>
This script installs the required libraries and then runs the dataset preprocessing. The output of the script is *.tfrecord files in your local data directory.
The COCO2017 dataset should be specified in the configuration file as follows:
"dataset": "coco/2017"
After the data preparation, you can run the sample by following these steps:
If you did not install the package, add the repository root folder to the PYTHONPATH environment variable.
Go to the examples/tensorflow/segmentation folder.
Download the pre-trained Mask R-CNN weights in checkpoint format and provide the path to them using the --weights flag.
Specify the GPUs to be used for training by setting the CUDA_VISIBLE_DEVICES environment variable. This is necessary because training and validation during training must be performed on different GPU devices. Please note that usually only one GPU is required for validation during training.
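For example, the device split between the training process and a separately started validation process might look like this (the device indices are purely illustrative):

```shell
# Reserve GPUs 0-2 for the training process; the validation process started
# in a separate shell would then get a disjoint device, e.g.
# CUDA_VISIBLE_DEVICES=3, so the two do not compete for the same GPU.
export CUDA_VISIBLE_DEVICES=0,1,2
echo "$CUDA_VISIBLE_DEVICES"
```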
(Optional) Before compressing a model, it is highly recommended to check the accuracy of the pretrained model. Use the following command:
python evaluation.py \
--mode=test \
--config=configs/quantization/mask_rcnn_coco_int8.json \
--weights=<path_to_ckpt_file_with_pretrained_weights> \
--data=<path_to_dataset> \
--batch-size=1 \
--disable-compression
Run the following command to start compression with fine-tuning on all available GPUs on the machine:
python train.py \
--config=configs/quantization/mask_rcnn_coco_int8.json \
--weights=<path_to_ckpt_file_with_pretrained_weights> \
--data=<path_to_dataset> \
--log-dir=../../results/quantization/maskrcnn_coco_int8
Use the --resume flag with either the path to a checkpoint to resume training from that checkpoint, or the path to a folder with checkpoints to resume training from the last checkpoint in it.
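As an illustration of the folder case: TensorFlow tracks the most recent checkpoint in a folder via a small plain-text index file named "checkpoint", which is how resuming from a folder can pick up the last checkpoint. A throwaway sketch of that file (the checkpoint name ckpt-42 is hypothetical):

```shell
# Create a temporary folder that mimics a TensorFlow checkpoint directory.
CKPT_DIR=$(mktemp -d)
# TensorFlow writes the latest checkpoint prefix into a "checkpoint" index file.
printf 'model_checkpoint_path: "ckpt-42"\n' > "$CKPT_DIR/checkpoint"
# Resuming from a folder reads this file to locate the last checkpoint.
cat "$CKPT_DIR/checkpoint"
```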
To start checkpoint validation during training, follow these steps:
If you did not install the package, add the repository root folder to the PYTHONPATH environment variable.
Go to the examples/tensorflow/segmentation folder.
Specify the GPUs to be used for validation during training by setting the CUDA_VISIBLE_DEVICES environment variable.
Run the following command to start checkpoint validation during training:
python evaluation.py \
--mode=train \
--config=configs/quantization/mask_rcnn_coco_int8.json \
--data=<path_to_dataset> \
--batch-size=1 \
--checkpoint-save-dir=<path_to_checkpoints>
To estimate the test scores of your trained model checkpoint, use the following command:
python evaluation.py \
--mode=test \
--config=configs/quantization/mask_rcnn_coco_int8.json \
--data=<path_to_dataset> \
--batch-size=1 \
--resume=<path_to_trained_model_checkpoint>
To export the trained model to a Frozen Graph, use the following command:
python evaluation.py \
--mode=export \
--config=configs/quantization/mask_rcnn_coco_int8.json \
--batch-size=1 \
--resume=<path_to_trained_model_checkpoint> \
--to-frozen-graph=../../results/mask_rcnn_coco_int8.pb
To export the trained model to a SavedModel, use the following command:
python evaluation.py \
--mode=export \
--config=configs/quantization/mask_rcnn_coco_int8.json \
--batch-size=1 \
--resume=<path_to_trained_model_checkpoint> \
--to-saved-model=../../results/saved_model
To export a model to the OpenVINO IR and run it using the Intel® Deep Learning Deployment Toolkit, refer to this tutorial.
Download the pre-trained ResNet-50 checkpoint from here.
If you did not install the package, add the repository root folder to the PYTHONPATH environment variable.
Go to the examples/tensorflow/segmentation folder.
Run the following command to start training Mask R-CNN from scratch on all available GPUs on the machine:
python train.py \
--config=configs/mask_rcnn_coco.json \
--backbone-checkpoint=<path_to_resnet50-2018-02-07_folder> \
--data=<path_to_dataset> \
--log-dir=../../results/quantization/maskrcnn_coco_baseline
Please see the compression results for TensorFlow instance segmentation at our Model Zoo page.