English | 简体中文

# Paddle Inference Deployment on Linux (C++)

## 1. Description

This document shows how to deploy a segmentation model on a Linux server (Nvidia GPU or x86 CPU) using the Paddle Inference C++ API. The main steps are:

- Prepare the environment
- Prepare models and pictures
- Compile and execute

PaddlePaddle provides multiple prediction engines for deploying models in different scenarios (as shown in the figure below). For details, please refer to the documentation.

(Figure: the Paddle inference ecosystem)

## 2. Prepare the environment

### Prepare the Paddle Inference C++ prediction library

You can download the Paddle Inference C++ prediction library from link.

Pay attention to selecting the correct version according to the machine's CUDA version, cuDNN version, whether MKLDNN or OpenBLAS is used, whether TensorRT is used, and other details. It is recommended to choose a prediction library with version >= 2.0.1.
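To help pick a matching build, the snippet below is a minimal sketch that reports the local CUDA toolkit version via `nvcc`, and falls back to recommending a CPU build when no toolkit is found (paths and tool availability vary by machine):

```shell
# Report the local CUDA toolkit version so a matching prediction
# library build can be chosen; fall back to a CPU recommendation.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -o "release [0-9.]*"
else
  echo "No CUDA toolkit found: choose a CPU (MKLDNN or OpenBLAS) build."
fi
```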

Download the paddle_inference.tgz archive, decompress it, and place the resulting paddle_inference directory under PaddleSeg/deploy/cpp/.

If you need to compile the Paddle Inference C++ prediction library yourself, refer to the documentation; the steps are not repeated here.

### Prepare OpenCV

This example uses OpenCV to read images, so OpenCV needs to be prepared.

Run the following commands to download, compile, and install OpenCV.

```shell
sh install_opencv.sh
```

### Install Yaml, Gflags and Glog

This example uses Yaml, Gflags and Glog.

Run the following commands to download, compile, and install these libs.

```shell
sh install_yaml.sh
sh install_gflags.sh
sh install_glog.sh
```
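The four setup scripts can also be run in one pass; this is just a convenience sketch that stops at the first failure (the script names are the ones shipped in PaddleSeg/deploy/cpp):

```shell
# Run the third-party setup scripts in order, stopping on the first failure.
for s in install_opencv.sh install_yaml.sh install_gflags.sh install_glog.sh; do
  echo "running $s"
  sh "$s" || { echo "failed: $s" >&2; break; }
done
```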

## 3. Prepare models and pictures

Execute the following command in the PaddleSeg/deploy/cpp/ directory to download a test model. If you need to test other models, please refer to the documentation on exporting prediction models.

```shell
wget https://paddleseg.bj.bcebos.com/dygraph/demo/pp_liteseg_infer_model.tar.gz
tar xf pp_liteseg_infer_model.tar.gz
```

Download one image from the Cityscapes validation set.

```shell
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
```

## 4. Compile and execute

Please check that PaddleSeg/deploy/cpp/ contains the prediction library, model, and image, as follows.

```
PaddleSeg/deploy/cpp
|-- paddle_inference          # prediction library
|-- pp_liteseg_infer_model    # model
|-- cityscapes_demo.png       # test image
```
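Before compiling, a quick sanity check (a sketch assuming you are already inside PaddleSeg/deploy/cpp) can confirm the three items above are in place:

```shell
# Report any of the expected items that are missing from the current directory.
for p in paddle_inference pp_liteseg_infer_model cityscapes_demo.png; do
  [ -e "$p" ] || echo "missing: $p"
done
```

If the loop prints nothing, the layout matches the tree above and the build scripts can be run.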

Execute `sh run_seg_cpu.sh` to compile and then run prediction on an x86 CPU.

Execute `sh run_seg_gpu.sh` to compile and then run prediction on an Nvidia GPU.

The segmentation result is saved as "out_img.jpg" in the current directory, as shown below. Note that histogram equalization has been applied to this image for easier visualization.

(Figure: segmentation result, out_img.jpg)
