Replace the document detection model
117
paddle_detection/deploy/fastdeploy/kunlunxin/python/README.md
Normal file
@@ -0,0 +1,117 @@
[English](README.md) | 简体中文

# PaddleDetection KunlunXin XPU Python Deployment Example

This directory provides `infer.py` as a quick example of accelerated deployment of the PPYOLOE model on KunlunXin XPU.

## 1. Overview

PaddleDetection models can be deployed quickly with FastDeploy on NVIDIA GPU, x86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete or integrated) hardware. The model families FastDeploy currently supports include, but are not limited to, `PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, `FasterRCNN`, `SSD`, `PaddleYOLOv5`, `PaddleYOLOv6`, `PaddleYOLOv7`, `RTMDet`, `CascadeRCNN`, `PSSDet`, `RetinaNet`, `PPYOLOESOD`, `FCOS`, `TTFNet`, `TOOD`, and `GFL`. The constructors and predict functions of all these classes take exactly the same parameters, so any model can be called by following the PPYOLOE example.

## 2. Deployment Environment Setup

Before deployment, build and install the KunlunXin XPU FastDeploy Python wheel yourself; see [KunlunXin XPU deployment environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装).

## 3. Deployment Model Preparation

Before deployment, prepare the inference model you want to run. You can either use a [pre-exported inference model](../README.md) or [export a PaddleDetection deployment model yourself](../README.md).

## 4. Running the Deployment Example

Taking inference on Linux as an example, run the following commands in this directory to complete the test. This model requires FastDeploy 1.0.4 or newer (x.x.x >= 1.0.4).
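
The version requirement above can be checked programmatically. A minimal sketch, assuming dotted numeric version strings; the `meets_minimum` helper is our own, not a FastDeploy API (compare against `fastdeploy.__version__` if your build exposes it):

```python
def meets_minimum(version: str, minimum: str = "1.0.4") -> bool:
    """Numerically compare dotted version strings, e.g. '1.0.10' >= '1.0.4'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(meets_minimum("1.0.10"))  # True: numeric, not lexicographic, comparison
print(meets_minimum("0.9.9"))   # False
```

Note that a plain string comparison would wrongly reject "1.0.10"; converting to integer tuples avoids that.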

### 4.1 Object Detection Example

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/kunlunxin/python
# Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# git checkout develop

# Download the PPYOLOE model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz

# Run the deployment example
# KunlunXin inference
python infer.py --model_dir ppyoloe_crn_l_300e_coco --image_file 000000014439.jpg
```

The visualized result is shown below:

<div align="center">
<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg" width=480px height=320px />
</div>

### 4.2 Keypoint Detection Example

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/kunlunxin/python
# Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# git checkout develop

# Download the PP-TinyPose model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/hrnet_demo.jpg

# Run the deployment example
python pptinypose_infer.py --model_dir PP_TinyPose_256x192_infer --image_file hrnet_demo.jpg
```

The visualized result is shown below:

<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/196386764-dd51ad56-c410-4c54-9580-643f282f5a83.jpeg" width=359px height=423px />
</div>

For multi-person keypoint detection, see the [PPTinyPose pipeline example](./det_keypoint_unite/).

## 5. Deployment Example Options

|Parameter|Description|Default|
|---|---|---|
|--model_dir|Path to the model directory|None|
|--image_file|Path to the test image|None|

## 6. PaddleDetection Python Interface

As noted in Section 1, the constructors and predict functions of all supported model classes take exactly the same parameters, so any model can be called by following the PPYOLOE example.

### 6.1 Object Detection and Instance Segmentation Models
```python
fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PicoDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PaddleYOLOX(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.YOLOv3(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PPYOLO(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.FasterRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.MaskRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.SSD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PaddleYOLOv5(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PaddleYOLOv6(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PaddleYOLOv7(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.RTMDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.CascadeRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PSSDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.RetinaNet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.PPYOLOESOD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.FCOS(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.TTFNet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.TOOD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
fastdeploy.vision.detection.GFL(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

### 6.2 Keypoint Detection Model

```python
fd.vision.keypointdetection.PPTinyPose(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

These calls load and initialize a PaddleDetection model, where `model_file` and `params_file` are the exported Paddle deployment model files and `config_file` is the deployment configuration YAML file that PaddleDetection exports alongside them.
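
The three paths can be derived from the export directory alone. A minimal stdlib sketch; the `exported_model_files` helper is our own, for illustration only:

```python
import os

def exported_model_files(model_dir):
    """Return (model_file, params_file, config_file) for an exported PaddleDetection directory."""
    return (
        os.path.join(model_dir, "model.pdmodel"),
        os.path.join(model_dir, "model.pdiparams"),
        os.path.join(model_dir, "infer_cfg.yml"),
    )

model_file, params_file, config_file = exported_model_files("ppyoloe_crn_l_300e_coco")
print(model_file)
```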

## 7. More Guides

- [PaddleDetection Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/object_detection.html)
- [Overview of deploying PaddleDetection models with FastDeploy](../../)
- [C++ deployment](../cpp)

## 8. FAQ

- [How to switch the model inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPUs (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
@@ -0,0 +1,65 @@

[English](README.md) | 简体中文

# PP-PicoDet + PP-TinyPose (Pipeline) KunlunXin XPU Python Deployment Example

This directory provides `det_keypoint_unite_infer.py` as a quick example of `single-image multi-person keypoint detection` with the multi-person PP-PicoDet + PP-TinyPose pipeline on KunlunXin XPU; run the script below to complete it. **Note**: for standalone deployment of PP-TinyPose as a single model, see [PP-TinyPose single model](../README.md).

## 1. Deployment Environment Setup

Before deployment, confirm your software and hardware environment and install the prebuilt deployment library; see the [FastDeploy installation documentation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装).

## 2. Deployment Model Preparation

Before deployment, prepare the inference model you want to run. You can either use a [pre-exported inference model](../../README.md) or [export a PaddleDetection deployment model yourself](../../README.md).

## 3. Running the Deployment Example

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/fastdeploy/kunlunxin/python/det_keypoint_unite
# Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# git checkout develop

# Download the PP-TinyPose and PP-PicoDet model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg

# Run the deployment example
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image_file 000000018491.jpg
```

The visualized result is shown below:

<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/196393343-eeb6b68f-0bc6-4927-871f-5ac610da7293.jpeg" width=640px height=427px />
</div>

- For how to use different inference backends and different hardware with FastDeploy, see [How to switch the model inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md).

## 4. Deployment Example Options

|Parameter|Description|Default|
|---|---|---|
|--tinypose_model_dir|Path to the keypoint model directory|None|
|--det_model_dir|Path to the detection model directory|None|
|--image_file|Path to the test image|None|
## 5. PPTinyPose Pipeline Python Interface

```python
fd.pipeline.PPTinyPose(det_model=None, pptinypose_model=None)
```

This loads and initializes the PPTinyPose pipeline, where `det_model` is a detection model initialized with `fd.vision.detection.PicoDet` and `pptinypose_model` is a keypoint detection model initialized with `fd.vision.keypointdetection.PPTinyPose`.
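
Conceptually, the pipeline first runs the detector and then feeds each person box that passes the score threshold to the keypoint model. A stdlib-only sketch of that control flow, using stand-in functions rather than the real FastDeploy models (an illustration of the idea, not the actual implementation):

```python
def detect_people(image):
    # stand-in for det_model.predict(): returns (box, score) pairs
    return [((10, 20, 110, 220), 0.9), ((200, 40, 300, 260), 0.3)]

def estimate_keypoints(image, box):
    # stand-in for the keypoint model run on one person region
    x1, y1, x2, y2 = box
    return [((x1 + x2) / 2, (y1 + y2) / 2)]  # one dummy keypoint per box

def pipeline_predict(image, score_threshold=0.5):
    results = []
    for box, score in detect_people(image):
        if score < score_threshold:  # mirrors pipeline.detection_model_score_threshold
            continue
        results.append(estimate_keypoints(image, box))
    return results

print(pipeline_predict(None))  # only the 0.9-score box survives the 0.5 threshold
```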

## 6. More Guides

- [PaddleDetection Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/object_detection.html)
- [Overview of deploying PaddleDetection models with FastDeploy](../../../)
- [C++ deployment](../../cpp)

## 7. FAQ

- [How to switch the model inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPUs (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
@@ -0,0 +1,67 @@

import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--tinypose_model_dir",
        required=True,
        help="path of the PP-TinyPose model directory")
    parser.add_argument(
        "--det_model_dir",
        required=True,
        help="path of the PaddleDetection detection model directory")
    parser.add_argument(
        "--image_file", required=True, help="path of the test image file")
    return parser.parse_args()


def build_picodet_option(args):
    option = fd.RuntimeOption()
    option.use_kunlunxin()
    return option


def build_tinypose_option(args):
    option = fd.RuntimeOption()
    option.use_kunlunxin()
    return option


args = parse_arguments()
picodet_model_file = os.path.join(args.det_model_dir, "model.pdmodel")
picodet_params_file = os.path.join(args.det_model_dir, "model.pdiparams")
picodet_config_file = os.path.join(args.det_model_dir, "infer_cfg.yml")

# set up the detection model runtime
runtime_option = build_picodet_option(args)
det_model = fd.vision.detection.PicoDet(
    picodet_model_file,
    picodet_params_file,
    picodet_config_file,
    runtime_option=runtime_option)

tinypose_model_file = os.path.join(args.tinypose_model_dir, "model.pdmodel")
tinypose_params_file = os.path.join(args.tinypose_model_dir, "model.pdiparams")
tinypose_config_file = os.path.join(args.tinypose_model_dir, "infer_cfg.yml")

# set up the keypoint model runtime
runtime_option = build_tinypose_option(args)
tinypose_model = fd.vision.keypointdetection.PPTinyPose(
    tinypose_model_file,
    tinypose_params_file,
    tinypose_config_file,
    runtime_option=runtime_option)

# predict
im = cv2.imread(args.image_file)
pipeline = fd.pipeline.PPTinyPose(det_model, tinypose_model)
pipeline.detection_model_score_threshold = 0.5
pipeline_result = pipeline.predict(im)
print("Paddle TinyPose Result:\n", pipeline_result)

# visualize
vis_im = fd.vision.vis_keypoint_detection(
    im, pipeline_result, conf_threshold=0.2)
cv2.imwrite("visualized_result.jpg", vis_im)
print("TinyPose visualized result saved to ./visualized_result.jpg")
45
paddle_detection/deploy/fastdeploy/kunlunxin/python/infer.py
Normal file
@@ -0,0 +1,45 @@

import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_dir", required=True, help="Path of the PaddleDetection model directory.")
    parser.add_argument(
        "--image_file", type=str, required=True, help="Path of the test image file.")
    return parser.parse_args()


args = parse_arguments()

runtime_option = fd.RuntimeOption()
runtime_option.use_kunlunxin()

# both arguments are required by argparse, so they are always set here
model_file = os.path.join(args.model_dir, "model.pdmodel")
params_file = os.path.join(args.model_dir, "model.pdiparams")
config_file = os.path.join(args.model_dir, "infer_cfg.yml")

# set up the model with the KunlunXin runtime option
model = fd.vision.detection.PPYOLOE(
    model_file, params_file, config_file, runtime_option=runtime_option)

# predict
im = cv2.imread(args.image_file)
result = model.predict(im)
print(result)

# visualize
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
print("Visualized result saved to ./visualized_result.jpg")
@@ -0,0 +1,42 @@

import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_dir",
        required=True,
        help="path of the PP-TinyPose model directory")
    parser.add_argument(
        "--image_file", required=True, help="path of the test image file")
    return parser.parse_args()


args = parse_arguments()

runtime_option = fd.RuntimeOption()
runtime_option.use_kunlunxin()

tinypose_model_file = os.path.join(args.model_dir, "model.pdmodel")
tinypose_params_file = os.path.join(args.model_dir, "model.pdiparams")
tinypose_config_file = os.path.join(args.model_dir, "infer_cfg.yml")

# set up the model with the KunlunXin runtime option
tinypose_model = fd.vision.keypointdetection.PPTinyPose(
    tinypose_model_file,
    tinypose_params_file,
    tinypose_config_file,
    runtime_option=runtime_option)

# predict
im = cv2.imread(args.image_file)
tinypose_result = tinypose_model.predict(im)
print("Paddle TinyPose Result:\n", tinypose_result)

# visualize
vis_im = fd.vision.vis_keypoint_detection(
    im, tinypose_result, conf_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
print("TinyPose visualized result saved to ./visualized_result.jpg")