Replace the document detection model

2024-08-27 14:42:45 +08:00
parent aea6f19951
commit 1514e09c40
2072 changed files with 254336 additions and 4967 deletions


@@ -0,0 +1,101 @@
Simplified Chinese | [English](README_en.md)
# Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection
## FCOS Model Zoo
| Model | Supervised Data Ratio | Sup Baseline | Sup Epochs (Iters) | Sup mAP<sup>val<br>0.5:0.95 | Semi mAP<sup>val<br>0.5:0.95 | Semi Epochs (Iters) | Download | Config |
| :------------: | :---------: | :---------------------: | :---------------------: |:---------------------------: |:----------------------------: | :------------------: |:--------: |:----------: |
| DenseTeacher-FCOS | 5% | [sup_config](../baseline/fcos_r50_fpn_2x_coco_sup005.yml) | 24 (8712) | 21.3 | **30.6** | 240 (87120) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_fcos_r50_fpn_coco_semi005.pdparams) | [config](./denseteacher_fcos_r50_fpn_coco_semi005.yml) |
| DenseTeacher-FCOS | 10% | [sup_config](../baseline/fcos_r50_fpn_2x_coco_sup010.yml) | 24 (17424) | 26.3 | **35.1** | 240 (174240) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_fcos_r50_fpn_coco_semi010.pdparams) | [config](./denseteacher_fcos_r50_fpn_coco_semi010.yml) |
| DenseTeacher-FCOS(LSJ)| 10% | [sup_config](../baseline/fcos_r50_fpn_2x_coco_sup010.yml) | 24 (17424) | 26.3 | **37.1(LSJ)** | 240 (174240) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_fcos_r50_fpn_coco_semi010_lsj.pdparams) | [config](./denseteacher_fcos_r50_fpn_coco_semi010_lsj.yml) |
| DenseTeacher-FCOS | 100% (full) | [sup_config](../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml) | 24 (175896) | 42.6 | **44.2** | 24 (175896) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_fcos_r50_fpn_coco_full.pdparams) | [config](./denseteacher_fcos_r50_fpn_coco_full.yml) |
**Notes:**
- The models above are trained on 8 GPUs by default; the total supervised batch_size defaults to 16, the total unsupervised batch_size also defaults to 16, and the default initial learning rate is 0.01. If you change the total batch_size, scale the learning rate linearly (see the helper sketch below);
- **Supervised Data Ratio** is the percentage of labeled COCO data relative to the full COCO train2017 training set. The unlabeled COCO data generally uses the same ratio, but its images do not overlap with the labeled ones;
- `Semi Epochs (Iters)` is the number of epochs (iterations) used for **semi-supervised training**. When using a **custom dataset**, convert the iterations to the corresponding epochs yourself (see the helper sketch below), and try to keep the total iterations close to the COCO settings;
- `Sup mAP` is the accuracy of the model trained with **supervised data only**; refer to the **base detector's config file** and the [baseline](../baseline);
- `Semi mAP` is the accuracy of the **semi-supervised** model; the download and config links point to the **semi-supervised model**;
- `LSJ` stands for **large-scale jittering**, i.e. multi-scale training over a wider scale range; it can further improve accuracy, but training becomes slower;
- For an explanation of the semi-supervised detection configuration, see the [documentation](../README.md/#半监督检测配置);
- The original `Dense Teacher` paper uses `R50-va-caffe` pretrained weights, while PaddleDetection uses `R50-vb` pretrained weights by default. Using an `R50-vd` pretrained model combined with [SSLD](../../../docs/feature_models/SSLD_PRETRAINED_MODEL.md) can further improve detection accuracy significantly; the backbone part of the config must be changed accordingly:
```yaml
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_vd_ssld_v2_pretrained.pdparams
ResNet:
depth: 50
variant: d
norm_type: bn
freeze_at: 0
return_idx: [1, 2, 3]
num_stages: 4
lr_mult_list: [0.05, 0.05, 0.1, 0.15]
```
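As a concrete illustration of the learning-rate and epoch notes above, here is a minimal, hypothetical Python helper. The function names and the 11616-image count for the 10% split are assumptions derived from the table, not part of PaddleDetection:

```python
# Hypothetical helper (not part of PaddleDetection) illustrating two notes above:
# the linear learning-rate scaling rule and converting total iterations to epochs
# for a custom dataset. Names and numbers here are examples only.
import math

def scaled_lr(new_total_batch_size, base_lr=0.01, default_total_batch_size=16):
    """Linear scaling rule: the LR scales with the total supervised batch size."""
    return base_lr * new_total_batch_size / default_total_batch_size

def iters_to_epochs(total_iters, num_labeled_images, total_sup_batch_size=16):
    """Assumes one iteration consumes one supervised batch with drop_last=True,
    which matches the COCO numbers in the table above."""
    iters_per_epoch = num_labeled_images // total_sup_batch_size
    return math.ceil(total_iters / iters_per_epoch)

print(scaled_lr(new_total_batch_size=8))   # 0.005 when halving the total batch size
# 11616 is the labeled-image count implied by the 10% row of the table
# (174240 iters / 240 epochs = 726 iters per epoch, times batch size 16).
print(iters_to_epochs(174240, 11616))      # 240
```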
## PPYOLOE+ Model Zoo
| Model | Supervised Data Ratio | Sup Baseline | Sup Epochs (Iters) | Sup mAP<sup>val<br>0.5:0.95 | Semi mAP<sup>val<br>0.5:0.95 | Semi Epochs (Iters) | Download | Config |
| :------------: | :---------: | :---------------------: | :---------------------: |:---------------------------: |:----------------------------: | :------------------: |:--------: |:----------: |
| DenseTeacher-PPYOLOE+_s | 5% | [sup_config](../baseline/ppyoloe_plus_crn_s_80e_coco_sup005.yml) | 80 (14480) | 32.8 | **34.0** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_s_coco_semi005.pdparams) | [config](./denseteacher_ppyoloe_plus_crn_s_coco_semi005.yml) |
| DenseTeacher-PPYOLOE+_s | 10% | [sup_config](../baseline/ppyoloe_plus_crn_s_80e_coco_sup010.yml) | 80 (14480) | 35.3 | **37.5** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_s_coco_semi010.pdparams) | [config](./denseteacher_ppyoloe_plus_crn_s_coco_semi010.yml) |
| DenseTeacher-PPYOLOE+_l | 5% | [sup_config](../baseline/ppyoloe_plus_crn_l_80e_coco_sup005.yml) | 80 (14480) | 42.9 | **45.4** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_l_coco_semi005.pdparams) | [config](./denseteacher_ppyoloe_plus_crn_l_coco_semi005.yml) |
| DenseTeacher-PPYOLOE+_l | 10% | [sup_config](../baseline/ppyoloe_plus_crn_l_80e_coco_sup010.yml) | 80 (14480) | 45.7 | **47.4** | 200 (36200) | [download](https://paddledet.bj.bcebos.com/models/denseteacher_ppyoloe_plus_crn_l_coco_semi010.pdparams) | [config](./denseteacher_ppyoloe_plus_crn_l_coco_semi010.yml) |
## Usage
The semi-supervised detection config file is required only for training; evaluation, inference, and deployment can also be run with the base detector's config file.
### Training
```bash
# Single-GPU training (not recommended; the learning rate must be scaled linearly)
CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml --eval
# Multi-GPU training
python -m paddle.distributed.launch --log_dir=denseteacher_fcos_semi010/ --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml --eval
```
### Evaluation
```bash
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml -o weights=output/denseteacher_fcos_r50_fpn_coco_semi010/model_final.pdparams
```
### Inference
```bash
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml -o weights=output/denseteacher_fcos_r50_fpn_coco_semi010/model_final.pdparams --infer_img=demo/000000014439.jpg
```
### Deployment
Deployment can use either the semi-supervised detection config file or the base detector's config file.
```bash
# Export the model
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/semi_det/denseteacher/denseteacher_fcos_r50_fpn_coco_semi010.yml -o weights=https://paddledet.bj.bcebos.com/models/denseteacher_fcos_r50_fpn_coco_semi010.pdparams
# Run inference with the exported model
CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/denseteacher_fcos_r50_fpn_coco_semi010 --image_file=demo/000000014439_640x640.jpg --device=GPU
# Benchmark inference speed
CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/denseteacher_fcos_r50_fpn_coco_semi010 --image_file=demo/000000014439_640x640.jpg --device=GPU --run_benchmark=True # --run_mode=trt_fp16
# Export to ONNX
paddle2onnx --model_dir output_inference/denseteacher_fcos_r50_fpn_coco_semi010/ --model_filename model.pdmodel --params_filename model.pdiparams --opset_version 12 --save_file denseteacher_fcos_r50_fpn_coco_semi010.onnx
```
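After the `paddle2onnx` export, one quick sanity check is to load the resulting file with `onnxruntime` and list its inputs and outputs. This is only a sketch: `onnxruntime` is an extra dependency (`pip install onnxruntime`), and the actual input names and shapes depend on the exported model.

```python
# Minimal sanity check of the exported ONNX graph: it only inspects the I/O
# signature and does not assume specific input names.
import onnxruntime as ort

sess = ort.InferenceSession("denseteacher_fcos_r50_fpn_coco_semi010.onnx",
                            providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```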
## Citation
```
@article{denseteacher2022,
title={Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection},
  author={Zhou, Hongyu and Ge, Zheng and Liu, Songtao and Mao, Weixin and Li, Zeming and Yu, Haiyan and Sun, Jian},
journal={arXiv preprint arXiv:2207.02541},
year={2022}
}
```


@@ -0,0 +1,166 @@
_BASE_: [
'denseteacher_fcos_r50_fpn_coco_semi010.yml',
'../_base_/coco_detection_full.yml',
]
log_iter: 100
snapshot_epoch: 2
epochs: &epochs 24
weights: output/denseteacher_fcos_r50_fpn_coco_full/model_final
### pretrain and warmup config: choose one and comment out the other
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/fcos_r50_fpn_iou_multiscale_2x_coco.pdparams # mAP=42.6
# semi_start_iters: 0
# ema_start_iters: 0
# use_warmup: &use_warmup False
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_cos_pretrained.pdparams
semi_start_iters: 5000
ema_start_iters: 3000
use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 2.0, distill_loss_box: 1.0, distill_loss_quality: 1.0}
concat_sup_data: True
suppress: linear
ratio: 0.01
gamma: 2.0
test_cfg:
inference_on: teacher
### reader config
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1}
- RandomFlip: {}
weak_aug:
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
sup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
- Gt2FCOSTarget:
object_sizes_boundary: [64, 128, 256, 512]
center_sampling_radius: 1.5
downsample_ratios: [8, 16, 32, 64, 128]
num_shift: 0.5
norm_reg_targets: True
unsup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
sup_batch_size: 2
unsup_batch_size: 2
shuffle: True
drop_last: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
TestReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
fuse_normalize: True
### model config
architecture: FCOS
FCOS:
backbone: ResNet
neck: FPN
fcos_head: FCOSHead
ResNet:
depth: 50
variant: 'b'
norm_type: bn
freeze_at: 0 # res2
return_idx: [1, 2, 3]
num_stages: 4
FPN:
out_channel: 256
spatial_scales: [0.125, 0.0625, 0.03125]
extra_stage: 2
has_extra_convs: True
use_c5: False
FCOSHead:
fcos_feat:
name: FCOSFeat
feat_in: 256
feat_out: 256
num_convs: 4
norm_type: "gn"
use_dcn: False
fpn_stride: [8, 16, 32, 64, 128]
prior_prob: 0.01
norm_reg_targets: True
centerness_on_reg: True
num_shift: 0.5
fcos_loss:
name: FCOSLoss
loss_alpha: 0.25
loss_gamma: 2.0
iou_loss_type: "giou"
reg_weights: 1.0
quality: "iou"
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [*epochs]
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
steps: 1000
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0001
type: L2
clip_grad_by_value: 1.0


@@ -0,0 +1,164 @@
_BASE_: [
'../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml',
'../_base_/coco_detection_percent_5.yml',
]
log_iter: 20
snapshot_epoch: 5
epochs: &epochs 240 # 480 would give better accuracy
weights: output/denseteacher_fcos_r50_fpn_coco_semi005/model_final
### pretrain and warmup config: choose one and comment out the other
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/fcos_r50_fpn_2x_coco_sup005.pdparams # mAP=21.3
# semi_start_iters: 0
# ema_start_iters: 0
# use_warmup: &use_warmup False
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_cos_pretrained.pdparams
semi_start_iters: 5000
ema_start_iters: 3000
use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 4.0, distill_loss_box: 1.0, distill_loss_quality: 1.0}
concat_sup_data: True
suppress: linear
ratio: 0.01
gamma: 2.0
test_cfg:
inference_on: teacher
### reader config
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1}
- RandomFlip: {}
weak_aug:
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
sup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
- Gt2FCOSTarget:
object_sizes_boundary: [64, 128, 256, 512]
center_sampling_radius: 1.5
downsample_ratios: [8, 16, 32, 64, 128]
norm_reg_targets: True
unsup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
sup_batch_size: 2
unsup_batch_size: 2
shuffle: True
drop_last: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
TestReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
fuse_normalize: True
### model config
architecture: FCOS
FCOS:
backbone: ResNet
neck: FPN
fcos_head: FCOSHead
ResNet:
depth: 50
variant: 'b'
norm_type: bn
freeze_at: 0 # res2
return_idx: [1, 2, 3]
num_stages: 4
FPN:
out_channel: 256
spatial_scales: [0.125, 0.0625, 0.03125]
extra_stage: 2
has_extra_convs: True
use_c5: False
FCOSHead:
fcos_feat:
name: FCOSFeat
feat_in: 256
feat_out: 256
num_convs: 4
norm_type: "gn"
use_dcn: False
fpn_stride: [8, 16, 32, 64, 128]
prior_prob: 0.01
norm_reg_targets: True
centerness_on_reg: True
fcos_loss:
name: FCOSLoss
loss_alpha: 0.25
loss_gamma: 2.0
iou_loss_type: "giou"
reg_weights: 1.0
quality: "iou"
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [*epochs]
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
steps: 1000
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0001
type: L2
clip_grad_by_value: 1.0


@@ -0,0 +1,169 @@
_BASE_: [
'../../fcos/fcos_r50_fpn_iou_multiscale_2x_coco.yml',
'../_base_/coco_detection_percent_10.yml',
]
log_iter: 50
snapshot_epoch: 5
epochs: &epochs 240
weights: output/denseteacher_fcos_r50_fpn_coco_semi010/model_final
### pretrain and warmup config: choose one and comment out the other
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/fcos_r50_fpn_2x_coco_sup010.pdparams # mAP=26.3
# semi_start_iters: 0
# ema_start_iters: 0
# use_warmup: &use_warmup False
pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ResNet50_cos_pretrained.pdparams
semi_start_iters: 5000
ema_start_iters: 3000
use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 4.0, distill_loss_box: 1.0, distill_loss_quality: 1.0}
concat_sup_data: True
suppress: linear
ratio: 0.01
gamma: 2.0
test_cfg:
inference_on: teacher
### reader config
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomResize: {target_size: [[640, 1333], [672, 1333], [704, 1333], [736, 1333], [768, 1333], [800, 1333]], keep_ratio: True, interp: 1}
- RandomFlip: {}
weak_aug:
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
sup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
- Gt2FCOSTarget:
object_sizes_boundary: [64, 128, 256, 512]
center_sampling_radius: 1.5
downsample_ratios: [8, 16, 32, 64, 128]
num_shift: 0. # default 0.5
multiply_strides_reg_targets: False
norm_reg_targets: True
unsup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
sup_batch_size: 2
unsup_batch_size: 2
shuffle: True
drop_last: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
TestReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [800, 1333], keep_ratio: True, interp: 1}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: True}
- Permute: {}
batch_transforms:
- PadBatch: {pad_to_stride: 32}
batch_size: 1
fuse_normalize: True
### model config
architecture: FCOS
FCOS:
backbone: ResNet
neck: FPN
fcos_head: FCOSHead
ResNet:
depth: 50
variant: 'b'
norm_type: bn
freeze_at: 0 # res2
return_idx: [1, 2, 3]
num_stages: 4
FPN:
out_channel: 256
spatial_scales: [0.125, 0.0625, 0.03125]
extra_stage: 2
has_extra_convs: True
use_c5: False
FCOSHead:
fcos_feat:
name: FCOSFeat
feat_in: 256
feat_out: 256
num_convs: 4
norm_type: "gn"
use_dcn: False
fpn_stride: [8, 16, 32, 64, 128]
prior_prob: 0.01
norm_reg_targets: True
centerness_on_reg: True
num_shift: 0. # default 0.5
multiply_strides_reg_targets: False
sqrt_score: False
fcos_loss:
name: FCOSLoss
loss_alpha: 0.25
loss_gamma: 2.0
iou_loss_type: "giou"
reg_weights: 1.0
quality: "iou"
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 100
score_threshold: 0.025
nms_threshold: 0.6
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !PiecewiseDecay
gamma: 0.1
milestones: [*epochs]
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
steps: 1000
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0001
type: L2
clip_grad_by_value: 1.0


@@ -0,0 +1,44 @@
_BASE_: [
'denseteacher_fcos_r50_fpn_coco_semi010.yml',
]
log_iter: 50
snapshot_epoch: 5
epochs: &epochs 240
weights: output/denseteacher_fcos_r50_fpn_coco_semi010_lsj/model_final
### reader config
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
# large-scale jittering
- RandomResize: {target_size: [[400, 1333], [1200, 1333]], keep_ratio: True, interp: 1, random_range: True}
- RandomFlip: {}
weak_aug:
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225], is_scale: true}
sup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
- Gt2FCOSTarget:
object_sizes_boundary: [64, 128, 256, 512]
center_sampling_radius: 1.5
downsample_ratios: [8, 16, 32, 64, 128]
num_shift: 0. # default 0.5
multiply_strides_reg_targets: False
norm_reg_targets: True
unsup_batch_transforms:
- Permute: {}
- PadBatch: {pad_to_stride: 32}
sup_batch_size: 2
unsup_batch_size: 2
shuffle: True
drop_last: True


@@ -0,0 +1,151 @@
_BASE_: [
'../../ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml',
'../_base_/coco_detection_percent_5.yml',
]
log_iter: 50
snapshot_epoch: 5
weights: output/denseteacher_ppyoloe_plus_crn_l_coco_semi005/model_final
epochs: &epochs 200
cosine_epochs: &cosine_epochs 240
### pretrain and warmup config: choose one and comment out the other
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_l_80e_coco_sup005.pdparams # mAP=42.9
semi_start_iters: 0
ema_start_iters: 0
use_warmup: &use_warmup False
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_l_obj365_pretrained.pdparams
# semi_start_iters: 5000
# ema_start_iters: 3000
# use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 1.0, distill_loss_iou: 2.5, distill_loss_dfl: 0., distill_loss_contrast: 0.1}
contrast_loss:
temperature: 0.2
alpha: 0.9
smooth_iter: 100
concat_sup_data: True
suppress: linear
ratio: 0.01
test_cfg:
inference_on: teacher
### reader config
batch_size: &batch_size 8
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomFlip: {}
- RandomCrop: {} # unsup will be fake gt_boxes
weak_aug:
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
sup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
- PadGT: {}
unsup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
sup_batch_size: *batch_size
unsup_batch_size: *batch_size
shuffle: True
drop_last: True
collate_batch: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 1
### model config
architecture: PPYOLOE
norm_type: sync_bn
ema_black_list: ['proj_conv.weight']
custom_black_list: ['reduce_mean']
PPYOLOE:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
eval_size: ~ # means None, but not str 'None'
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: -1 #
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 300
score_threshold: 0.01
nms_threshold: 0.7
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !CosineDecay
max_epochs: *cosine_epochs
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
epochs: 3
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005 # dt-fcos 0.0001
type: L2
clip_grad_by_norm: 1.0 # dt-fcos clip_grad_by_value


@@ -0,0 +1,151 @@
_BASE_: [
'../../ppyoloe/ppyoloe_plus_crn_l_80e_coco.yml',
'../_base_/coco_detection_percent_10.yml',
]
log_iter: 50
snapshot_epoch: 5
weights: output/denseteacher_ppyoloe_plus_crn_l_coco_semi010/model_final
epochs: &epochs 200
cosine_epochs: &cosine_epochs 240
### pretrain and warmup config: choose one and comment out the other
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_l_80e_coco_sup010.pdparams # mAP=45.7
semi_start_iters: 0
ema_start_iters: 0
use_warmup: &use_warmup False
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_l_obj365_pretrained.pdparams
# semi_start_iters: 5000
# ema_start_iters: 3000
# use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 1.0, distill_loss_iou: 2.5, distill_loss_dfl: 0., distill_loss_contrast: 0.1}
contrast_loss:
temperature: 0.2
alpha: 0.9
smooth_iter: 100
concat_sup_data: True
suppress: linear
ratio: 0.01
test_cfg:
inference_on: teacher
### reader config
batch_size: &batch_size 8
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomFlip: {}
- RandomCrop: {} # unsup will be fake gt_boxes
weak_aug:
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
sup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
- PadGT: {}
unsup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
sup_batch_size: *batch_size
unsup_batch_size: *batch_size
shuffle: True
drop_last: True
collate_batch: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 1
### model config
architecture: PPYOLOE
norm_type: sync_bn
ema_black_list: ['proj_conv.weight']
custom_black_list: ['reduce_mean']
PPYOLOE:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
eval_size: ~ # means None, but not str 'None'
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: -1 #
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 300
score_threshold: 0.01
nms_threshold: 0.7
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !CosineDecay
max_epochs: *cosine_epochs
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
epochs: 3
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005 # dt-fcos 0.0001
type: L2
clip_grad_by_norm: 1.0 # dt-fcos clip_grad_by_value


@@ -0,0 +1,151 @@
_BASE_: [
'../../ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml',
'../_base_/coco_detection_percent_5.yml',
]
log_iter: 50
snapshot_epoch: 5
weights: output/denseteacher_ppyoloe_plus_crn_s_coco_semi005/model_final
epochs: &epochs 200
cosine_epochs: &cosine_epochs 240
### pretrain and warmup config: choose one and comment out the other
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_s_80e_coco_sup005.pdparams # mAP=32.8
semi_start_iters: 0
ema_start_iters: 0
use_warmup: &use_warmup False
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_s_obj365_pretrained.pdparams
# semi_start_iters: 5000
# ema_start_iters: 3000
# use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 1.0, distill_loss_iou: 2.5, distill_loss_dfl: 0., distill_loss_contrast: 0.1}
contrast_loss:
temperature: 0.2
alpha: 0.9
smooth_iter: 100
concat_sup_data: True
suppress: linear
ratio: 0.01
test_cfg:
inference_on: teacher
### reader config
batch_size: &batch_size 8
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomFlip: {}
- RandomCrop: {} # unsup will be fake gt_boxes
weak_aug:
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
sup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
- PadGT: {}
unsup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
sup_batch_size: *batch_size
unsup_batch_size: *batch_size
shuffle: True
drop_last: True
collate_batch: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 1
### model config
architecture: PPYOLOE
norm_type: sync_bn
ema_black_list: ['proj_conv.weight']
custom_black_list: ['reduce_mean']
PPYOLOE:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
eval_size: ~ # means None, but not str 'None'
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: -1 #
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 300
score_threshold: 0.01
nms_threshold: 0.7
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !CosineDecay
max_epochs: *cosine_epochs
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
epochs: 3
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005 # dt-fcos 0.0001
type: L2
clip_grad_by_norm: 1.0 # dt-fcos clip_grad_by_value


@@ -0,0 +1,151 @@
_BASE_: [
'../../ppyoloe/ppyoloe_plus_crn_s_80e_coco.yml',
'../_base_/coco_detection_percent_10.yml',
]
log_iter: 50
snapshot_epoch: 5
weights: output/denseteacher_ppyoloe_plus_crn_s_coco_semi010/model_final
epochs: &epochs 200
cosine_epochs: &cosine_epochs 240
### pretrain and warmup config: choose one and comment out the other
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/ppyoloe_plus_crn_s_80e_coco_sup010.pdparams # mAP=35.3
semi_start_iters: 0
ema_start_iters: 0
use_warmup: &use_warmup False
# pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/ppyoloe_crn_s_obj365_pretrained.pdparams
# semi_start_iters: 5000
# ema_start_iters: 3000
# use_warmup: &use_warmup True
### global config
use_simple_ema: True
ema_decay: 0.9996
ssod_method: DenseTeacher
DenseTeacher:
train_cfg:
sup_weight: 1.0
unsup_weight: 1.0
loss_weight: {distill_loss_cls: 1.0, distill_loss_iou: 2.5, distill_loss_dfl: 0., distill_loss_contrast: 0.1}
contrast_loss:
temperature: 0.2
alpha: 0.9
smooth_iter: 100
concat_sup_data: True
suppress: linear
ratio: 0.01
test_cfg:
inference_on: teacher
### reader config
batch_size: &batch_size 8
worker_num: 2
SemiTrainReader:
sample_transforms:
- Decode: {}
- RandomDistort: {}
- RandomExpand: {fill_value: [123.675, 116.28, 103.53]}
- RandomFlip: {}
- RandomCrop: {} # unsup will be fake gt_boxes
weak_aug:
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
strong_aug:
- StrongAugImage: {transforms: [
RandomColorJitter: {prob: 0.8, brightness: 0.4, contrast: 0.4, saturation: 0.4, hue: 0.1},
RandomErasingCrop: {},
RandomGaussianBlur: {prob: 0.5, sigma: [0.1, 2.0]},
RandomGrayscale: {prob: 0.2},
]}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], is_scale: true, norm_type: none}
sup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
- PadGT: {}
unsup_batch_transforms:
- BatchRandomResize: {target_size: [640], random_size: True, random_interp: True, keep_ratio: False}
- Permute: {}
sup_batch_size: *batch_size
unsup_batch_size: *batch_size
shuffle: True
drop_last: True
collate_batch: True
EvalReader:
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 2
TestReader:
inputs_def:
image_shape: [3, 640, 640]
sample_transforms:
- Decode: {}
- Resize: {target_size: [640, 640], keep_ratio: False, interp: 2}
- NormalizeImage: {mean: [0., 0., 0.], std: [1., 1., 1.], norm_type: none}
- Permute: {}
batch_size: 1
### model config
architecture: PPYOLOE
norm_type: sync_bn
ema_black_list: ['proj_conv.weight']
custom_black_list: ['reduce_mean']
PPYOLOE:
backbone: CSPResNet
neck: CustomCSPPAN
yolo_head: PPYOLOEHead
post_process: ~
eval_size: ~ # means None, but not str 'None'
PPYOLOEHead:
fpn_strides: [32, 16, 8]
grid_cell_scale: 5.0
grid_cell_offset: 0.5
static_assigner_epoch: -1 #
use_varifocal_loss: True
loss_weight: {class: 1.0, iou: 2.5, dfl: 0.5}
static_assigner:
name: ATSSAssigner
topk: 9
assigner:
name: TaskAlignedAssigner
topk: 13
alpha: 1.0
beta: 6.0
nms:
name: MultiClassNMS
nms_top_k: 1000
keep_top_k: 300
score_threshold: 0.01
nms_threshold: 0.7
### other config
epoch: *epochs
LearningRate:
base_lr: 0.01
schedulers:
- !CosineDecay
max_epochs: *cosine_epochs
use_warmup: *use_warmup
- !LinearWarmup
start_factor: 0.001
epochs: 3
OptimizerBuilder:
optimizer:
momentum: 0.9
type: Momentum
regularizer:
factor: 0.0005 # dt-fcos 0.0001
type: L2
clip_grad_by_norm: 1.0 # dt-fcos clip_grad_by_value