
YOLOv8 TensorRT Video Detection System

The program is organized into the following modules:

1. Configuration and Initialization Module

  • Read the configuration file config.yaml
  • Initialize configuration parameters (video paths, model paths, detection classes, etc.)
  • Initialize the MinIO client, the face recognition service, and other components
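
As a rough illustration of the config-loading step, the sketch below parses a flat "key: value" config using only the standard library; the real project presumably uses a YAML parser such as PyYAML, and the key names here are hypothetical, not taken from the actual config.yaml:

```python
def load_flat_config(text: str) -> dict:
    """Parse flat 'key: value' lines into a dict (sketch only)."""
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        key, value = line.split(":", 1)       # split on the first colon only
        cfg[key.strip()] = value.strip()
    return cfg

# Illustrative keys only -- the real config.yaml layout may differ.
example = """
video_path: rtsp://camera/stream1
model_path: yolov8n.engine
classes: helmet,no_helmet,shoes
"""
config = load_flat_config(example)
```

Splitting on the first colon only keeps values such as RTSP URLs (which themselves contain colons) intact.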

2. Video Stream Processing Module

  • RTSP video stream reading and buffering
  • Frame preprocessing: frames are resized to 1280x720
  • Multithreaded video stream handling
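
The reading-and-buffering step can be sketched as a bounded, drop-oldest frame buffer; this is a minimal stand-in assuming a cv2.VideoCapture-style reader (fake_read below just simulates one), not the project's actual code:

```python
import queue
import threading

FRAME_W, FRAME_H = 1280, 720  # target resolution from the description above

def reader_loop(read_frame, buf: queue.Queue, stop: threading.Event,
                resize=lambda f: f):
    """Pull frames into a bounded, drop-oldest buffer.

    `read_frame` mimics cv2.VideoCapture.read() and returns (ok, frame);
    `resize` stands in for cv2.resize(frame, (FRAME_W, FRAME_H)).
    """
    while not stop.is_set():
        ok, frame = read_frame()
        if not ok:
            break
        if buf.full():            # keep latency low: discard the oldest frame
            try:
                buf.get_nowait()
            except queue.Empty:
                pass
        buf.put(resize(frame))

# Simulated 10-frame stream with a 3-slot buffer and no consumer:
frames = iter(range(10))
buf = queue.Queue(maxsize=3)

def fake_read():
    try:
        return True, next(frames)
    except StopIteration:
        return False, None

reader_loop(fake_read, buf, threading.Event())
latest = [buf.get() for _ in range(3)]   # only the newest frames survive
```

Dropping the oldest frame instead of blocking keeps the detector working on recent frames even when inference is slower than the RTSP feed.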

3. YOLOv8 Inference Module

  • Location: the YoLov8TRT class in d8_5.py (lines 617-887)
  • Functions: TensorRT engine loading and inference, image pre- and post-processing, and detection of targets such as helmet, no-helmet, and shoes
  • Inference call site (line 1028): batch_image_raw, r_list, box_list, msg_list = self.yolov8_wrapper.infer(frame)
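
Post-processing in a YOLOv8 TensorRT wrapper typically ends with non-maximum suppression over the decoded boxes. A pure-Python sketch of greedy NMS (hypothetical helper names, not taken from d8_5.py):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thr=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # keep a box only if it does not overlap a higher-scored kept box
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 100, 100), (5, 5, 105, 105), (200, 200, 300, 300)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)   # the second box overlaps the first and is dropped
```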

4. Face Recognition Module

  • Face detection and recognition
  • Processing of recognition results
  • Face-recognition-related alarms

5. Alarm and Storage Module

6. HLS Stream Output Module

  • Uses FFmpeg to convert the processed video stream into HLS format (m3u8)
  • Real-time video stream output
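
A common way to implement this step is to pipe raw frames into an FFmpeg subprocess. The command builder below is a sketch under that assumption; the flags are standard FFmpeg HLS options, but the segment length and playlist size are illustrative, not the project's actual settings:

```python
def hls_ffmpeg_cmd(width=1280, height=720, fps=25, out="stream.m3u8"):
    """Build an FFmpeg command that reads raw BGR frames on stdin
    and writes a rolling HLS playlist (sketch only)."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                              # frames piped on stdin
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "hls",
        "-hls_time", "2",                       # ~2 s per segment
        "-hls_list_size", "5",                  # keep 5 segments in playlist
        "-hls_flags", "delete_segments",        # rolling live playlist
        out,
    ]

cmd = hls_ffmpeg_cmd()
# In the real pipeline this would be passed to subprocess.Popen and each
# processed frame written to proc.stdin as frame.tobytes().
```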

7. Multithreading Management Module

  • Video input thread
  • Inference/detection thread
  • Face recognition thread
  • Other auxiliary threads
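
The thread layout above can be sketched as a producer/consumer pipeline over a bounded queue; this is a simplified two-stage model with stand-in stages, not the project's actual thread code:

```python
import queue
import threading

_SENTINEL = object()

def run_pipeline(frames, infer):
    """Input thread feeds a bounded queue; an inference thread drains it.

    `infer` is a stand-in for the real detector call.
    """
    frame_q = queue.Queue(maxsize=8)
    results = []

    def input_stage():
        for f in frames:
            frame_q.put(f)          # blocks when the queue is full
        frame_q.put(_SENTINEL)      # signal end of stream

    def infer_stage():
        while True:
            f = frame_q.get()
            if f is _SENTINEL:
                break
            results.append(infer(f))

    threads = [threading.Thread(target=input_stage),
               threading.Thread(target=infer_stage)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

detections = run_pipeline([1, 2, 3], lambda f: f * 10)
```

The bounded queue provides back-pressure: when inference falls behind, the input thread blocks instead of growing memory without limit.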

Test Environment Configuration

In the test environment, the following features have been commented out to avoid network connectivity issues:

  1. Token retrieval: the get_token function (lines 157-179)
  2. Face recognition (lines 1003-1018)

With these changes the program runs normally in a test environment without network access, keeping only the core video detection functionality.

Original Project Information

The Pytorch implementation is ultralytics/yolov8.

The tensorrt code is derived from xiaocao-tian/yolov8_tensorrt.


Requirements

  • TensorRT 8.0+
  • OpenCV 3.4.0+
  • ultralytics<=8.2.103

Different versions of yolov8

Currently, we support yolov8

Config

  • Choose the model n/s/m/l/x/n2/s2/m2/l2/x2/n6/s6/m6/l6/x6 from command line arguments.
  • Check more configs in include/config.h

How to Run, yolov8n as example

  1. generate .wts from pytorch with .pt, or download .wts from model zoo
// download https://github.com/ultralytics/assets/releases/yolov8n.pt
// download https://github.com/lindsayshuo/yolov8-p2/releases/download/VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.pt (only for 10 cls p2 model)
cp {tensorrtx}/yolov8/gen_wts.py {ultralytics}/ultralytics
cd {ultralytics}/ultralytics
python gen_wts.py -w yolov8n.pt -o yolov8n.wts -t detect
// a file 'yolov8n.wts' will be generated.


// For p2 model
// download https://github.com/lindsayshuo/yolov8_p2_tensorrtx/releases/download/VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last/VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.pt (only for 10 cls p2 model)
cd {ultralytics}/ultralytics
python gen_wts.py -w VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.pt -o VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.wts -t detect (only for  10 cls p2 model)
// a file 'VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.wts' will be generated.

// For yolov8_5u_det model
// download https://github.com/ultralytics/assets/releases/yolov5nu.pt
cd {ultralytics}/ultralytics
python gen_wts.py -w yolov5nu.pt -o yolov5nu.wts -t detect
// a file 'yolov5nu.wts' will be generated.

  2. build tensorrtx/yolov8 and run

Detection

cd {tensorrtx}/yolov8/
mkdir build
cd build
cp {ultralytics}/ultralytics/yolov8.wts {tensorrtx}/yolov8/build
cmake ..
make
sudo ./yolov8_det -s [.wts] [.engine] [n/s/m/l/x/n2/s2/m2/l2/x2/n6/s6/m6/l6/x6]  // serialize model to plan file
sudo ./yolov8_det -d [.engine] [image folder]  [c/g] // deserialize and run inference, the images in [image folder] will be processed.

// For example yolov8n
sudo ./yolov8_det -s yolov8n.wts yolov8n.engine n
sudo ./yolov8_det -d yolov8n.engine ../images c //cpu postprocess
sudo ./yolov8_det -d yolov8n.engine ../images g //gpu postprocess


// For p2 model:
// change the  "const static int kNumClass" in config.h to 10;
sudo ./yolov8_det -s VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.wts VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.engine x2
wget https://github.com/lindsayshuo/yolov8-p2/releases/download/VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last/0000008_01999_d_0000040.jpg
cp -r 0000008_01999_d_0000040.jpg ../images
sudo ./yolov8_det -d VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.engine ../images c //cpu postprocess
sudo ./yolov8_det -d VisDrone_train_yolov8x_p2_bs1_epochs_100_imgsz_1280_last.engine ../images g //gpu postprocess

// For yolov8_5u_det(YOLOv5u with the anchor-free, objectness-free split head structure based on YOLOv8 features) model:
sudo ./yolov8_5u_det -s [.wts] [.engine] [n/s/m/l/x/n6/s6/m6/l6/x6]
sudo ./yolov8_5u_det -d yolov5xu.engine ../images c //cpu postprocess
sudo ./yolov8_5u_det -d yolov5xu.engine ../images g //gpu postprocess

Instance Segmentation

# Build and serialize TensorRT engine
./yolov8_seg -s yolov8s-seg.wts yolov8s-seg.engine s

# Download the labels file
wget -O coco.txt https://raw.githubusercontent.com/amikelive/coco-labels/master/coco-labels-2014_2017.txt

# Run inference with labels file
./yolov8_seg -d yolov8s-seg.engine ../images c coco.txt

Classification

cd {tensorrtx}/yolov8/
// Download inference images
wget https://github.com/lindsayshuo/infer_pic/releases/download/pics/1709970363.6990473rescls.jpg
mkdir samples
cp -r 1709970363.6990473rescls.jpg samples
// Download ImageNet labels
wget https://raw.githubusercontent.com/joannzhang00/ImageNet-dataset-classes-labels/main/imagenet_classes.txt

// update kClsNumClass in config.h if your model is trained on custom dataset
mkdir build
cd build
cp {ultralytics}/ultralytics/yolov8n-cls.wts {tensorrtx}/yolov8/build
cmake ..
make
sudo ./yolov8_cls -s [.wts] [.engine] [n/s/m/l/x]  // serialize model to plan file
sudo ./yolov8_cls -d [.engine] [image folder]  // deserialize and run inference, the images in [image folder] will be processed.

// For example yolov8n
sudo ./yolov8_cls -s yolov8n-cls.wts yolov8n-cls.engine n
sudo ./yolov8_cls -d yolov8n-cls.engine ../samples

Pose Estimation

cd {tensorrtx}/yolov8/
// update "kPoseNumClass = 1" in config.h
mkdir build
cd build
cp {ultralytics}/ultralytics/yolov8-pose.wts {tensorrtx}/yolov8/build
cmake ..
make
sudo ./yolov8_pose -s [.wts] [.engine] [n/s/m/l/x/n2/s2/m2/l2/x2/n6/s6/m6/l6/x6]  // serialize model to plan file
sudo ./yolov8_pose -d [.engine] [image folder]  [c/g] // deserialize and run inference, the images in [image folder] will be processed.

// For example yolov8-pose
sudo ./yolov8_pose -s yolov8n-pose.wts yolov8n-pose.engine n
sudo ./yolov8_pose -d yolov8n-pose.engine ../images c //cpu postprocess
sudo ./yolov8_pose -d yolov8n-pose.engine ../images g //gpu postprocess

Oriented Bounding Boxes (OBB) Estimation

cd {tensorrtx}/yolov8/
// update "kObbNumClass = 15" "kInputH = 1024" "kInputW = 1024" in config.h
wget https://github.com/lindsayshuo/infer_pic/releases/download/pics/obb.png
mkdir images
mv obb.png ./images
mkdir build
cd build
cp {ultralytics}/ultralytics/yolov8-obb.wts {tensorrtx}/yolov8/build
cmake ..
make
sudo ./yolov8_obb -s [.wts] [.engine] [n/s/m/l/x/n2/s2/m2/l2/x2/n6/s6/m6/l6/x6]  // serialize model to plan file
sudo ./yolov8_obb -d [.engine] [image folder]  [c/g] // deserialize and run inference, the images in [image folder] will be processed.

// For example yolov8-obb
sudo ./yolov8_obb -s yolov8n-obb.wts yolov8n-obb.engine n
sudo ./yolov8_obb -d yolov8n-obb.engine ../images c //cpu postprocess
sudo ./yolov8_obb -d yolov8n-obb.engine ../images g //gpu postprocess

  3. optional, load and run the tensorrt model in python
// install python-tensorrt, pycuda, etc.
// ensure the yolov8n.engine and libmyplugins.so have been built
python yolov8_det_trt.py  # Detection
python yolov8_seg_trt.py  # Segmentation
python yolov8_cls_trt.py  # Classification
python yolov8_pose_trt.py  # Pose Estimation
python yolov8_5u_det_trt.py  # yolov8_5u_det(YOLOv5u with the anchor-free, objectness-free split head structure based on YOLOv8 features) model
python yolov8_obb_trt.py  # Oriented Bounding Boxes (OBB) Estimation

INT8 Quantization

  1. Prepare calibration images. You can randomly select about 1,000 images from your train set. For coco, you can also download my calibration images coco_calib from GoogleDrive or BaiduPan (pwd: a9wh)

  2. Unzip it in yolov8/build

  3. Set the macro USE_INT8 in config.h, change kInputQuantizationFolder to your image folder path, and rebuild with make

  4. Serialize the model and test

More Information

See the README on the project home page.