Tuesday, June 8, 2021

Jetson nano DeepStream-5.1 YOLOv4

Download pytorch-YOLOv4

  1. git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
  2. cd pytorch-YOLOv4

Download yolov4.cfg & yolov4.weights

  1. wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
  2. wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights

Edit yolov4.cfg

  1. [net]
  2. #batch=64
  3. #subdivisions=8
  4. batch=1
  5. subdivisions=1
  6. # Training
  7. #width=512
  8. #height=512
  9. #width=608
  10. #height=608
  11. width=416
  12. height=416
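
The 416×416 input chosen above fixes how many candidate boxes the network emits. YOLOv4 predicts 3 anchor boxes per grid cell on three detection scales (strides 8, 16, and 32), which is where the `boxes 10647x1x4` dimension in the console output at the end comes from. A quick sanity check:

```python
# YOLOv4 emits 3 anchor boxes per cell on three scales with
# strides 8, 16 and 32. For a 416x416 input this gives the
# 10647 candidates reported by TensorRT ("boxes 10647x1x4").
def yolov4_num_boxes(width, height, strides=(8, 16, 32), anchors_per_cell=3):
    return sum((width // s) * (height // s) * anchors_per_cell for s in strides)

print(yolov4_num_boxes(416, 416))  # 10647
```

Switching the cfg back to 608×608 would raise this to 22743 boxes, at a real cost in inference time on the Nano.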

Install torch

  1. pip3 install torch
  2. pip3 install torchvision
  3. pip3 install onnxruntime
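
Before converting, it can save a failed run to confirm that the modules the conversion script imports are actually installed. A small sketch using the standard library (the module list is assumed from the pip commands above):

```python
import importlib.util

# Modules the darknet-to-ONNX conversion needs, per the install steps above.
required = ["torch", "torchvision", "onnxruntime"]
missing = [m for m in required if importlib.util.find_spec(m) is None]
if missing:
    print("missing modules:", ", ".join(missing))
else:
    print("all conversion dependencies found")
```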

Install onnx

  1. sudo apt-get install protobuf-compiler libprotoc-dev
  2. pip3 install onnx -i https://pypi.doubanio.com/simple/

Enable swap

  1. sudo fallocate -l 4.0G /swapfile
  2. sudo chmod 600 /swapfile
  3. sudo mkswap /swapfile
  4. sudo swapon /swapfile
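
A swapfile enabled this way lasts only until reboot. To make it permanent, the standard Linux convention (not specific to this guide) is to append a line like the following to /etc/fstab:

```
/swapfile  none  swap  sw  0  0
```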

Convert Darknet to ONNX

  1. export OMP_NUM_THREADS=1
  2. python3 demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/giraffe.jpg 1
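
The script names its output after the batch size and input geometry, which is why the next step references yolov4_1_3_416_416_static.onnx. A sketch of the naming scheme, reconstructed from the filename itself (so treat the exact field order as an assumption):

```python
def onnx_name(batch, channels, height, width):
    # Matches the file produced for batch=1, 3-channel, 416x416 input.
    return f"yolov4_{batch}_{channels}_{height}_{width}_static.onnx"

print(onnx_name(1, 3, 416, 416))  # yolov4_1_3_416_416_static.onnx
```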

Convert ONNX to a TensorRT engine

  1. /usr/src/tensorrt/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx \
  2. --explicitBatch --saveEngine=yolov4_1_3_416_416_fp16.engine \
  3. --workspace=2048 --fp16
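
If you rebuild at another resolution or precision, only the file names and flags change. A small helper that assembles the same trtexec invocation (the flag set mirrors the command above; it is a sketch, not an official wrapper):

```python
def trtexec_cmd(onnx_file, engine_file, workspace_mb=2048, fp16=True):
    # Mirrors the trtexec flags used above: explicit batch, a 2 GB
    # workspace, and optional FP16 precision.
    parts = [
        "/usr/src/tensorrt/bin/trtexec",
        f"--onnx={onnx_file}",
        "--explicitBatch",
        f"--saveEngine={engine_file}",
        f"--workspace={workspace_mb}",
    ]
    if fp16:
        parts.append("--fp16")
    return " ".join(parts)

print(trtexec_cmd("yolov4_1_3_416_416_static.onnx",
                  "yolov4_1_3_416_416_fp16.engine"))
```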

Download yolov4_deepstream

  1. cd /opt/nvidia/deepstream/deepstream/sources/
  2. git clone https://github.com/NVIDIA-AI-IOT/yolov4_deepstream.git
  3. cd yolov4_deepstream/deepstream_yolov4

Copy TensorRT engine

  1. cp $HOME/pytorch-YOLOv4/yolov4_1_3_416_416_fp16.engine .

Edit deepstream_app_config_yoloV4.txt

  1. model-engine-file=yolov4_1_3_416_416_fp16.engine

Edit config_infer_primary_yoloV4.txt

  1. model-engine-file=yolov4_1_3_416_416_fp16.engine
  2. network-mode=2
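
network-mode=2 tells nvinfer to run the engine in FP16, matching the --fp16 flag used when the engine was built. In DeepStream's nvinfer config, the accepted values are:

```python
# Precision values accepted by the nvinfer 'network-mode' key.
NETWORK_MODE = {0: "FP32", 1: "INT8", 2: "FP16"}

print(NETWORK_MODE[2])  # FP16
```

If the engine's precision and network-mode disagree, nvinfer will try to rebuild the engine at startup, which is very slow on the Nano.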

Build

  1. export CUDA_VER=10.2
  2. make -C nvdsinfer_custom_impl_Yolo

Run

  1. unset DISPLAY
  2. rm -rf $HOME/.cache/gstreamer-1.0/registry.aarch64.bin
  3. sudo route add -net 224.0.0.0 netmask 255.0.0.0 wlan9
  4. deepstream-app -c deepstream_app_config_yoloV4.txt
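
The route add is only needed if the pipeline's sink streams over RTSP/UDP multicast: 224.0.0.0 with netmask 255.0.0.0 covers the start of the IPv4 multicast block (the full range is 224.0.0.0/4), and wlan9 is this author's wireless interface, so substitute your own (e.g. eth0 or wlan0). A quick check with the standard library:

```python
import ipaddress

# 224.0.0.0/8 sits inside the IPv4 multicast range (224.0.0.0/4),
# which is why traffic to these addresses needs an explicit route.
net = ipaddress.ip_network("224.0.0.0/8")
print(net.is_multicast)  # True
```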

Console output

  1. Unknown or legacy key specified 'is-classifier' for group [property]
  2. 0:00:09.526815556 25328 0x21dc8d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/yolov4_1_3_416_416_fp16.engine
  3. INFO: [Implicit Engine Info]: layers num: 3
  4. 0 INPUT kFLOAT input 3x416x416
  5. 1 OUTPUT kFLOAT boxes 10647x1x4
  6. 2 OUTPUT kFLOAT confs 10647x80
  7. 0:00:09.527013842 25328 0x21dc8d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/yolov4_1_3_416_416_fp16.engine
  8. 0:00:09.649444159 25328 0x21dc8d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/config_infer_primary_yoloV4.txt sucessfully
  9. Runtime commands:
  10. h: Print this help
  11. q: Quit
  12. p: Pause
  13. r: Resume
  14. **PERF: FPS 0 (Avg)
  15. **PERF: 0.00 (0.00)
  16. ** INFO: <bus_callback:181>: Pipeline ready
  17. Opening in BLOCKING MODE
  18. Opening in BLOCKING MODE
  19. NvMMLiteOpen : Block : BlockType = 261
  20. NVMEDIA: Reading vendor.tegra.display-size : status: 6
  21. NvMMLiteBlockCreate : Block : BlockType = 261
  22. ** INFO: <bus_callback:167>: Pipeline running
  23. **PERF: 4.92 (4.82)
  24. **PERF: 4.92 (4.92)
  25. **PERF: 4.92 (4.88)