Jetson Nano DeepStream-5.1 YOLOv4
Download pytorch-YOLOv4
git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
cd pytorch-YOLOv4
Download yolov4.cfg & yolov4.weights
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
Edit yolov4.cfg
[net]
#batch=64
#subdivisions=8
batch=1
subdivisions=1
# Training
#width=512
#height=512
#width=608
#height=608
width=416
height=416
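The same edits can be scripted instead of made by hand. This is a minimal sketch using sed; it writes a sample `[net]` section to a temp file for illustration — point the sed command at your real yolov4.cfg instead.

```shell
# Sample [net] section standing in for the stock yolov4.cfg (illustration only)
cat > /tmp/yolov4.cfg <<'EOF'
[net]
batch=64
subdivisions=8
width=608
height=608
EOF

# Flip the training settings to 416x416 single-image inference
sed -i -e 's/^batch=.*/batch=1/' \
       -e 's/^subdivisions=.*/subdivisions=1/' \
       -e 's/^width=.*/width=416/' \
       -e 's/^height=.*/height=416/' /tmp/yolov4.cfg

cat /tmp/yolov4.cfg
```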
Install torch
pip3 install torch
pip3 install torchvision
pip3 install onnxruntime
Install onnx
sudo apt-get install protobuf-compiler libprotoc-dev
pip3 install onnx -i https://pypi.doubanio.com/simple/
Enable swap
sudo fallocate -l 4.0G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
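After enabling the swapfile you can confirm the kernel sees it without sudo by reading /proc/meminfo (a quick sanity check, not part of the original steps):

```shell
# Print the total swap the kernel currently sees, e.g. "4194300 kB"
awk '/^SwapTotal:/ {print $2, $3}' /proc/meminfo
```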
Convert the darknet model to ONNX
export OMP_NUM_THREADS=1
python3 demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/giraffe.jpg 1
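The exported file is named after its input shape: `yolov4_<batch>_<channels>_<height>_<width>_static.onnx`, so the trailing `1` (batch size) and the 416x416 cfg edit above produce `yolov4_1_3_416_416_static.onnx`. A small POSIX-shell sketch unpacking that naming convention:

```shell
# Split the demo script's output filename into its shape fields
name=yolov4_1_3_416_416_static.onnx
base=${name%.onnx}
IFS=_; set -- $base; unset IFS
echo "batch=$2 channels=$3 input=${4}x${5}"   # prints: batch=1 channels=3 input=416x416
```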
Build the TensorRT engine
/usr/src/tensorrt/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx \
    --explicitBatch --saveEngine=yolov4_1_3_416_416_fp16.engine \
    --workspace=2048 --fp16
Download yolov4_deepstream
cd /opt/nvidia/deepstream/deepstream/sources/
git clone https://github.com/NVIDIA-AI-IOT/yolov4_deepstream.git
cd yolov4_deepstream/deepstream_yolov4
Copy TensorRT engine
cp $HOME/pytorch-YOLOv4/yolov4_1_3_416_416_fp16.engine .
Edit deepstream_app_config_yoloV4.txt
model-engine-file=yolov4_1_3_416_416_fp16.engine
Edit config_infer_primary_yoloV4.txt
model-engine-file=yolov4_1_3_416_416_fp16.engine
network-mode=2
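For context, in nvinfer config files `network-mode` selects the inference precision (0 = FP32, 1 = INT8, 2 = FP16), so it should match the `--fp16` engine built with trtexec above. A sketch of the relevant part of the `[property]` group:

```ini
[property]
model-engine-file=yolov4_1_3_416_416_fp16.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16 -- must match how the engine was built
network-mode=2
```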
Build
export CUDA_VER=10.0
make -C nvdsinfer_custom_impl_Yolo
(Set CUDA_VER to the CUDA version shipped with your JetPack release, e.g. 10.2 on JetPack 4.5.x.)
Run
unset DISPLAY
rm -rf $HOME/.cache/gstreamer-1.0/registry.aarch64.bin
sudo route add -net 224.0.0.0 netmask 255.0.0.0 wlan9
(Adds a multicast route; only needed if the sink streams over the network. Replace wlan9 with your interface name.)
deepstream-app -c deepstream_app_config_yoloV4.txt
Console output
Unknown or legacy key specified 'is-classifier' for group [property]
0:00:09.526815556 25328 0x21dc8d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/yolov4_1_3_416_416_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input   3x416x416
1   OUTPUT kFLOAT boxes   10647x1x4
2   OUTPUT kFLOAT confs   10647x80
0:00:09.527013842 25328 0x21dc8d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/yolov4_1_3_416_416_fp16.engine
0:00:09.649444159 25328 0x21dc8d0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_yolov4/config_infer_primary_yoloV4.txt sucessfully
Runtime commands:
        h: Print this help
        q: Quit
        p: Pause
        r: Resume

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running
**PERF: 4.92 (4.82)
**PERF: 4.92 (4.92)
**PERF: 4.92 (4.88)
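The `boxes 10647x1x4` dimension in the engine info is worth a sanity check: at a 416x416 input, YOLOv4's three detection heads (strides 8, 16, 32) produce 52x52, 26x26, and 13x13 grids, each cell predicting 3 anchor boxes:

```shell
# Total candidate boxes = sum of grid cells across the three heads x 3 anchors
echo $(( (52*52 + 26*26 + 13*13) * 3 ))   # prints 10647
```

So the output tensor shapes in the log match the 416x416 cfg edit made earlier.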