Friday, February 24, 2023

 ELRS – ExpressLRS


https://www.expresslrs.org/


ELRS is an open source radio control link protocol, originally built for FPV racing quadcopters. Using modern 2.4 GHz / 900 MHz radio technology, it achieves longer range, higher packet rates, and lower latency at the same transmit power. Effective control range can reach tens of kilometers, packet rates go up to 1000 Hz, and the related parameters can be tuned freely for the application.


Packet rate

50Hz, 150Hz, 250Hz, 500Hz, D250Hz, D500Hz, F500Hz, and F1000Hz 


TX Power

10mW ~ 250mW


On the control link you can choose a switch configuration mode that fits the aircraft. For example, channels 1-4 at full resolution (10 bits), channel 5 as a 2-position switch, and channels 6-9 as 6-position switches are enough for a racing quad, with the packet rate configurable up to 1000 Hz. For a fixed-wing model, channels 1-4 and 6-9 can run at full resolution (10 bits) with channel 5 as a 2-position switch, at packet rates up to 100 or 333 Hz.


Switch Configuration Modes

Hybrid, Wide, Full Res 8 ch, Full Res 16 ch - rate/2, Full Res 12 ch Mixed


https://www.expresslrs.org/software/switch-config/


On the transmitter side there are radios with ELRS built in, as well as JR-bay style TX modules that pair with OpenTX / EdgeTX radios.


Receivers can output S.BUS / CRSF. If you need to drive servos directly, pick one with PWM outputs; the PWM frequency (digital servos already support 333 Hz and above) and which input channel maps to each output are configurable.


The ELRS project is very active and new releases come out constantly, so a freshly bought transmitter or receiver usually needs a firmware update first.

Below is a record of the firmware update process for a Radiomaster TX12 MKII transmitter (built-in ELRS RF module) and a MATEKSYS ELRS PWM-R24-P6 receiver.




ELRS Firmware upgrade


Update the 2.4 GHz TX module firmware and the receiver firmware; both must be on the same version. The version used here is 3.2.0.


The ExpressLRS Configurator tool is used to update and configure both the TX module and the receiver, and the update can be done over WiFi or UART. Based on the selected brand and model, it downloads the required packages and source code from the Internet and builds the firmware on the local PC. Then start the WiFi AP on the transmitter or receiver; once the local PC connects to it a configuration page pops up automatically, and you just point it at the firmware file and run the update.


https://www.expresslrs.org/quick-start/installing-configurator/


1. Radiomaster TX12 MKII built-in TX module ELRS Firmware Upgrade


First, create a new Model in the radio's setup screens and enable the internal ELRS RF module.


KEY MOD -> [ MODESEL ] -> KEY PAGE -> [ SETUP ]

Enable Internal RF

KEY RTN




Press the MOD key.


Press the PAGE> key to go to the SETUP page.


Scroll the wheel to the Internal RF option and select Enable.


Return to the radio's main screen, enter the ELRS settings, and enable the built-in WiFi AP for the firmware update.

KEY SYS → [ TOOLS ] → [ 02 ExpressLRS ] → [ Wifi Connectivity ] → [ Enable Wifi ]
On the local PC (Ubuntu 20.04), launch the previously installed ExpressLRS Configurator.



Select Target – RadioMaster 2.4GHz / RadioMaster TX12 2400 TX

Method – WiFi

 


Once everything is set, press BUILD and the firmware is generated automatically.



The first BUILD takes roughly ten-odd minutes because the required packages and source code are downloaded from the Internet; later builds are much faster. When it finishes, a file manager window pops up at the folder containing the firmware.


Connect the local PC's WiFi to the ELRS TX module.


SSID : ExpressLRS TX

Password : ExpressLRS



Once connected, wait a few seconds and the configuration page pops up automatically.




Select the firmware file, then Update.





This completes the transmitter firmware upgrade.



2. MATEKSYS ELRS PWM-R24-P6 receiver Firmware Upgrade


http://www.mateksys.com/?portfolio=elrs-r24-p6#tab-id-2


The firmware is updated with the ExpressLRS Configurator as before.

Following the instructions on the manufacturer's page above,

Select Target – DIY 2.4GHz / DIY 2400 RX PWMP EX

Method – WiFi





Press BUILD to generate the firmware.




Next, power on the receiver. Its red LED blinks slowly at first; after 60 seconds it switches to fast blinking, which means the receiver's WiFi AP is up.


SSID : ExpressLRS RX

Password : ExpressLRS or expresslrs


Connect the local PC's WiFi to the receiver; a few seconds after connecting, the configuration page pops up automatically.



Select the firmware file, then Update.



This completes the receiver firmware upgrade.





A receiver with PWM outputs needs its servo channel mapping adjusted, mainly because channel 5 in ELRS is fixed as the Arm / Disarm switch and is only a two-position (on/off) output. Change Output 5 → ch6 and Output 6 → ch7.

Most digital servos on the market already support PWM update rates of 333 Hz and above. The default Output Mode of 50 Hz (a 20 ms frame) gives away ELRS's low-latency advantage, so change the Output Mode of every output to 333 Hz (roughly a 3 ms frame).




References


https://oscarliang.com/setup-expresslrs-2-4ghz/


ExpressLRS Configurator v1.6.0


For MATEKSYS R24-P6


Monday, February 7, 2022

Install TensorRT on Ubuntu 20.04



It took me a lot of time to get TensorRT working with Ubuntu 20.04 on my laptop.
There are some issues that make it even harder @@

1. With the default nVidia driver from Ubuntu 20.04, the laptop failed to resume after suspend (hibernate).
The solution is to reinstall an older driver version.
This limits the CUDA version to 10.2.

sudo apt purge nvidia-* 
sudo apt autoremove
sudo apt install nvidia-driver-450-server

2. Ubuntu 20.04 defaults to python3.8, while this TensorRT release works with python3.6.

3. TensorRT doesn't support Ubuntu 20.04 with CUDA 10.2.

The solution is to install TensorRT inside a python virtualenv.

sudo apt install python3.6-venv

mkdir venv/
cd venv

### Create virtual environment in path venv/tensorrt

python3.6 -m venv tensorrt

source tensorrt/bin/activate

pip install --upgrade pip

python3 -m pip install numpy onnx

### Download & extract TensorRT-7.2.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.1.tar.gz

cd Downloads/

sudo cp -a TensorRT-7.2.3.4 /usr/local/

export LD_LIBRARY_PATH=/usr/local/TensorRT-7.2.3.4/lib

cd TensorRT-7.2.3.4/python/

python3 -m pip install tensorrt-7.2.3.4-cp36-none-linux_x86_64.whl

cd ../uff

python3 -m pip install uff-0.6.9-py2.py3-none-any.whl

which convert-to-uff

cd ../graphsurgeon/

python3 -m pip install graphsurgeon-0.4.5-py2.py3-none-any.whl

cd ../onnx_graphsurgeon/

python3 -m pip install onnx_graphsurgeon-0.2.6-py2.py3-none-any.whl

cd ../..

### Download libcudnn8_8.2.1.32-1+cuda10.2_amd64.deb & libcudnn8-dev_8.2.1.32-1+cuda10.2_amd64.deb

sudo dpkg -i ./libcudnn8_8.2.1.32-1+cuda10.2_amd64.deb
sudo dpkg -i ./libcudnn8-dev_8.2.1.32-1+cuda10.2_amd64.deb

pip3 install torch
pip3 install torchvision
pip3 install matplotlib

pip3 install --global-option=build_ext --global-option="-I/usr/local/cuda-10.2/targets/x86_64-linux/include/" --global-option="-L/usr/local/cuda-10.2/targets/x86_64-linux/lib/" pycuda

pip3 install opencv-python
pip3 install albumentations==0.5.2



4. Using the tensorrt venv

cd venv
source tensorrt/bin/activate
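
To confirm the environment works, a quick sanity check can be run inside the activated venv. This is only a minimal sketch under the assumptions above: the file name check_trt.py is just an example, LD_LIBRARY_PATH must still point at /usr/local/TensorRT-7.2.3.4/lib, and it uses the tensorrt and pycuda packages installed earlier.

### check_trt.py - optional sanity check for the tensorrt venv (example script)
import tensorrt as trt        # from the tensorrt-7.2.3.4 wheel installed above
import pycuda.driver as cuda
import pycuda.autoinit        # creates a CUDA context on the default GPU

print("TensorRT:", trt.__version__)       # expect 7.2.3.4
print("GPU     :", cuda.Device(0).name())

# Creating a builder proves libnvinfer can be loaded from LD_LIBRARY_PATH
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("FP16 support:", builder.platform_has_fast_fp16)

Run it with python3 check_trt.py while the venv is active.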




Monday, September 27, 2021

TensorRT custom ONNX model c++

This article describes how to run a custom ONNX model with TensorRT, by modifying the TensorRT sample code sampleOnnxMNIST.

Hardware - nVidia Jetson Xavier NX
Software - Jetpack 4.6 / TensorRT 8
TensorRT sample - /usr/src/tensorrt/samples/sampleOnnxMNIST
ONNX model - https://github.com/PINTO0309/PINTO_model_zoo/tree/main/081_MiDaS_v2

The sampleOnnxMNIST sample detects handwritten digits from 0 to 9. I will make some changes to this sample to get it working with MiDasV2 depth inference.

1. MiDasV2 Model

Get PINTO_model_zoo and download the MiDasV2 ONNX model

git clone https://github.com/PINTO0309/PINTO_model_zoo.git
cd PINTO_model_zoo/081_MiDaS_v2
./download_256x256.sh
cd

After the download succeeds, the file PINTO_model_zoo/081_MiDaS_v2/saved_model/model_float32.onnx is the custom ONNX model.

Now we need to know the input and output dimensions of the model. The tool netron helps with that.

pip install netron
export PATH=$PATH:${HOME}/.local/bin
netron PINTO_model_zoo/081_MiDaS_v2/saved_model/model_float32.onnx

Netron displays all layers of the model in the browser. Open the URL localhost:8080 in a browser.


Now we know the input layer name is inputs:0 and its dimensions are 1 x 256 x 256 x 3.
Since the model takes an image as input, the four dimensions presumably mean batch x height x width x channel.


Go to the bottom of the page: the output name is Identity:0 and its dimensions are 1 x 256 x 256. Since the model outputs a depth map, the three dimensions presumably mean batch x height x width.

That's all we need to know about the model.
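
If you would rather not open a browser, the same information can be read with the onnx python package (pip install onnx if it is not already installed). This is only a minimal sketch; the file name print_io.py is just an example, and it simply walks the graph's input/output protos.

# print_io.py - list the model's inputs/outputs without netron (example helper)
import onnx

model = onnx.load("PINTO_model_zoo/081_MiDaS_v2/saved_model/model_float32.onnx")

def dims(value_info):
    # each dimension is either a fixed size or a symbolic name
    return [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in value_info.type.tensor_type.shape.dim]

for inp in model.graph.input:
    print("input :", inp.name, dims(inp))    # expect: inputs:0 [1, 256, 256, 3]
for out in model.graph.output:
    print("output:", out.name, dims(out))    # expect: Identity:0 [1, 256, 256]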

2. Sample code

sudo -s

Copy ONNX model file to tensorrt sample folder

mkdir /usr/src/tensorrt/data/midas
cp PINTO_model_zoo/081_MiDaS_v2/saved_model/model_float32.onnx /usr/src/tensorrt/data/midas/

Copy source image 

cp PINTO_model_zoo/081_MiDaS_v2/openvino/midasv2_small_256x256/FP16/dog.jpg /usr/src/tensorrt/bin

Create new sample from sampleOnnxMNIST

cd /usr/src/tensorrt/samples
cp -a sampleOnnxMNIST sampleOnnxMiDasV2
cd sampleOnnxMiDasV2
mv sampleOnnxMNIST.cpp sampleOnnxMiDasV2.cpp

Modify Makefile

--- ../sampleOnnxMNIST/Makefile 2021-06-26 08:17:31.000000000 +0800
+++ Makefile 2021-09-27 17:10:13.212404761 +0800
@@ -1,6 +1,8 @@
-OUTNAME_RELEASE = sample_onnx_mnist
-OUTNAME_DEBUG   = sample_onnx_mnist_debug
+OUTNAME_RELEASE = sample_onnx_midasv2
+OUTNAME_DEBUG   = sample_onnx_midasv2_debug
 EXTRA_DIRECTORIES = ../common
 SAMPLE_DIR_NAME = $(shell basename $(dir $(abspath $(firstword $(MAKEFILE_LIST)))))
+COMMON_FLAGS = -I/usr/include/opencv4/opencv -I/usr/include/opencv4
+EXTRA_LIBS = -L/usr/lib/aarch64-linux-gnu/ -lopencv_dnn -lopencv_gapi -lopencv_highgui -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_video -lopencv_calib3d -lopencv_features2d -lopencv_flann -lopencv_videoio -lopencv_imgcodecs -lopencv_imgproc -lopencv_core
 MAKEFILE ?= ../Makefile.config
 include $(MAKEFILE)

Modify ../Makefile.config to get opencv correctly linked

$(OUTDIR)/$(OUTNAME_RELEASE) : $(OBJS) $(CUOBJS)
        $(ECHO) Linking: $@
-         $(AT)$(CC) -o $@ $(LFLAGS) -Wl,--start-group $(LIBS) $^ -Wl,--end-group
+        $(AT)$(CC) -o $@ $(LFLAGS) -Wl,--start-group $(LIBS) $^ -Wl,--end-group $(EXTRA_LIBS)

$(OUTDIR)/$(OUTNAME_DEBUG) : $(DOBJS) $(CUDOBJS)
        $(ECHO) Linking: $@
-        $(AT)$(CC) -o $@ $(LFLAGSD) -Wl,--start-group $(DLIBS) $^ -Wl,--end-group
+        $(AT)$(CC) -o $@ $(LFLAGSD) -Wl,--start-group $(DLIBS) $^ -Wl,--end-group $(EXTRA_LIBS)

The whole flow is to read dog.jpg as the input to depth inference, then display the dog.jpg image and the resulting depth map on screen.

Source code of sampleOnnxMiDasV2.cpp

3. Build & run

make
cd ../../bin
./sample_onnx_midasv2




4. Diff from sampleOnnxMNIST.cpp

--- ../sampleOnnxMNIST/sampleOnnxMNIST.cpp 2021-06-26 08:17:31.000000000 +0800
+++ sampleOnnxMiDasV2.cpp 2021-09-27 16:49:44.045143887 +0800
@@ -15,11 +15,11 @@
  */
 
 //!
-//! sampleOnnxMNIST.cpp
-//! This file contains the implementation of the ONNX MNIST sample. It creates the network using
-//! the MNIST onnx model.
+//! sampleOnnxMiDasV2.cpp
+//! This file contains the implementation of the ONNX MiDasV2 sample. It creates the network using
+//! the MiDasV2 onnx model.
 //! It can be run with the following command line:
-//! Command: ./sample_onnx_mnist [-h or --help] [-d=/path/to/data/dir or --datadir=/path/to/data/dir]
+//! Command: ./sample_onnx_MiDasV2 [-h or --help] [-d=/path/to/data/dir or --datadir=/path/to/data/dir]
 //! [--useDLACore=<int>]
 //!
 
@@ -37,18 +37,21 @@
 #include <iostream>
 #include <sstream>
 
+#include <opencv2/opencv.hpp>
+
+
 using samplesCommon::SampleUniquePtr;
 
-const std::string gSampleName = "TensorRT.sample_onnx_mnist";
+const std::string gSampleName = "TensorRT.sample_onnx_midas";
 
-//! \brief  The SampleOnnxMNIST class implements the ONNX MNIST sample
+//! \brief  The SampleOnnxMiDasV2 class implements the ONNX MiDasV2 sample
 //!
 //! \details It creates the network using an ONNX model
 //!
-class SampleOnnxMNIST
+class SampleOnnxMiDasV2
 {
 public:
-    SampleOnnxMNIST(const samplesCommon::OnnxSampleParams& params)
+    SampleOnnxMiDasV2(const samplesCommon::OnnxSampleParams& params)
         : mParams(params)
         , mEngine(nullptr)
     {
@@ -74,7 +77,7 @@
     std::shared_ptr<nvinfer1::ICudaEngine> mEngine; //!< The TensorRT engine used to run the network
 
     //!
-    //! \brief Parses an ONNX model for MNIST and creates a TensorRT network
+    //! \brief Parses an ONNX model for MiDasV2 and creates a TensorRT network
     //!
     bool constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
         SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
@@ -83,23 +86,23 @@
     //!
     //! \brief Reads the input  and stores the result in a managed buffer
     //!
-    bool processInput(const samplesCommon::BufferManager& buffers);
+    bool processInput(const samplesCommon::BufferManager& buffers, cv::Mat & image);
 
     //!
     //! \brief Classifies digits and verify result
     //!
-    bool verifyOutput(const samplesCommon::BufferManager& buffers);
+    bool verifyOutput(const samplesCommon::BufferManager& buffers, cv::Mat & originImage);
 };
 
 //!
 //! \brief Creates the network, configures the builder and creates the network engine
 //!
-//! \details This function creates the Onnx MNIST network by parsing the Onnx model and builds
-//!          the engine that will be used to run MNIST (mEngine)
+//! \details This function creates the Onnx MiDasV2 network by parsing the Onnx model and builds
+//!          the engine that will be used to run MiDasV2 (mEngine)
 //!
 //! \return Returns true if the engine was created successfully and false otherwise
 //!
-bool SampleOnnxMNIST::build()
+bool SampleOnnxMiDasV2::build()
 {
     auto builder = SampleUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger()));
     if (!builder)
@@ -162,24 +165,24 @@
 
     ASSERT(network->getNbInputs() == 1);
     mInputDims = network->getInput(0)->getDimensions();
-    ASSERT(mInputDims.nbDims == 4);
+    ASSERT(mInputDims.nbDims == 4); // Input is 1 x 256 x 256 x 3 
 
     ASSERT(network->getNbOutputs() == 1);
     mOutputDims = network->getOutput(0)->getDimensions();
-    ASSERT(mOutputDims.nbDims == 2);
+    ASSERT(mOutputDims.nbDims == 3); // Output is 1 x 256 x 256
 
     return true;
 }
 
 //!
-//! \brief Uses a ONNX parser to create the Onnx MNIST Network and marks the
+//! \brief Uses a ONNX parser to create the Onnx MiDasV2 Network and marks the
 //!        output layers
 //!
-//! \param network Pointer to the network that will be populated with the Onnx MNIST network
+//! \param network Pointer to the network that will be populated with the Onnx MiDasV2 network
 //!
 //! \param builder Pointer to the engine builder
 //!
-bool SampleOnnxMNIST::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
+bool SampleOnnxMiDasV2::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
     SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
     SampleUniquePtr<nvonnxparser::IParser>& parser)
 {
@@ -212,9 +215,9 @@
 //! \details This function is the main execution function of the sample. It allocates the buffer,
 //!          sets inputs and executes the engine.
 //!
-bool SampleOnnxMNIST::infer()
+bool SampleOnnxMiDasV2::infer()
 {
-    // Create RAII buffer manager object
+    // Create RAII buffer manager object
     samplesCommon::BufferManager buffers(mEngine);
 
     auto context = SampleUniquePtr<nvinfer1::IExecutionContext>(mEngine->createExecutionContext());
@@ -222,28 +225,29 @@
     {
         return false;
     }
-
+    cv::Mat image = cv::imread("dog.jpg");
+    if (image.cols == 0 || image.rows == 0)
+    {
+        printf("image is empty\n");
+        return false;
+    }
     // Read the input data into the managed buffers
     ASSERT(mParams.inputTensorNames.size() == 1);
-    if (!processInput(buffers))
+    if (!processInput(buffers, image))
     {
         return false;
     }
-
     // Memcpy from host input buffers to device input buffers
     buffers.copyInputToDevice();
-
     bool status = context->executeV2(buffers.getDeviceBindings().data());
     if (!status)
     {
         return false;
     }
-
     // Memcpy from device output buffers to host output buffers
     buffers.copyOutputToHost();
-
     // Verify results
-    if (!verifyOutput(buffers))
+    if (!verifyOutput(buffers, image))
     {
         return false;
     }
@@ -254,31 +258,30 @@
 //!
 //! \brief Reads the input and stores the result in a managed buffer
 //!
-bool SampleOnnxMNIST::processInput(const samplesCommon::BufferManager& buffers)
+bool SampleOnnxMiDasV2::processInput(const samplesCommon::BufferManager& buffers, cv::Mat & image)
 {
-    const int inputH = mInputDims.d[2];
-    const int inputW = mInputDims.d[3];
+    const int inputChannels = mInputDims.d[3];
+    const int inputH = mInputDims.d[1];
+    const int inputW = mInputDims.d[2];
 
-    // Read a random digit file
-    srand(unsigned(time(nullptr)));
-    std::vector<uint8_t> fileData(inputH * inputW);
-    mNumber = rand() % 10;
-    readPGMFile(locateFile(std::to_string(mNumber) + ".pgm", mParams.dataDirs), fileData.data(), inputH, inputW);
-
-    // Print an ascii representation
-    sample::gLogInfo << "Input:" << std::endl;
-    for (int i = 0; i < inputH * inputW; i++)
-    {
-        sample::gLogInfo << (" .:-=+*#%@"[fileData[i] / 26]) << (((i + 1) % inputW) ? "" : "\n");
-    }
-    sample::gLogInfo << std::endl;
+    printf("inputs:0 - %d x %d x %d x %d\n", mInputDims.d[0], mInputDims.d[1], mInputDims.d[2], mInputDims.d[3]);
 
-    float* hostDataBuffer = static_cast<float*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
-    for (int i = 0; i < inputH * inputW; i++)
-    {
-        hostDataBuffer[i] = 1.0 - float(fileData[i] / 255.0);
-    }
+    cv::Mat resized_image;
+    cv::resize(image, resized_image, cv::Size(inputW, inputH));
 
+    int batchIndex = 0;
+    int batchOffset = batchIndex * inputW * inputH * inputChannels;
+    float* hostDataBuffer = static_cast<float*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
+    // input shape [B,H,W,C]
+    // inputs:0 - 1 x 256 x 256 x 3
+        for (size_t h = 0; h < inputH; h++) {
+            for (size_t w = 0; w < inputW; w++) {
+ for (size_t c = 0; c < inputChannels; c++) {
+                hostDataBuffer[batchOffset + (h * inputW + w) * inputChannels + c] =
+                    float(float(resized_image.at<cv::Vec3b>(h, w)[c]) / 255.0); // Division 255.0 is to convert uint8_t color to float_t
+ }
+            }
+        }
     return true;
 }
 
@@ -287,39 +290,27 @@
 //!
 //! \return whether the classification output matches expectations
 //!
-bool SampleOnnxMNIST::verifyOutput(const samplesCommon::BufferManager& buffers)
+bool SampleOnnxMiDasV2::verifyOutput(const samplesCommon::BufferManager& buffers, cv::Mat & originImage )
 {
-    const int outputSize = mOutputDims.d[1];
     float* output = static_cast<float*>(buffers.getHostBuffer(mParams.outputTensorNames[0]));
-    float val{0.0f};
-    int idx{0};
-
-    // Calculate Softmax
-    float sum{0.0f};
-    for (int i = 0; i < outputSize; i++)
-    {
-        output[i] = exp(output[i]);
-        sum += output[i];
-    }
-
-    sample::gLogInfo << "Output:" << std::endl;
-    for (int i = 0; i < outputSize; i++)
-    {
-        output[i] /= sum;
-        val = std::max(val, output[i]);
-        if (val == output[i])
-        {
-            idx = i;
-        }
-
-        sample::gLogInfo << " Prob " << i << "  " << std::fixed << std::setw(5) << std::setprecision(4) << output[i]
-                         << " "
-                         << "Class " << i << ": " << std::string(int(std::floor(output[i] * 10 + 0.5f)), '*')
-                         << std::endl;
-    }
-    sample::gLogInfo << std::endl;
-
-    return idx == mNumber && val > 0.9f;
+    const int output0_row = mOutputDims.d[1];
+    const int output0_col = mOutputDims.d[2];
+    
+    printf("Identity:0 - %d x %d x %d\n", mOutputDims.d[0], mOutputDims.d[1], mOutputDims.d[2]);
+    
+    cv::Mat image = cv::Mat::zeros(cv::Size(output0_row, output0_col), CV_8U);
+    for (int row = 0; row < output0_row; row++) {
+    for (int col = 0;col < output0_col; col++) {
+        image.at<uint8_t>(row, col) = (uint8_t)(*(output + (row * output0_col) + col) / 8);
+    }
+    }
+    
+    cv::imshow("img", image);
+    cv::imshow("orgimg", originImage);
+    int key = cv::waitKey(0);
+    cv::destroyAllWindows();
+    
+ return true;
 }
 
 //!
@@ -330,16 +321,15 @@
     samplesCommon::OnnxSampleParams params;
     if (args.dataDirs.empty()) //!< Use default directories if user hasn't provided directory paths
     {
-        params.dataDirs.push_back("data/mnist/");
-        params.dataDirs.push_back("data/samples/mnist/");
+        params.dataDirs.push_back("data/midas/");
     }
     else //!< Use the data directory provided by the user
     {
         params.dataDirs = args.dataDirs;
     }
-    params.onnxFileName = "mnist.onnx";
-    params.inputTensorNames.push_back("Input3");
-    params.outputTensorNames.push_back("Plus214_Output_0");
+    params.onnxFileName = "model_float32.onnx";
+    params.inputTensorNames.push_back("inputs:0");
+    params.outputTensorNames.push_back("Identity:0");
     params.dlaCore = args.useDLACore;
     params.int8 = args.runInInt8;
     params.fp16 = args.runInFp16;
@@ -353,12 +343,12 @@
 void printHelpInfo()
 {
     std::cout
-        << "Usage: ./sample_onnx_mnist [-h or --help] [-d or --datadir=<path to data directory>] [--useDLACore=<int>]"
+        << "Usage: ./sample_onnx_MiDasV2 [-h or --help] [-d or --datadir=<path to data directory>] [--useDLACore=<int>]"
         << std::endl;
     std::cout << "--help          Display help information" << std::endl;
     std::cout << "--datadir       Specify path to a data directory, overriding the default. This option can be used "
                  "multiple times to add multiple directories. If no data directories are given, the default is to use "
-                 "(data/samples/mnist/, data/mnist/)"
+                 "(data/samples/MiDasV2/, data/MiDasV2/)"
               << std::endl;
     std::cout << "--useDLACore=N  Specify a DLA engine for layers that support DLA. Value can range from 0 to n-1, "
                  "where n is the number of DLA engines on the platform."
@@ -387,9 +377,9 @@
 
     sample::gLogger.reportTestStart(sampleTest);
 
-    SampleOnnxMNIST sample(initializeSampleParams(args));
+    SampleOnnxMiDasV2 sample(initializeSampleParams(args));
 
-    sample::gLogInfo << "Building and running a GPU inference engine for Onnx MNIST" << std::endl;
+    sample::gLogInfo << "Building and running a GPU inference engine for Onnx MiDasV2" << std::endl;
 
     if (!sample.build())
     {