Despite .onnx models having clear benefits in C++ environments, most guides have focused on Python.
OnnxContainer is simple and flexible. Supported execution providers are CPU, CUDA, and TensorRT.
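With the underlying ONNX Runtime C++ API, selecting an execution provider looks roughly like the sketch below (the model path is a placeholder, and OnnxContainer presumably wraps this setup behind its own interface):

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "onnx_example");
  Ort::SessionOptions options;

  // Append CUDA as the preferred provider; onnxruntime falls back to the
  // CPU provider for anything the appended provider cannot run.
  OrtCUDAProviderOptions cuda_options{};  // device_id 0 by default
  options.AppendExecutionProvider_CUDA(cuda_options);

  // For TensorRT, append it before CUDA so it gets first pick of nodes:
  // OrtTensorRTProviderOptions trt_options{};
  // options.AppendExecutionProvider_TensorRT(trt_options);

  // ORT_TSTR handles the wide-char paths expected on Windows.
  Ort::Session session(env, ORT_TSTR("resnet152.onnx"), options);
  return 0;
}
```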
main.cpp contains example usage of a ResNet152 model, which you can download here.
- IOBinding for CUDA and TensorRT (see the sketch after this list)
- Support for input and output tensor shape parsing
- Support for multiple input/output nodes
- Further optimizations and speed benchmarks
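For reference, shape parsing and IOBinding with the raw ONNX Runtime C++ API look roughly like this sketch (the node name "output" is a placeholder, not a name taken from the repository):

```cpp
#include <onnxruntime_cxx_api.h>

#include <cstdio>
#include <vector>

void bind_io(Ort::Session& session) {
  Ort::AllocatorWithDefaultOptions allocator;

  // Walk every input node (covers models with multiple inputs) and read
  // its name and shape; -1 entries mark dynamic dimensions.
  for (size_t i = 0; i < session.GetInputCount(); ++i) {
    auto name = session.GetInputNameAllocated(i, allocator);
    std::vector<int64_t> shape = session.GetInputTypeInfo(i)
                                     .GetTensorTypeAndShapeInfo()
                                     .GetShape();
    std::printf("input %zu: %s (%zu dims)\n", i, name.get(), shape.size());
  }

  // IOBinding keeps tensors in device memory so repeated Run() calls on
  // CUDA/TensorRT avoid host<->device copies.
  Ort::MemoryInfo cuda_info("Cuda", OrtDeviceAllocator, /*device_id=*/0,
                            OrtMemTypeDefault);
  Ort::IoBinding binding(session);
  binding.BindOutput("output", cuda_info);   // let ORT allocate on the GPU
  // binding.BindInput("input", gpu_tensor); // an Ort::Value already on GPU
  // session.Run(Ort::RunOptions{}, binding);
}
```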
- onnxruntime 1.14.1 or above, with GPU support
- CUDA 11.7 or newer
- CMake, gflags, OpenCV
- (optional) vcpkg
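If you opt into vcpkg, the gflags and OpenCV dependencies can be installed through it (port names as in the public vcpkg registry):

```
vcpkg install gflags opencv
```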
- Set the onnxruntime installation directory with `-DONNXRUNTIME_ROOTDIR=[YOUR_ONNXRUNTIME_INSTALLATION]`. This defaults to `C:/Program Files/onnxruntime` if not explicitly set.
- (optional) Set vcpkg as your CMake toolchain file with `-DCMAKE_TOOLCHAIN_FILE=[YOUR_VCPKG_INSTALLATION]/scripts/buildsystems/vcpkg.cmake`.
cmake -A x64 -B ./build -S . -DCMAKE_TOOLCHAIN_FILE=[YOUR_VCPKG_INSTALLATION]/scripts/buildsystems/vcpkg.cmake -DONNXRUNTIME_ROOTDIR=[YOUR_ONNXRUNTIME_INSTALLATION]
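Configuring with `-A x64` selects a multi-config Visual Studio generator, so the build type is chosen at build time, e.g.:

```
cmake --build ./build --config Release
```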
- Download the ONNX model file from the onnxruntime GitHub page
Input
onnx_example.exe --image_file cat.jpg --provider CPU
Output
Rank 1 class: "Egyptian cat" with probability: 0.621877
Rank 2 class: "tabby, tabby cat" with probability: 0.324264
Rank 3 class: "tiger cat" with probability: 0.0519
Rank 4 class: "lynx, catamount" with probability: 0.000556309
Rank 5 class: "tiger, Panthera tigris" with probability: 6.72054e-05
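The ranking above is standard softmax-plus-top-k post-processing over the classifier logits; a minimal sketch of that step (not necessarily how main.cpp implements it):

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Convert raw logits to probabilities and return indices of the k best.
std::vector<size_t> top_k(const std::vector<float>& logits, size_t k) {
  // Softmax with max-subtraction for numerical stability.
  float max_logit = *std::max_element(logits.begin(), logits.end());
  std::vector<float> probs(logits.size());
  float sum = 0.f;
  for (size_t i = 0; i < logits.size(); ++i) {
    probs[i] = std::exp(logits[i] - max_logit);
    sum += probs[i];
  }
  for (float& p : probs) p /= sum;

  // Partial sort: indices of the k largest probabilities, best first.
  std::vector<size_t> idx(probs.size());
  std::iota(idx.begin(), idx.end(), 0);
  std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                    [&](size_t a, size_t b) { return probs[a] > probs[b]; });
  idx.resize(k);
  return idx;
}
```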
- Download the ONNX model file from the PaddleDetection GitHub page
Input
pp_yolo_e_example.exe --image_file ./super_shy.JPG --obj_thresh 0.4
Output
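`--obj_thresh` drops detections whose confidence falls below the cutoff before they are drawn; a minimal sketch of that filtering (the Detection struct is an assumption for illustration, not the repository's type):

```cpp
#include <vector>

// Hypothetical detection record; field names are assumptions.
struct Detection {
  float score;   // objectness/confidence from the detector head
  float box[4];  // x1, y1, x2, y2
  int class_id;
};

// Keep only detections at or above the --obj_thresh cutoff.
std::vector<Detection> filter_by_threshold(const std::vector<Detection>& dets,
                                           float obj_thresh) {
  std::vector<Detection> kept;
  for (const auto& d : dets)
    if (d.score >= obj_thresh) kept.push_back(d);
  return kept;
}
```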