I’m studying how to use the combination of TensorRT and Triton. I’m working on this server: NVIDIA-SMI 535.161.08, Driver Version 535.161.08, CUDA Version 12.2, Ubuntu 22.04, and I installed TensorRT 10.0.1 with the following commands:
!wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/local_repos/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0_1.0-1_amd64.deb
!echo "✅ DOWNLOADED"
!sudo dpkg -i nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0_1.0-1_amd64.deb
!sudo cp /var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0/nv-tensorrt-local-9A1EDFBA-keyring.gpg /usr/share/keyrings/
!echo "✅ INSTALLED"
!sudo apt-get update
!sudo apt-get install tensorrt -y  # PROBLEM: this still installs version 10.0.1 instead of 8.6.1
!sudo pip install tensorrt
!sudo apt-get install python3-libnvinfer-dev -y
!sudo pip install protobuf
!sudo apt-get install uff-converter-tf -y
!sudo pip install numpy onnx
!sudo apt-get install onnx-graphsurgeon -y
!echo "✅ DONE"
!dpkg -l | grep TensorRT
!cd /usr/src/tensorrt/samples/trtexec && make CUDA_INSTALL_DIR=/usr/local/cuda/ CUDNN_INSTALL_DIR=/usr/local/cuda/ TRT_LIB_DIR=/usr/src/tensorrt/bin
!sudo cp /usr/src/tensorrt/bin/trtexec /usr/local/bin/
!echo "✅ DONE"
And here I have my first question: why do I still get version 10.0.1? In any case, I am able to export and run my models. I export a .plan, create the model repository (layout sketched after the config), and use this config.pbtxt:
name: "model"
platform: "tensorrt_plan"
max_batch_size : 0
input [
{
name: "keras_tensor_177"
data_type: TYPE_FP32
dims: [1, 224, 224, 3]
}
]
output [
{
name: "output_0"
data_type: TYPE_FP32
dims: [-1, 1000]
}
]
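For context, the repository follows the standard Triton layout and the .plan itself comes out of trtexec, roughly like this (the ONNX filename and paths are illustrative, not my exact commands):

!mkdir -p /home/ubuntu/model_repository/model/1
!trtexec --onnx=model.onnx --saveEngine=/home/ubuntu/model_repository/model/1/model.plan
# resulting tree: model_repository/model/config.pbtxt + model_repository/model/1/model.plan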
but when I run the Triton server container with

docker run --gpus=all -ti --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 -v /home/ubuntu/model_repository:/models nvcr.io/nvidia/tritonserver:24.04-py3 tritonserver --model-repository=/models

I get the following error:
NVIDIA Release 24.04 (build 90085237)
Triton Server Version 2.45.0
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
NOTE: CUDA Forward Compatibility mode ENABLED.
Using CUDA 12.4 driver version 550.54.15 with kernel driver version 535.161.08.
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
I0506 20:17:29.745006 1 pinned_memory_manager.cc:275] Pinned memory pool is created at '0x7fa18c000000' with size 268435456
I0506 20:17:29.747106 1 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0506 20:17:29.751690 1 model_lifecycle.cc:469] loading: model:1
I0506 20:17:29.813944 1 tensorrt.cc:65] TRITONBACKEND_Initialize: tensorrt
I0506 20:17:29.813982 1 tensorrt.cc:75] Triton TRITONBACKEND API version: 1.19
I0506 20:17:29.813989 1 tensorrt.cc:81] 'tensorrt' TRITONBACKEND API version: 1.19
I0506 20:17:29.814004 1 tensorrt.cc:105] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0506 20:17:29.816206 1 tensorrt.cc:231] TRITONBACKEND_ModelInitialize: model (version 1)
I0506 20:17:29.990121 1 logging.cc:46] Loaded engine size: 100 MiB
E0506 20:17:30.026450 1 logging.cc:40] 1: [stdArchiveReader.cpp::StdArchiveReaderInitCommon::46] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 236, Serialized Engine Version: 237)
I0506 20:17:30.035947 1 tensorrt.cc:274] TRITONBACKEND_ModelFinalize: delete model state
E0506 20:17:30.035985 1 model_lifecycle.cc:638] failed to load 'model' version 1: Internal: unable to load plan file to auto complete config: /models/model/1/model.plan
I0506 20:17:30.036001 1 model_lifecycle.cc:773] failed to load 'model'
I0506 20:17:30.036090 1 server.cc:607]
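From the error, my reading is that the .plan was serialized by a newer TensorRT (engine version tag 237) than the one the 24.04 container deserializes it with (tag 236). This is roughly how I compared the two sides (assuming libnvinfer is findable inside the image; the tensorrt module on the host is the pip-installed one from above):

!python3 -c "import tensorrt; print(tensorrt.__version__)"  # TensorRT that built the plan, on the host
!docker run --rm nvcr.io/nvidia/tritonserver:24.04-py3 bash -c 'find / -name "libnvinfer.so.*" 2>/dev/null'  # TensorRT shipped in the container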
I cannot find anything useful and up to date online, so I am asking here in case someone with more experience can help. I can run any other type of model in Triton, ONNX or TensorFlow; I have problems only with the TensorRT ones (I check what actually loads with the endpoints sketched below).
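For reference, this is how I verify the server and each model once it loads, using the standard Triton HTTP endpoints ("model" is the model name from the config above):

!curl -s -o /dev/null -w "%{http_code}\n" localhost:8000/v2/health/ready  # 200 when the server is up
!curl -s -o /dev/null -w "%{http_code}\n" localhost:8000/v2/models/model/ready  # 200 once the model has loaded
!curl -s localhost:8000/v2/models/model  # model metadata with inputs/outputs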
Any ideas? Could someone share the steps they used to set up their environment and the Triton Docker image, in case I need to rebuild everything from scratch?