TensorRT inference with Triton Server Docker
I’m studying how to use the combination of TensorRT and Triton. I’m working on a server with the following environment:

- NVIDIA-SMI 535.161.08
- Driver Version: 535.161.08
- CUDA Version: 12.2
- Ubuntu 22.04

I’ve installed TensorRT 10.0.1 with the following code: