I’m struggling to get a very simple C++ CUDA application to run inside a Docker container.
- The code:
#include <stdio.h>

__global__ void helloCUDA()
{
    printf("Hello CUDA from a CUDA Kernel!\n");
}

int main()
{
    helloCUDA<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
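(For debugging, here is a variant of the same program with basic error checking added after the launch and after the sync — just a sketch of what I intend to try next, in case the error strings point at the cause:)

```cuda
#include <stdio.h>

__global__ void helloCUDA()
{
    printf("Hello CUDA from a CUDA Kernel!\n");
}

int main()
{
    helloCUDA<<<1, 1>>>();

    // Did the launch itself fail (e.g. no kernel image for this arch)?
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("Launch error: %s\n", cudaGetErrorString(err));

    // Did the kernel fail while executing?
    err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        printf("Sync error: %s\n", cudaGetErrorString(err));

    return 0;
}
```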
- The base image:
nvidia/cuda:12.2.2-devel-ubuntu22.04
- I run the container as follows:
docker run --gpus all -d --name container -it --rm mycudaimage
- Inside the container, I can see my GPU when I run nvidia-smi:
+-----------------------------------------------------------+
| NVIDIA-SMI 550.99        Driver Version: 552.74           |
| CUDA Version: 12.4                                        |
|-----------------------------------------------------------|
| GPU  Name            Persistence-M                        |
| Fan  Temp  Perf      Pwr:Usage/Cap                        |
|===========================================================|
|   0  Quadro P3200    On                                   |
| N/A  60C   P5        9W / 88W                             |
+-----------------------------------------------------------+
- I compile my code using nvcc:
nvcc hello_world.cu
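(If it matters, I could also compile with the GPU architecture pinned explicitly — the Quadro P3200 is a Pascal part, which should be compute capability 6.1. I haven’t verified this fixes anything yet; it’s just the next thing I plan to try:)

```shell
# Build for sm_61 (Pascal / Quadro P3200) explicitly,
# so the binary contains a kernel image for this GPU.
nvcc -arch=sm_61 hello_world.cu -o hello
./hello
```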
Then I run ./a.out, but nothing happens. I have already tried print statements outside the CUDA kernel, and they work fine.
Any suggestions?
Thank you,
Rafael.