I deployed a Python script that uses ProcessPoolExecutor with max_workers=5 on a 96-core AMD Threadripper CPU with two NVIDIA RTX 4090 GPUs. The script processes 500 very large images (8192x8192 RGB), segmenting objects and classifying them. It runs fine for 10-100 images, but almost always crashes past the 100-image mark for no apparent reason. It used to throw a segmentation fault and dump core, but after some improvements it no longer does even that.
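For context, a minimal sketch of how the pool is set up (process_image is a placeholder standing in for my real segmentation-and-classification pipeline):

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def process_image(path):
    # Placeholder for the real work: load the 8192x8192 image,
    # segment objects on the GPU, classify each object.
    return path, "ok"

def run_batch(paths):
    results = []
    # Five worker processes, as in the deployed script.
    with ProcessPoolExecutor(max_workers=5) as pool:
        futures = [pool.submit(process_image, p) for p in paths]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

if __name__ == "__main__":
    print(run_batch([f"img_{i}.tif" for i in range(10)]))
```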
Memory is not an issue (1 TB RAM), the GPUs are not saturated, and nvidia-smi shows GPU memory usage peaking around 50%, spikes included. My debug logs record nothing, and valgrind reports nothing out of the ordinary either.
To top it all off, the same 500-image script runs fine on an AWS g6.12xlarge (AMD EPYC 7R13, 48 cores, 200 GB RAM, NVIDIA L4 GPUs with 24 GB each), which is far less powerful than my current setup. Both machines run the same Ubuntu version, with slightly different NVIDIA drivers (though should that matter, given that the script does run for small batches on my machine?).
My leading theories are:
- Thermal throttling
- Consumer-grade setups are different from AWS
- Faulty equipment
Am I missing something? Has anyone encountered this issue?
I have tried all sorts of exception handling, various debug logs, valgrind, and switching between AWS instances. I have even run my script from a bash wrapper that passes 10 images at a time, creating a new bash shell on each iteration, i.e. for (1..50) bash -c <script.py>. Every single time it crashes, with no useful debug logs.
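The batching workaround above looks roughly like this when sketched in Python with subprocess (the batch size of 10 matches what I used; the command tuple is parameterized here so the loop itself can be exercised independently of my script):

```python
import subprocess

def run_in_batches(images, batch_size=10, cmd=("python", "script.py")):
    """Run the pipeline on batch_size images at a time, each batch in a
    freshly launched process, mirroring: for (1..50) bash -c <script.py>."""
    completed = []
    for i in range(0, len(images), batch_size):
        batch = images[i:i + batch_size]
        # New interpreter per batch, so every chunk starts from a clean slate.
        subprocess.run(list(cmd) + batch, check=True)
        completed.extend(batch)
    return completed
```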