I’m a newbie to GPU acceleration. I just tried a basic LWJGL binding to CUDA with a simple kernel (no shared memory); the function signature is as follows:
```cuda
__global__ void compute(
    unsigned int n,
    unsigned long long int* timeMs,
    double* a, double* b, double* c,
    double* g_odata)
```
The kernel basically retrieves the data for its thread id from the arrays above (timeMs, a, b, c, etc.), does some math, and puts the result into g_odata at the corresponding thread id; n is the total number of threads to compute (and of course the kernel checks whether the thread id overshoots n). There is no shared memory or reduction.
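For context, here’s a minimal sketch of what the kernel body looks like; the actual math isn’t what matters here, so the expression below is just a stand-in:

```cuda
__global__ void compute(
    unsigned int n,
    unsigned long long int* timeMs,
    double* a, double* b, double* c,
    double* g_odata)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                         // guard against overshooting n

    // Stand-in for the real math: each thread reads only its own elements
    // and writes only its own result, so threads are fully independent.
    g_odata[i] = a[i] * b[i] + c[i] + (double)timeMs[i];
}
```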
Now here’s the curious part: when I measure the TOTAL time taken for the kernel to complete, it varies with n (total thread count / parallelism) and with the block size.
(I have a GPU with 80 multiprocessors.)
Through clock64(), I added timestamps at the start and end of the kernel function and collected the total elapsed time for each thread. It’s apparent that the more threads there are, the longer they take for the SAME per-thread work.
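In simplified form, the timing looks roughly like this (a sketch: the math is a stand-in, and cycles stands for a per-thread output buffer copied back to the host; note that clock64() returns SM clock ticks, not milliseconds):

```cuda
__global__ void computeTimed(unsigned int n, double* g_odata, long long int* cycles)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    long long int start = clock64();                // per-SM cycle counter at entry

    g_odata[i] = sin((double)i) * cos((double)i);   // stand-in for the real per-thread math

    long long int end = clock64();                  // per-SM cycle counter at exit
    cycles[i] = end - start;                        // elapsed cycles for this thread
                                                    // (ticks, not ms; divide by the SM
                                                    // clock rate to convert to time)
}
```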
Now, my questions:
- Why does the kernel function take more time when there are more threads? Is the execution interleaved (i.e. can the scheduler pause one thread before it completes and execute another)?
- Why is there a plateau after the bump at around 100 threads, and why does the time take off again after that?
- Why does performance vary with the block size? I read that the grid/block split is just a developer’s perspective and has no bearing on performance (especially for my fully independent threads with no sharing/reduction). So why does it matter, and how do I pick the best block size? (See the sketch after this list.)
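For reference, one way to let the runtime suggest a block size is the occupancy API; here’s a minimal, self-contained sketch with a placeholder kernel standing in for mine:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel with the same shape as mine; the body stands in for the real math.
__global__ void compute(unsigned int n,
                        unsigned long long int* timeMs,
                        double* a, double* b, double* c,
                        double* g_odata)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    g_odata[i] = a[i] * b[i] + c[i] + (double)timeMs[i];
}

int main()
{
    int minGridSize = 0;   // smallest grid size that can still reach full occupancy
    int blockSize   = 0;   // suggested threads per block
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, compute, 0, 0);
    printf("suggested block size: %d, min grid size: %d\n", blockSize, minGridSize);
    return 0;
}
```

The grid size would then be (n + blockSize - 1) / blockSize blocks, but I’d still like to understand why the choice matters for fully independent threads.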