See this nsys profile:
I have observed that during TensorRT execution, the forward pass of some layers acquires a lock before launching its kernel.
I tried to identify the specific lock and noticed that it is consistently released at exactly the moment an HtoD cudaMemcpyAsync call, issued from a separate thread, returns.
The layers that exhibit this behavior share a common characteristic: they all have “conv” in their names, although not every layer with “conv” in its name shows the phenomenon.
I have observed this in more than one model, including YOLO, which consists largely of conv layers.
So a cudaMemcpyAsync call seems to block the forwarding of a certain type of layer in TensorRT, even though the blocked forwarding does not require any data copying itself, only a brief kernel launch, and I can't figure out why.
I am trying to overlap the copy operation in one thread with the forward pass in another thread, letting them execute concurrently to make better use of the copy bandwidth, so this blocking behavior is quite frustrating.
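To make the setup concrete, here is a simplified sketch of what my two threads do (the names are made up and this is not my actual code; I am using TensorRT's enqueueV3 API here only as an example, which may differ from the exact API version in my project):

```cpp
// Illustrative only: engine deserialization, tensor address setup and error
// handling are omitted; streamA/streamB are created elsewhere with cudaStreamCreate.
#include <cuda_runtime.h>
#include <NvInfer.h>
#include <thread>

// Thread A: runs the network repeatedly on its own stream.
void forwardLoop(nvinfer1::IExecutionContext* context, cudaStream_t stream) {
    for (int i = 0; i < 1000; ++i) {
        context->enqueueV3(stream);   // the conv-layer kernels get launched here
        cudaStreamSynchronize(stream);
    }
}

// Thread B: issues unrelated HtoD copies on a separate stream at the same time.
void copyLoop(void* dDst, const void* hSrcPinned, size_t bytes, cudaStream_t stream) {
    for (int i = 0; i < 1000; ++i) {
        cudaMemcpyAsync(dDst, hSrcPinned, bytes, cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);
    }
}

// Both threads share the same device / CUDA context:
//   std::thread tA(forwardLoop, context, streamA);
//   std::thread tB(copyLoop, dInput, hInputPinned, inputBytes, streamB);
//   tA.join(); tB.join();
```

Each thread uses its own cudaStream_t, and the copy source in the sketch is pinned host memory, so I expected the two loops to proceed independently of each other.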
As I am new to CUDA, I am not aware of any mechanism that would cause a kernel launch to be blocked by an unrelated copy operation issued from another thread.
This is quite counter-intuitive to me, because I thought memcpy and kernel execution use different hardware resources and should therefore be able to run concurrently.
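For reference, this is the kind of textbook overlap my expectation is based on, written in plain CUDA without TensorRT (a minimal, self-contained sketch; buffer names and sizes are arbitrary):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for a short layer kernel.
__global__ void dummyLayer(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hSrc = nullptr, *dA = nullptr, *dB = nullptr;
    cudaMallocHost((void**)&hSrc, bytes);   // pinned host buffer, needed for truly async copies
    cudaMalloc((void**)&dA, bytes);
    cudaMalloc((void**)&dB, bytes);

    cudaStream_t computeStream, copyStream;
    cudaStreamCreate(&computeStream);
    cudaStreamCreate(&copyStream);

    for (int iter = 0; iter < 100; ++iter) {
        // Kernel on one stream ...
        dummyLayer<<<(n + 255) / 256, 256, 0, computeStream>>>(dA, n);
        // ... and an unrelated HtoD copy on another stream; the copy engine and
        // the SMs are separate units, so these should be able to overlap.
        cudaMemcpyAsync(dB, hSrc, bytes, cudaMemcpyHostToDevice, copyStream);
    }
    cudaDeviceSynchronize();

    cudaStreamDestroy(computeStream);
    cudaStreamDestroy(copyStream);
    cudaFree(dA);
    cudaFree(dB);
    cudaFreeHost(hSrc);
    printf("done\n");
    return 0;
}
```

In an nsys timeline of a program like this I would expect the HtoD copies to overlap with the kernels, which is why the lock in the TensorRT case surprised me.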
What lock is the forwarding step likely acquiring, and why does it acquire it?