I was looking into an issue regarding cv::cuda::Stream reported here. Some GPU memory is allocated via DefaultDeviceInitializer when Stream::Null() is called, and since the DefaultDeviceInitializer object is globally defined, its GPU memory deallocation code may run after the CUDA context has already been destroyed.
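For context, here is a minimal sketch (my assumption of how the problem surfaces, not a verified reproduction) of the kind of usage I mean. Simply touching Stream::Null() from user code seems to be enough to trigger the lazy, global allocation:

```cpp
#include <opencv2/core/cuda.hpp>

int main()
{
    // Touching the default (null) stream triggers the lazy initialization
    // performed by the globally defined DefaultDeviceInitializer.
    cv::cuda::Stream& s = cv::cuda::Stream::Null();

    // Trivial work on the default stream (buffer name is just illustrative).
    cv::cuda::GpuMat d_img(128, 128, CV_8UC1);
    d_img.setTo(cv::Scalar(0), s);
    s.waitForCompletion();

    return 0;
    // After main() returns, static destructors run. If the CUDA context has
    // already been torn down by then, the deallocation done by the global
    // DefaultDeviceInitializer can hit an invalid context at shutdown.
}
```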
The question is: why does OpenCV implement Stream objects so closely tied to GPU memory? IMHO, isn't it rather common for GPU-allocated objects to be handled by several different CUDA streams? If so, what is the advantage of this kind of implementation? (See the sketch below for what I mean.)
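A rough sketch of that last point, under my assumption that allocation and stream usage are orthogonal (buffer and stream names are just for illustration): a GpuMat allocated once can be operated on by whichever stream the caller picks, so the allocation itself does not need to be owned by any particular stream.

```cpp
#include <opencv2/core/cuda.hpp>

int main()
{
    // Allocate two device buffers up front, independent of any stream.
    cv::cuda::GpuMat d_a(256, 256, CV_32FC1), d_b(256, 256, CV_32FC1);

    // Two independently created streams.
    cv::cuda::Stream stream1, stream2;

    // The same pre-allocated buffers are used from different streams;
    // neither stream owns the GPU memory.
    d_a.setTo(cv::Scalar(1.0f), stream1);
    d_b.setTo(cv::Scalar(2.0f), stream2);

    stream1.waitForCompletion();
    stream2.waitForCompletion();
    return 0;
}
```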