
cv::Mat and the generic cv::Ptr<> implement a classical refcount strategy with atomic operations. These are regarded as very fast, especially because most processors implement in hardware all the operations needed for atomic increment/decrement. In most scenarios (unless you run a 1000-thread app on a prototype CPU), the delay introduced by atomic operations can barely be measured.

Regarding safety, Mat and Ptr<> offer the standard thread-safety guarantees: they can be safely read or written in parallel (multiple concurrent reads are safe, multiple concurrent writes are safe, but mixed reads and writes are not).


Explanation: an atomic memory operation guarantees that no other atomic operation is working on the same memory address at the same time. So when you issue an atomic increment, for example, the CPU checks in special hardware that the given address is not locked by another atomic operation. All this is very fast, and the overhead of the address check is low. The only real cost comes when you actually have to wait for another instruction to finish; but an increment takes a few cycles on a modern processor, and a memory write a few hundred cycles. That is not a lot.

Plus, given that you most probably create/copy matrices rarely in your OpenCV app (dozens of times per second, probably), both the chance of having to wait and the waiting time itself are dwarfed by the rest of your processing.

If you perform hundreds or thousands of Mat operations per second (like mat1 = mat2, etc.), the problem is more likely in your algorithm's logic.



EDIT

I've quickly checked the link you posted, and it seems the author emphasizes the need to use refcounts instead of more heavyweight solutions (semaphores, critical sections, mutexes). Indeed, refcounting is the fastest way to synchronize object lifetimes in a multithreaded environment.