
czerhtan's profile - activity

2018-06-27 19:28:34 -0600 received badge  Popular Question (source)
2016-04-30 14:36:12 -0600 received badge  Enlightened (source)
2016-04-30 14:36:12 -0600 received badge  Good Answer (source)
2013-07-22 22:16:17 -0600 received badge  Nice Answer (source)
2013-06-26 15:18:12 -0600 received badge  Good Question (source)
2013-06-03 09:36:40 -0600 answered a question The difference between BruteForce-Hamming and BruteForce-HammingLUT

I cannot find the exact source where the two variations are implemented, but a Hamming distance computation typically relies on two steps: an XOR between the two bit vectors, followed by a population count (popcount) on the result.

The XOR is pretty straightforward on any type of architecture, but the popcount step can sometimes be sped up by using processor-specific instruction sets (such as SSE2/SSSE3/...). When these instruction sets are not available/supported, it is also possible to use a look-up table in an attempt to speed up the 'naive' algorithm.
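To make the two strategies concrete, here is a rough sketch (not the actual OpenCV implementation, just an illustration for byte- or word-aligned binary descriptors), showing a LUT-based popcount and a compiler-builtin popcount:

#include <cstdint>
#include <cstddef>

// LUT-based popcount: one table lookup per byte of the XOR result.
static unsigned char popcountLUT[256];

// Must be called once before using hammingLUT().
static void initPopcountLUT()
{
    for (int i = 0; i < 256; ++i)
    {
        unsigned char c = 0;
        for (int b = 0; b < 8; ++b)
            c += (i >> b) & 1;
        popcountLUT[i] = c;
    }
}

// Hamming distance using the LUT (portable, no special instructions needed).
int hammingLUT(const uint8_t* a, const uint8_t* b, size_t nBytes)
{
    int dist = 0;
    for (size_t i = 0; i < nBytes; ++i)
        dist += popcountLUT[a[i] ^ b[i]];      // XOR, then table lookup
    return dist;
}

// Hamming distance using a compiler/hardware popcount (GCC/Clang builtin shown;
// on capable CPUs this typically compiles down to a POPCNT instruction).
int hammingBuiltin(const uint64_t* a, const uint64_t* b, size_t nWords)
{
    int dist = 0;
    for (size_t i = 0; i < nWords; ++i)
        dist += __builtin_popcountll(a[i] ^ b[i]);  // XOR, then popcount
    return dist;
}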

If, in your case, the regular Hamming variation is faster, I would assume your OpenCV binaries were built with SSE2/SSSE3/... support enabled, and the algorithm defaults to the accelerated version. On low-end processors, the LUT version is usually faster (although it always depends on the LUT size and the bit vector length).

Info on Hamming dist: http://en.wikipedia.org/wiki/Hamming_distance

Info on 'popcount' algorithms and accelerated implementations: http://wm.ite.pl/articles/sse-popcount.html

2013-06-03 09:30:18 -0600 received badge  Teacher (source)
2013-06-03 09:11:59 -0600 answered a question CvGetCaptureProperty returning -1 all the time

It may well be normal behavior that a camera feed does not provide timestamps for each frame; since you are the one querying the frames at your own rate, you are probably expected to generate the timestamps yourself.
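For example, a minimal sketch of that idea using the C++ interface (camera index and frame count are placeholders; the same approach applies when grabbing frames with cvQueryFrame in the C API):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);                 // open the default camera (index assumed)
    if (!cap.isOpened())
        return -1;

    const double freq  = cv::getTickFrequency();
    const int64  start = cv::getTickCount();

    cv::Mat frame;
    for (int i = 0; i < 100 && cap.read(frame); ++i)
    {
        // Timestamp in milliseconds, relative to the start of the capture loop.
        double tMsec = (cv::getTickCount() - start) * 1000.0 / freq;
        std::cout << "frame " << i << " grabbed at " << tMsec << " ms" << std::endl;
    }
    return 0;
}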

2013-06-02 16:48:48 -0600 received badge  Nice Question (source)
2013-06-02 16:34:15 -0600 received badge  Editor (source)
2013-06-02 11:21:34 -0600 received badge  Student (source)
2013-06-02 11:09:15 -0600 asked a question cv::Vec<...,...> vs direct access performance for multi-channel matrices

I'm currently trying to reduce the overhead cost of accessing cv::Mat elements in semi-critical code; when manipulating CV_8UC1 (grayscale) images, I can directly access the element I want by using one of the following lines:

uchar val = img.at<uchar>(row,col);
    or
uchar val = img.data[img.step.p[0]*row + col];

So far, so good; performance is fine. These two lines are effectively identical, as the .at<...> function is just an inlined data access. The problem comes up when trying to access elements in multi-channel matrices: the following line, unlike what I assumed, crashes at run-time, since the matrix is still considered two-dimensional.

uchar val = img.at<uchar>(row,col,cn)       (DOES NOT WORK)

Looking around for an 'official' solution revealed that using the following lines was the most common way to go:

const Vec3b vec = img.at<Vec3b>(row,col);
uchar val = vec[cn];
    or
uchar val = img.at<Vec3b>(row,col)[cn];

The thing is, going through the Vec<...,...> structure to access a single channel value is extremely costly: some quick profiling showed it to be at least 10 times slower than a direct 'data' access.

Am I wrong in assuming this is the most common solution? The performance hit is quite significant, and falling back to manual data access (relying on knowledge of the underlying data layout) gives a major improvement:

uchar val = img.data[img.step.p[0]*row + img.step.p[1]*col + cn];
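For reference, here is a minimal, compilable version of the two access patterns I am comparing (image size, indices and channel are arbitrary placeholders):

#include <opencv2/core/core.hpp>

int main()
{
    cv::Mat img(480, 640, CV_8UC3, cv::Scalar::all(0));
    const int row = 10, col = 20, cn = 1;

    // 1) Vec-based access (the 'official' way).
    uchar v1 = img.at<cv::Vec3b>(row, col)[cn];

    // 2) Manual offset into the raw buffer:
    //    step.p[0] = bytes per row, step.p[1] = bytes per element (channels here).
    uchar v2 = img.data[img.step.p[0]*row + img.step.p[1]*col + cn];

    return (v1 == v2) ? 0 : 1;   // both should read the same value
}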

Is there any reason why the Vec<...,...> approach does not offer a better performance, or why the .at<...> function doesn't support multi-channel access?