# Any plan for constant-time computing?

Apart from cv::integral, which computes sums in constant time, do you have plans for other constant-time computations, such as weighted sums or gradient direction, like what www.sigoone.com proposes?



I had not heard of them, but I am certainly interested. It looks like they plan to charge, so we couldn't integrate it directly into OpenCV.

If you know of papers though, those are definitely operations I'm interested in speeding up. I can certainly spend some time working on OpenCV versions.

I admit to being curious just what preprocessing they are doing and how it compares to a reasonably optimized implementation. I'll have to check out the demo once I have time.

EDIT: Found a paper that describes the constant-time histogram: "Constant Time O(1) Contextual and Variational Contrast Enhancement with Integral Histogram" by Yu-Wen Tsai, Fan-Chieh Cheng, and Shanq-Jang Ruan.

Got the O(1) histogram running in about twice the time of the sigoone library after a couple of minutes of work, so there's room for improvement. It's pretty clunky though, and takes a LOT of memory.

```cpp
// One integral image per histogram bin: S[b] at (r,c) counts the pixels in
// rows [0,r], cols [0,c] whose value falls in bin b.
vector<Mat> S;
int bins = 16;
int step = 256 / bins;
for (int b = 0; b < bins; ++b)
{
    S.push_back(Mat(im.rows, im.cols, CV_32S));
    int lo = b * step;        // inclusive lower bound of this bin
    int hi = (b + 1) * step;  // exclusive upper bound

    // First row: running count along the row.
    {
        int* tp = S[b].ptr<int>(0);
        const uchar* ip = im.ptr(0);
        tp[0] = (ip[0] >= lo && ip[0] < hi) ? 1 : 0;
        for (int c = 1; c < im.cols; ++c)
        {
            tp[c] = tp[c - 1];
            if (ip[c] >= lo && ip[c] < hi)
                tp[c]++;
        }
    }
    // Remaining rows: standard integral-image recurrence
    // S(r,c) = i(r,c) + S(r-1,c) + S(r,c-1) - S(r-1,c-1).
    for (int r = 1; r < im.rows; ++r)
    {
        const int* tpp = S[b].ptr<int>(r - 1);
        int* tp = S[b].ptr<int>(r);
        const uchar* ip = im.ptr(r);
        tp[0] = tpp[0] + ((ip[0] >= lo && ip[0] < hi) ? 1 : 0);
        for (int c = 1; c < im.cols; ++c)
        {
            tp[c] = tpp[c] + tp[c - 1] - tpp[c - 1];
            if (ip[c] >= lo && ip[c] < hi)
                tp[c]++;
        }
    }
}
```


Maybe you can find something useful in the founder's previous papers or in his PhD thesis?

He also holds some patents on active stereo vision.

Nothing directly related to constant-time computing at first sight, though.

( 2017-10-26 04:54:49 -0500 )

Good job! It looks like simply an integral image computed per histogram bin. Indeed, this approach requires a lot of memory: 32 bits × 16 bins for each pixel (and that is for 16 bins only; imagine 256...). I guess sigoone has low-memory-footprint versions for those kinds of situations; with their histogram demo, a 10× lower memory footprint can be reached using some kind of compression. They claim preprocessing speed improvements will come with their next release; once it's available, we will see how it compares to this solution.

( 2017-11-30 17:46:48 -0500 )

Yeah, it's not pretty. If someone has a paper on a more effective way to do it, or just figures one out, let me know and I'll work on implementing it.

( 2017-11-30 18:30:46 -0500 )
