
lovaj's profile - activity

2020-10-30 07:14:23 -0600 received badge  Popular Question (source)
2020-10-07 00:09:19 -0600 received badge  Popular Question (source)
2020-04-22 02:37:01 -0600 received badge  Popular Question (source)
2019-06-17 17:55:43 -0600 received badge  Famous Question (source)
2018-09-11 12:16:09 -0600 received badge  Notable Question (source)
2018-05-08 01:39:39 -0600 received badge  Popular Question (source)
2017-11-09 05:44:12 -0600 marked best answer Locality Sensitive Hashing in OpenCV for image processing

This is my first image processing application, so please be kind to this filthy peasant.

THE APPLICATION:

I want to implement a fast application (performance is crucial, even over accuracy) where, given a photo (taken by a mobile phone) containing a movie poster, it finds the most similar photo in a given dataset and returns a similarity score. The dataset is composed of similar pictures (taken by mobile phones, containing movie posters). The images can have different sizes and resolutions and can be taken from different viewpoints (but there is no rotation, since the posters are supposed to always be upright).

Any suggestion on how to implement such an application is welcome.

FEATURE DESCRIPTIONS IN OPENCV:

I've never used OpenCV before, and I've read this tutorial about Feature Detection and Description from OpenCV.

From what I've understood, these algorithms are supposed to find keypoints (usually corners) and optionally compute descriptors (which describe each keypoint and are used for matching two different images). I say "optionally" since some of them (e.g. FAST) provide only keypoints.

MOST SIMILAR IMAGE PROBLEM AND LSH:

The algorithms above don't by themselves solve the problem "given an image, how do I find the most similar one in a dataset quickly?". To do that, we can use the keypoints and descriptors obtained by any of the previous algorithms. The problem looks like a nearest-neighbor search, and Locality Sensitive Hashing (LSH) is a fast and popular way to find an approximate solution in high-dimensional spaces.

THE QUESTION:

What I don't understand is how to use the result of any of the previous algorithms (i.e. keypoints and descriptors) in LSH.

Is there any implementation that treats this problem?
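To make the question concrete, here is a minimal sketch of what I imagine, using cv::flann's LSH index over ORB descriptors (the LSH parameters, the ratio-test threshold and the file names are placeholders/guesses on my part):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
   // ORB keypoints + binary descriptors for the query and one dataset image
   cv::Mat query = cv::imread("query.jpg", cv::IMREAD_GRAYSCALE);
   cv::Mat train = cv::imread("dataset0.jpg", cv::IMREAD_GRAYSCALE);
   cv::Ptr<cv::ORB> orb = cv::ORB::create();
   std::vector<cv::KeyPoint> kq, kt;
   cv::Mat dq, dt;
   orb->detectAndCompute(query, cv::noArray(), kq, dq);
   orb->detectAndCompute(train, cv::noArray(), kt, dt);

   // LSH index over the dataset descriptors (Hamming distance, binary data)
   cv::flann::Index index(dt, cv::flann::LshIndexParams(12, 20, 2),
                          cvflann::FLANN_DIST_HAMMING);

   // approximate 2-NN search + ratio test as a crude similarity score
   cv::Mat indices, dists;
   index.knnSearch(dq, indices, dists, 2, cv::flann::SearchParams());
   int good = 0;
   for (int i = 0; i < dists.rows; ++i)   // Hamming distances come back as ints
      if (dists.at<int>(i, 0) < 0.75f * dists.at<int>(i, 1))
         ++good;
   std::cout << "similarity score: " << good << std::endl;
   return 0;
}

Is this the intended way to plug descriptors into LSH, or is there a readier implementation?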

2017-05-27 08:52:27 -0600 asked a question Building OpenCV with AVX2 (or AVX-512) support?

I want to build OpenCV 3.2 with AVX2 support. Using -DENABLE_AVX2=ON with gcc, I get these compiler flags:

-msse -msse2 -mno-avx -mavx2

However, following this I should use -DCPU_BASELINE=AVX2, and with that I instead get these compiler flags:

-msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2

What is the correct way to do this?

In addition, I have an AVX-512 compatible machine (Intel KNL), and I would obtain even better performance if I could build with that instruction set. I've seen some traces of AVX-512 support here, but I can't understand how to use it.
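For concreteness, the configure line I imagine for the new dispatch system (the CPU_BASELINE/CPU_DISPATCH values are my guesses from the wiki page above, and I haven't verified that 3.2 accepts them; the source path is just an example):

cmake -DCPU_BASELINE=AVX2 -DCPU_DISPATCH=AVX2,AVX_512F ../opencv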

2017-05-21 10:32:59 -0600 commented answer How to write this interpolate function using OpenCV?

Thanks @Tetragramm, I think you did the best you could. I really appreciate it, and your help was crucial in my project! Chapeau, sir! :)

2017-05-21 04:25:54 -0600 commented answer How to write this interpolate function using OpenCV?

@Tetragramm thanks for your answer, you came closer to an actual solution than anyone else I've talked to! However (as you can see in my updated answer), the result is somewhat approximate, i.e. some values are different... For example, you can see how in img some values differ from the myImg obtained through your approach. I don't know if there is a solution for this. Btw, I would need the boolean value too; could you please explain the last part a little better?

2017-05-20 09:45:24 -0600 asked a question How to write this interpolate function using OpenCV?

I want to optimize this code, in particular this function:

bool interpolate(const Mat &im, float ofsx, float ofsy, float a11, float a12, float a21, float a22, Mat &res)
{         
   bool ret = false;
   // input size (-1 for the safe bilinear interpolation)
   const int width = im.cols-1;
   const int height = im.rows-1;
   // output size
   const int halfWidth  = res.cols >> 1;
   const int halfHeight = res.rows >> 1;
   int dim = res.rows * res.cols;
   float *out = res.ptr<float>(0);
   for (int j=-halfHeight; j<=halfHeight; ++j)
   {
      const float rx = ofsx + j * a12;
      const float ry = ofsy + j * a22;
      #pragma omp simd
      for(int i=-halfWidth; i<=halfWidth; ++i)
      {
         float wx = rx + i * a11;
         float wy = ry + i * a21;
         const int x = (int) floor(wx);
         const int y = (int) floor(wy);
         if (x >= 0 && y >= 0 && x < width && y < height)
         {
            // compute weights
            wx -= x; wy -= y;
            // bilinear interpolation
            *out++ = 
               (1.0f - wy) * ((1.0f - wx) * im.at<float>(y,x)   + wx * im.at<float>(y,x+1)) +
               (       wy) * ((1.0f - wx) * im.at<float>(y+1,x) + wx * im.at<float>(y+1,x+1));
         } else {
            *out++ = 0;
            ret =  true; // touching boundary of the input            
         }
      }
   }
   return ret;
}

Can anybody please help me find an equivalent function in OpenCV for the code above? I'm not an expert in image processing, so I don't really know in detail what the function above does, but I think it is a warp-affine transformation, even though I don't really know how to define an OpenCV equivalent.

It seems that the input images are the blurred image, the original image, or small patches of it. The output is always a small patch.

Here are some samples:

Sample 1:

Args:

ofx=175.497 ofsy=315.06 a11=1.69477 a12=0.0671724 a21=0.0679493 a22=1.56309

Input image:

[image]

Output image:

[image]

Sample 2:

Args:

ofx=572.121 ofsy=326.659 a11=0.871508 a12=0 a21=0.346405 a22=1.14744

Input image:

[image]

Output image:

[image]

Sample 3:

ofx=66.571 ofsy=148.991 a11=1.12027 a12=0.126609 a21=0.126609 a22=2.53436

Input image:

[image]

Output image:

[image]

These are different sections of the original code where the function is called:

  // warp input according to current shape matrix
  interpolate(wrapper.prevBlur, lx, ly, u11*ratio, u12*ratio, u21*ratio, u22*ratio, img);

  Mat smoothed(patchImageSize, patchImageSize, CV_32FC1, (void *)&workspace.front());
  // interpolate with det == 1
  if (!interpolate(img, x, y, a11, a12, a21, a22, smoothed))
  {

  // subsample with corresponding scale
  interpolate(smoothed, (float)(patchImageSize>>1), (float)(patchImageSize>>1), imageToPatchScale, 0, 0, imageToPatchScale, patch);


  // ok, do the interpolation
  interpolate(img, x, y, a11, a12, a21, a22, patch);

This is what happens when I try to use warpAffine in the following way:

  // warp input according to current shape matrix
  interpolate(wrapper.prevBlur, lx, ly, u11*ratio, u12*ratio, u21*ratio, u22*ratio, img);

  Mat warp_mat( 2, 3, CV_32FC1 );
  Mat myPatch(img.rows, img.cols, CV_32FC1);
  warp_mat.at<float>(0,2) = lx;
  warp_mat.at<float>(1,2) = ly;
  warp_mat.at<float ...
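For reference, my current best guess at a full equivalent, derived by matching the loop's mapping of centered output coordinates to input coordinates (a dst-to-src mapping, hence WARP_INVERSE_MAP). This is my own derivation and is unverified:

// The loop computes src_x = ofsx + i*a11 + j*a12 and src_y = ofsy + i*a21 + j*a22,
// with (i,j) centered on the output patch, so the translation term has to fold in
// the recentering of the output grid.
void interpolateCV(const cv::Mat &im, float ofsx, float ofsy,
                   float a11, float a12, float a21, float a22, cv::Mat &res)
{
   const float halfW = (float)(res.cols >> 1);
   const float halfH = (float)(res.rows >> 1);
   cv::Matx23f M(a11, a12, ofsx - a11*halfW - a12*halfH,
                 a21, a22, ofsy - a21*halfW - a22*halfH);
   cv::warpAffine(im, res, M, res.size(),
                  cv::INTER_LINEAR | cv::WARP_INVERSE_MAP,
                  cv::BORDER_CONSTANT, cv::Scalar(0)); // zero outside, like the loop
}

// note: the original also returns whether the patch touched the border;
// warpAffine does not report that, and its boundary pixels may differ slightly.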
2017-05-20 04:37:33 -0600 commented answer Is there any OpenCV or IPP equivalent for this function?

@Tetragramm I've opened this question about the input and output images that you asked, please help!

2017-05-20 04:36:41 -0600 asked a question How to write this interpolate function using OpenCV?

2017-05-06 08:18:43 -0600 commented answer Is there any OpenCV or IPP equivalent for this function?

@Tetragramm I'll post the images very soon as you asked btw ;)

2017-05-06 08:17:51 -0600 commented answer Is there any OpenCV or IPP equivalent for this function?

@Tetragramm thanks again for your help. I'll try to do that. In this question you can see a little more detail.

2017-05-04 03:34:36 -0600 commented answer Is there any OpenCV or IPP equivalent for this function?

@Tetragramm thanks for your help. I kind of understand the process, but I'm a little bit confused about how to use the code... and you know, just a little mistake and you get 0 results in image processing! Could you PLEASE help me write the correct code?

2017-05-03 08:48:38 -0600 commented question Is there any OpenCV or IPP equivalent for this function?

@pi-null-mezon I'm sorry, it was already optimized on that point; I updated the question. Can you help me in some other way? I'm seriously stuck on this; an equivalent function would help me so much.

2017-05-03 07:20:57 -0600 asked a question Is there any OpenCV or IPP equivalent for this function?

I have this interpolate function, taken from this code, which I want to optimize:

bool interpolate(const Mat &im, float ofsx, float ofsy, float a11, float a12, float a21, float a22, Mat &res)
{         
   bool ret = false;
   // input size (-1 for the safe bilinear interpolation)
   const int width = im.cols-1;
   const int height = im.rows-1;
   // output size
   const int halfWidth  = res.cols >> 1;
   const int halfHeight = res.rows >> 1;
   int dim = res.rows * res.cols;
   float *out = res.ptr<float>(0);
   const float *imptr  = im.ptr<float>(0);
   for (int j=-halfHeight; j<=halfHeight; ++j)
   {
      const float rx = ofsx + j * a12;
      const float ry = ofsy + j * a22;
      #pragma omp simd
      for(int i=-halfWidth; i<=halfWidth; ++i)
      {
         float wx = rx + i * a11;
         float wy = ry + i * a21;
         const int x = (int) floor(wx);
         const int y = (int) floor(wy);
         if (x >= 0 && y >= 0 && x < width && y < height)
         {
            // compute weights
            wx -= x; wy -= y;
            // row offsets into the flat buffer (the row stride is im.cols)
            const int rowOffset  = y * im.cols;
            const int rowOffset1 = (y + 1) * im.cols;
            // bilinear interpolation
            *out++ =
            (1.0f - wy) * ((1.0f - wx) * imptr[rowOffset+x]  + wx * imptr[rowOffset+x+1]) +
            (       wy) * ((1.0f - wx) * imptr[rowOffset1+x] + wx * imptr[rowOffset1+x+1]);
        } else {
            *out++ = 0;
            ret =  true; // touching boundary of the input            
         }
      }
   }
   return ret;
}

I'm not an image-processing guy, and I'm really struggling to write the equivalent using the OpenCV implementation. This is crucial for my project, since the function above is incredibly time consuming.

Please help.

2017-05-02 07:42:27 -0600 asked a question Building OpenCV 3.2 with MKL and simd optimization

Disclaimer: I'm not a cmake expert, I've used it only to build OpenCV.

Reading this, it seems that we need to specify that we want to build OpenCV with MKL and SIMD support.

So do I have to add the following flags?

-DENABLE_AVX2=ON -DHAVE_MKL=ON -DMKL_WITH_TBB=ON

Digging a little bit in the opencv repository, in this file, it seems we only need to enable IPP (with -DWITH_IPP=ON, and optionally specify -DWITH_IPP if we want to provide our own version of IPP).

Am I correct, or am I missing something?
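For concreteness, the full configure line I'm currently considering (just the flags mentioned above combined; I don't know whether this set is correct or complete, and the source path is just an example):

cmake -DENABLE_AVX2=ON -DWITH_IPP=ON -DHAVE_MKL=ON -DMKL_WITH_TBB=ON ../opencv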

2017-05-02 04:06:20 -0600 commented answer Convert cv::Mat to std::vector without copying

But this is going to copy element by element, right? It's not just copying the references, right? In other words: the cost is linear, right?

2017-05-01 16:37:55 -0600 commented answer Is this code correct to allocate two aligned cv::Mat?

@matman if you would take a look at this question, I would really appreciate it

2017-05-01 16:37:28 -0600 commented answer Is this code correct to allocate two aligned cv::Mat?

@matman speaking of which, how do I build OpenCV with AVX support?

2017-05-01 16:36:03 -0600 asked a question Convert cv::Mat to std::vector without copying

I've this function:

void foo(cv::Mat &mat){
  float *p = mat.ptr<float>(0);
  //modify mat values through p
}

And I have this code that calls the function:

void bar(std::vector<unsigned char> &vec){
  //...
  cv::Mat res(m, n, CV_32FC1, (void *)&workspace.front());
}

However, the code above has a performance problem: vec probably isn't aligned. In fact, the Intel compiler says that reference *out has unaligned access. I hope that this is the reason.

Btw, as I found out, cv::Mat data is going to be aligned. So a simple workaround would be to:

  1. Create cv::Mat res(m,n)
  2. Call foo(m);
  3. Assign the vec pointer to m.ptr<float>(0)

As you can imagine, performance here is crucial, so deep copies are out of the question.

I tried the code in this question:

vec = res.data;

But I get a compiler error. Besides, I don't know whether the code above does an (inefficient) copy or just changes the data pointed to by vec.

How can I do this smoothly? Otherwise, another workaround would be appreciated.
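For reference, the zero-copy direction that I know works is the opposite one: wrapping the vector's buffer in a cv::Mat header (assuming the vector outlives the Mat). A sketch:

std::vector<float> vec(m * n);
cv::Mat res(m, n, CV_32FC1, vec.data()); // header only: no allocation, no copy
foo(res);                                // foo's writes land directly in vec
// caveat: res must not outlive vec, and anything that reallocates res detaches it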

2017-05-01 15:10:19 -0600 asked a question Is this code correct to allocate two aligned cv::Mat?

Disclaimer: I am a SIMD newbie, but I have a good knowledge of OpenCV.

Let's suppose I have basic OpenCV code.

cv::Mat1f a(n, n);
cv::Mat1f b(n, n);
cv::Mat1f x;
//fill a and b somehow
x = a + b;

Now, let's suppose I want to use the code above on an AVX2 or even AVX-512 machine. Having the data 32-byte aligned could be a great benefit for performance. The data above is probably not aligned; I'm almost certain of it, because the optimization reports generated by the Intel compiler say that the data is unaligned, but I could be wrong.

So what if I allocate 32-aligned pointers and use them for the operations above? Something like:

float *apt = static_cast<float*>(_mm_malloc(sizeof(float)*n*m, 32));
float *bpt = static_cast<float*>(_mm_malloc(sizeof(float)*n*m, 32));
float *xpt = static_cast<float*>(_mm_malloc(sizeof(float)*n*m, 32));
cv::Mat1f a(n, m, apt);
cv::Mat1f b(n, m, bpt);
cv::Mat1f x(n, m, xpt);
x = a + b;

I see three problems with the code above:

  1. Matrix operations on cv::Mat return a new cv::Mat object which, unfortunately, would not be aligned.
  2. Am I reinventing the wheel? I don't know if OpenCV already includes this kind of optimization, even though I didn't find anything useful on Google.
  3. Reading a little bit about Intel intrinsics: when we use _mm_malloc, we should use _mm_free. However, cv::Mat is "kind of" a smart pointer, allocated and de-allocated behind the hood, where probably some free or delete function is called (see the sketch below).
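For problem 3, this is a minimal sketch of what I have in mind; my understanding (not verified) is that cv::Mat never frees a user-provided buffer, so pairing _mm_malloc/_mm_free manually should be safe:

#include <immintrin.h>      // _mm_malloc / _mm_free
#include <opencv2/core.hpp>

void alignedSum(int n, int m)
{
   float *apt = static_cast<float*>(_mm_malloc(sizeof(float)*n*m, 32));
   float *bpt = static_cast<float*>(_mm_malloc(sizeof(float)*n*m, 32));
   float *xpt = static_cast<float*>(_mm_malloc(sizeof(float)*n*m, 32));
   {
      cv::Mat1f a(n, m, apt); // headers over the aligned buffers
      cv::Mat1f b(n, m, bpt);
      cv::Mat1f x(n, m, xpt);
      //fill a and b somehow, then:
      cv::add(a, b, x);       // writes in place into the aligned xpt buffer
   }                          // Mat destructors release only the headers
   _mm_free(apt); _mm_free(bpt); _mm_free(xpt);
}

Note cv::add(a, b, x) instead of x = a + b: the latter would replace x's buffer with a freshly allocated (possibly unaligned) one, which is exactly problem 1.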

To overcome problem 1, in very critical sections of code where data alignment could improve performance, I could get the pointers back and do the sum through them. Something like:

//do the code above
float *aptr = a.ptr<float>(0);
float *bptr = b.ptr<float>(0);
float *xptr = x.ptr<float>(0);
for(int i=0; i<n; i++)
  for(int j=0; j<m; j++)
    xptr[i*m+j] = aptr[i*m+j] + bptr[i*m+j];

But again, I'm afraid I'm reinventing the wheel. Notice that this is a toy example to keep it minimal and verifiable; my actual code is much more complicated, and compiler optimizations may not be obvious.

I compile my code with icpc and the following flags:

INTEL_OPT=-O3 -ipo -simd -xCORE-AVX2 -parallel -qopenmp -fargument-noalias -ansi-alias -no-prec-div -fp-model fast=2 -fma -align -finline-functions
INTEL_PROFILE=-g -qopt-report=5 -Bdynamic -shared-intel -debug inline-debug-info -qopenmp-link dynamic -parallel-source-info=2 -ldl
2017-04-30 04:57:05 -0600 commented question how to rewrite cv::Mat::at in a pointer way?

@LBerger daaaaaamn, in the whole project width=im.cols, this was a hell of a joke! Thanks so much for noticing it!

2017-04-30 04:16:19 -0600 asked a question how to rewrite cv::Mat::at in a pointer way?

I have this function:

bool interpolate(const Mat &im, float ofsx, float ofsy, float a11, float a12, float a21, float a22, Mat &res)
{         
   // input size (-1 for the safe bilinear interpolation)
   const int width = im.cols-1;
   const int height = im.rows-1;
   // output size
   const int halfWidth  = res.cols >> 1;
   const int halfHeight = res.rows >> 1;
   float *out = res.ptr<float>(0);
   const float *imptr  = im.ptr<float>(0);
   const float *resptr = im.ptr<float>(0);
   for (int j=-halfHeight; j<=halfHeight; ++j)
   {
      //...
      for(int i=-halfWidth; i<=halfWidth; ++i, out++)
      {
         const int x = (int) //something;
         const int y = (int) //something;
         if (x >= 0 && y >= 0 && x < width && y < height)
         {
            std::cout<<"im(y,x)="<<im.at<float>(y,x)<<" imptr[y*width+x]="<<imptr[y*width+x]<<std::endl;    
         }
      }
   }
   return ret;
}

Where I want to use const float *imptr instead of im.at<float>(y,x) for better efficiency. However, I can't understand why imptr[y*width+x] != im.at<float>(y,x). Why?
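(Following up on the comment from @LBerger above: width here is im.cols-1, while the row stride of the flat buffer is im.cols, so the access should presumably be:)

imptr[y*im.cols + x]   // == im.at<float>(y,x) when im is continuous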

2017-04-29 06:53:48 -0600 asked a question How to check which GaussianBlur version is used in OpenCV?

I built OpenCV with IPP, but I want to be sure that cv::GaussianBlur is executed using that version and not another implementation.

I'm having this doubt because I built OpenCV with OpenCL turned on, and I see a lot of calls into libnvidia-opencl in the call stack, while I never use OpenCL/CUDA/whatever-GPU-stuff in my code.

I'll rebuild OpenCV with -DWITH_OPENCL=OFF, but I want some way to be sure that I'm using the IPP implementation. This is because I've installed the latest version of IPP and have a very performant Intel CPU.
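In the meantime, here is a small sketch of the runtime checks I know of, cv::getBuildInformation() plus the cv::ipp / cv::ocl switches (my assumption is that these gate the dispatch):

#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <iostream>

int main()
{
   // build-time summary: shows whether IPP and OpenCL were compiled in
   std::cout << cv::getBuildInformation() << std::endl;
   // runtime switches
   std::cout << "use IPP: "    << cv::ipp::useIPP()    << std::endl;
   std::cout << "use OpenCL: " << cv::ocl::useOpenCL() << std::endl;
   cv::ocl::setUseOpenCL(false); // force the CPU (hopefully IPP) code path
   return 0;
}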

2017-04-24 16:25:04 -0600 asked a question Why these two gaussian blur sequences are so different?

I'm trying to optimize this code, and in particular I'm trying to optimize how the sequence of gaussian blurs are computed.

I've rewritten the code in this way (notice that most of the parameters and conventions were defined by the original code author):

   cv::Mat octaveLayer =   //input image
   int numberOfScales = 3; //user parameter
   int levels =            //user parameter 
   int scaleCycles = numberOfScales+2;
   float sigmaStep = pow(2.0f, 1.0f / (float) numberOfScales);
   std::vector<float> sigmas;
   // (initialization of the first sigma entries is elided in this excerpt)
   for(int i=2; i<numberOfScales+2; i++){
         sigmas.push_back(sigmas[i-1]*sigmaStep);
   }
   vector<Mat> blurs (scaleCycles*levels+1, Mat());
   for(int i=0; i<levels; i++){
       blurs[i*scaleCycles+1] = octaveLayer.clone();
       for (int j = 1; j < scaleCycles; j++){
           float sigma = sigmas[j] * sqrt(sigmaStep * sigmaStep - 1.0f);
           blurs[j+1+i*scaleCycles] = gaussianBlur(blurs[j+i*scaleCycles], sigma);
           if(j == numberOfScales){
               octaveLayer = halfImage(blurs[j+1+i*scaleCycles]);
           }
       }
   }

Where:

Mat gaussianBlur(const Mat input, const float sigma)
{
   Mat ret(input.rows, input.cols, input.type());
   int size = (int)(2.0 * 3.0 * sigma + 1.0); if (size % 2 == 0) size++;      
   GaussianBlur(input, ret, Size(size, size), sigma, sigma, BORDER_REPLICATE);
   return ret;
}

Mat halfImage(const Mat &input)
{
   // auto start = startTimerHesaff(); // timing helper from the original project (not needed here)
   Mat n(input.rows/2, input.cols/2, input.type());
   float *out = n.ptr<float>(0);
   for (int r = 0, ri = 0; r < n.rows; r++, ri += 2)
      for (int c = 0, ci = 0; c < n.cols; c++, ci += 2)
         *out++ = input.at<float>(ri,ci);
   return n;
}

Images are in the 0-255 value range, and this is the code used to read them:

  Mat tmp = imread(argv[1]);
  Mat image(tmp.rows, tmp.cols, CV_32FC1, Scalar(0));

  float *out = image.ptr<float>(0);
  unsigned char *in  = tmp.ptr<unsigned char>(0); 

  for (size_t i=tmp.rows*tmp.cols; i > 0; i--)
  {
     *out = (float(in[0]) + in[1] + in[2])/3.0f;
     out++;
     in+=3;
  }

I think this is all the information you need to understand the code above; please let me know otherwise.

The code above, which is correct, generates 1760 keypoints.

I actually don't know many details of the code above, for example how the different sigmas are computed or the sigmaStep value etc.; I'm relying on the original code for that.

Now, what I want to do is compute each blur independently, so that the work can be parallelized. According to Wikipedia, this can be done because:

Applying multiple, successive gaussian blurs to an image has the same effect as applying a single, larger gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. For example, applying successive gaussian blurs with radii of 6 and 8 gives the same results as applying a single gaussian blur of radius 10, since √(6² + 8²) = 10. Because of this relationship, processing time cannot be saved by simulating a gaussian blur with successive, smaller blurs — the time required will be at ...

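In code form, the identity I'm relying on (a hedged self-check; the sigma values are just illustrative, and src is any CV_32FC1 image):

// two successive blurs (s1 then s2) should equal one blur with sqrt(s1^2 + s2^2)
float s1 = 6.0f, s2 = 8.0f;
float sEquiv = std::sqrt(s1*s1 + s2*s2); // = 10 here
cv::Mat tmp, twoStep, oneStep;
cv::GaussianBlur(src, tmp,     cv::Size(), s1, s1, cv::BORDER_REPLICATE);
cv::GaussianBlur(tmp, twoStep, cv::Size(), s2, s2, cv::BORDER_REPLICATE);
cv::GaussianBlur(src, oneStep, cv::Size(), sEquiv, sEquiv, cv::BORDER_REPLICATE);
// twoStep and oneStep should match up to numerical error and border effects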
2017-04-23 12:40:12 -0600 commented answer Weird numbers with cv::Mat

@berak that was what I was thinking about!

2017-04-23 11:52:12 -0600 commented answer Weird numbers with cv::Mat

@berak is it correct to say that this is going to be very inefficient in terms of performance because of the deep copy? This is for an HPC application; I need to do this as efficiently as possible

2017-04-23 11:27:08 -0600 asked a question Weird numbers with cv::Mat

I have this struct:

struct Result{
    Result(const cv::Mat1f &descriptor, const Keypoint &keypoint) :
        descriptor(descriptor), keypoint(keypoint){
      std::cout<<"constructor="<<std::endl<<descriptor<<std::endl;
    }
    const cv::Mat1f descriptor;
    const Keypoint keypoint;
};

Where Keypoint I think is irrelevant for this question (but I'll post it if you ask about it).

I have this function:

void foo(...,vector<Result> &res){
...
std::vector<float> vec;
//fill vec
cv::Mat1f desc(1,128,vec.data());
std::cout<<"desc="<<std::endl<<desc<<std::endl;
res.push_back(Result(desc, Keypoint(...)));
}

Which is called like this:

std::vector<Result> res;
for(int i=0; i<N; i++){
   foo(..., res);
   std::cout<<"res="<<std::endl<<res.back().descriptor<<std::endl;
}

Sometimes, the three std::cout calls print the same value. But in some cases, the last print (the one after calling foo) gives different results:

desc=
[2, 10, 54, 3, 2, 2, 5, 6, 7, 27, 111, 5, 3, 17, 11, 12, 8, 31, 113, 6, 3, 6, 10, 27, 2, 8, 63, 9, 13, 17, 2, 1, 54, 10, 16, 5, 13, 22, 18, 85, 37, 18, 17, 3, 99, 113, 29, 41, 113, 7, 21, 5, 28, 40, 34, 113, 12, 6, 14, 10, 93, 113, 8, 23, 54, 21, 50, 31, 20, 20, 7, 78, 18, 69, 82, 29, 86, 108, 9, 20, 77, 44, 98, 54, 17, 20, 15, 113, 27, 44, 29, 14, 53, 74, 10, 31, 8, 16, 36, 45, 39, 8, 4, 1, 35, 105, 45, 16, 11, 5, 4, 3, 6, 28, 53, 92, 32, 5, 5, 2, 27, 61, 23, 14, 9, 4, 6, 4]
constructor=
[2, 10, 54, 3, 2, 2, 5, 6, 7, 27, 111, 5, 3, 17, 11, 12, 8, 31, 113, 6, 3, 6, 10, 27, 2, 8, 63, 9, 13, 17, 2, 1, 54, 10, 16, 5, 13, 22, 18, 85, 37, 18, 17, 3, 99, 113, 29, 41, 113, 7, 21, 5, 28, 40, 34, 113, 12, 6, 14, 10, 93, 113, 8, 23, 54, 21, 50, 31, 20, 20, 7, 78, 18, 69, 82, 29, 86, 108, 9, 20, 77, 44, 98, 54, 17, 20, 15, 113, 27, 44, 29, 14, 53, 74, 10, 31, 8, 16, 36, 45, 39, 8, 4, 1, 35, 105, 45, 16, 11, 5, 4, 3, 6, 28, 53, 92, 32, 5, 5, 2, 27, 61, 23, 14, 9, 4, 6, 4]
res=
[1.1986551e+10, 0, 8.6976665e+23, 0, 0.00012219016, 1.0372148e-08, 6.3080874e-10, 0, 7, 27, 111, 5, 127.62504, 0, 0, 0, 8.6650199e-38, 0, 8.6650199e-38, 0, 8.6653068e-38, 0, 8.6653068e-38, 0, 0, 0, 0, 0, 8.6650512e-38, 0, 8.6650916e-38, 0, 0, 0, 0, 0, 0, 22.000002, 0, 0, 0, 0, 8.6651656e-38, 0, 0, 0, 8.6652015e-38, 0, 0, 0, 0, 5, 1.6213893e+09, 0, 0, 0, 12, 6, 0, 0, 144173.88, 0, 8.6651544e-38, 0, 0, 21.000174, 50, 31, 20, 20, 7, 78, 18, 69, 0, 0, 144173.88, 0 ...
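In hindsight (see the comments above about the deep copy), a sketch of the fix that explanation suggests: clone the descriptor so Result owns its data instead of aliasing the local vec, which dies when foo returns:

res.push_back(Result(desc.clone(), Keypoint(...))); // deep copy: desc must not alias vec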
2017-04-23 05:22:13 -0600 commented answer Replace a chain of image blurs with one blur

@Tetragramm Btw I don't really understand your point about the difference and the 255 scale. I read the images; they have a 0-255 value range, which is normal. I apply the gaussianBlur, and the result is in 0-255. The diff is in 0-255 too. Then, I display the image in this 0-255 system. What is the problem in all this? The convertTo(imageF_8UC3, CV_8U, 255); is used only when I use imwrite, not when I use imshow. So to see the true diff you propose to use diff /= 255 instead of diff, right?

2017-04-23 05:02:02 -0600 commented answer Replace a chain of image blurs with one blur

BUT if the difference were so small, my algorithm would generate almost the same number of keypoints, right? Instead, the code with the chain of blurs (the correct, original version) generates 1760 keypoints, while the one-blur version generates 2397! Something is wrong here and I'm really stuck trying to understand what XD

2017-04-22 10:57:45 -0600 commented answer Replace a chain of image blurs with one blur

@Tetragramm from my understanding of your answer, you say that the blurs obtained from the chain version and the ones obtained in one shot are the same... But then why is the number of produced keypoints different? The only explanation is that the produced blurs are different.

2017-04-20 09:30:44 -0600 commented answer Replace a chain of image blurs with one blur

@Tetragramm I'm sorry, I'm missing your point. By "your problem" do you mean how images are shown/saved, or why the single-blur formula doesn't work? I'm sorry if I didn't understand your answer; if you can give me more details, it would be much appreciated