Implementation of cornerHarris in OpenCV 2

asked 2013-10-05 21:14:03 -0600

stereomatching

updated 2013-10-06 08:21:20 -0600

static void
cornerEigenValsVecs( const Mat& src, Mat& eigenv, int block_size,
                     int aperture_size, int op_type, double k=0.,
                     int borderType=BORDER_DEFAULT )
{
  int depth = src.depth();
  double scale = (double)(1 << (aperture_size - 1)) * block_size; // #1 I can't understand this line
  // why not just initialize it as "double scale = 1.0"?

  if( depth == CV_8U )
      scale *= 255.;
  scale = 1./scale;      

  Mat Dx, Dy;            
  Sobel( src, Dx, CV_32F, 1, 0, aperture_size, scale, 0, borderType );
  Sobel( src, Dy, CV_32F, 0, 1, aperture_size, scale, 0, borderType );      

//......
}

void cv::cornerHarris( InputArray _src, OutputArray _dst, int blockSize, int ksize, double k, int borderType )
{
    Mat src = _src.getMat();
    _dst.create( src.size(), CV_32F );
    Mat dst = _dst.getMat();
    cornerEigenValsVecs( src, dst, blockSize, ksize, HARRIS, k, borderType );
}

This is the source code from GitHub (I removed some code to make it easier to read). I can't understand why they initialize the scale like that (#1). I would simply initialize the scale to 1.
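To make the question concrete, here is a minimal standalone sketch (my own throwaway code, not part of OpenCV) that just evaluates the scale expression from the snippet above for a couple of illustrative parameter choices; the helper name and the example values are mine, only the formula is copied from the pasted source:

#include <cstdio>

// Throwaway helper mirroring the scale computation in cornerEigenValsVecs.
static double computeScale(int aperture_size, int block_size, bool depthIs8U)
{
    double scale = (double)(1 << (aperture_size - 1)) * block_size;
    if (depthIs8U)
        scale *= 255.;
    return 1. / scale;
}

int main()
{
    // float input, aperture_size = 1, block_size = 1 -> scale = 1.0
    std::printf("%g\n", computeScale(1, 1, false));
    // 8-bit input, aperture_size = 3, block_size = 3 -> 1 / (4 * 3 * 255)
    std::printf("%g\n", computeScale(3, 3, true));
    return 0;
}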


Comments

scale is only 1 in the case where both aperture_size and block_size are 1

berak ( 2013-10-06 08:49:11 -0600 )

What if aperture_size and block_size are not 1, e.g. aperture_size = 3, block_size = 5? I can't understand why they normalize the result like that.

stereomatching ( 2013-10-06 08:57:21 -0600 )
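For those specific values, simply plugging them into the expression from the question gives the following (a quick arithmetic check only, not an explanation of why OpenCV normalizes this way):

#include <cstdio>

int main()
{
    // Example values from the comment above: aperture_size = 3,
    // block_size = 5, CV_8U input.
    double scale = (double)(1 << (3 - 1)) * 5;  // 4 * 5 = 20
    scale *= 255.;                              // 20 * 255 = 5100
    scale = 1. / scale;                         // ~1.9608e-04
    std::printf("scale = %g\n", scale);
    return 0;
}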