Implementation of cornerHarris in OpenCV 2
static void
cornerEigenValsVecs( const Mat& src, Mat& eigenv, int block_size,
                     int aperture_size, int op_type, double k=0.,
                     int borderType=BORDER_DEFAULT )
{
    int depth = src.depth();
    double scale = (double)(1 << (aperture_size - 1)) * block_size; // #1 I can't understand this line
    // why not just initialize it as "double scale = 1.0"?
    if( depth == CV_8U )
        scale *= 255.;
    scale = 1./scale;

    Mat Dx, Dy;
    Sobel( src, Dx, CV_32F, 1, 0, aperture_size, scale, 0, borderType );
    Sobel( src, Dy, CV_32F, 0, 1, aperture_size, scale, 0, borderType );
    //......
}
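As a concrete trace of line #1 (my own arithmetic, taking aperture_size = 3 and block_size = 5 as an example, the same values I ask about below), on an 8-bit image:

    scale = (1 << (3 - 1)) * 5      // = 4 * 5 = 20
    scale *= 255.;                  // = 5100
    scale = 1./scale;               // = 1/5100 ≈ 1.96e-4

and this 1/5100 is what ends up being passed to Sobel as its scale argument.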
void cv::cornerHarris( InputArray _src, OutputArray _dst, int blockSize, int ksize, double k, int borderType )
{
    Mat src = _src.getMat();
    _dst.create( src.size(), CV_32F );
    Mat dst = _dst.getMat();
    cornerEigenValsVecs( src, dst, blockSize, ksize, HARRIS, k, borderType );
}
The two functions above are from the OpenCV source on GitHub (I removed some code to make them easier to read). I can't understand why they initialize scale like that (#1); I would simply have initialized it as double scale = 1.0;.
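For context, here is how I'm calling it (my own test code, not from OpenCV; the file name and parameter values are just examples):

    #include <opencv2/opencv.hpp>

    int main()
    {
        // "test.png" is a placeholder input image
        cv::Mat img = cv::imread("test.png", cv::IMREAD_GRAYSCALE);
        cv::Mat response;
        // blockSize = 5, ksize (aperture_size) = 3, k = 0.04
        cv::cornerHarris(img, response, 5, 3, 0.04);
        return 0;
    }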
As far as I can tell, that initial value of scale is 1 only when both aperture_size and block_size are 1 (since (1 << 0) * 1 = 1).
What if aperture_size and block_size are not 1, e.g. aperture_size = 3 and block_size = 5 as in my trace above? I can't understand why they normalize the result like that.
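The only pattern I've noticed so far is that the smoothing half of the separable Sobel kernel sums to 1 << (ksize - 1), i.e. the same factor that appears in line #1. Here is a minimal standalone sketch of my own (not part of the OpenCV source) that prints those kernels:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        for( int ksize = 1; ksize <= 7; ksize += 2 )
        {
            cv::Mat kx, ky;
            // For d/dx, kx holds the derivative part and ky the smoothing part
            cv::getDerivKernels( kx, ky, 1, 0, ksize, false, CV_32F );
            std::cout << "ksize = " << ksize
                      << ", smoothing kernel = " << cv::Mat(ky.t())
                      << ", sum = " << cv::sum(ky)[0] << std::endl;
        }
        return 0;
    }

For ksize = 1, 3, 5, 7 the smoothing kernel sums come out as 1, 4, 16, 64, i.e. 1 << (ksize - 1). That matches the 1 << (aperture_size - 1) part of the divisor, and I'd guess block_size and the 255 factor play a similar role for the later block averaging and the 8-bit value range, but I haven't been able to confirm this.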