I've been working on a BRISQUE IQA implementation in both Python and C++ for a while now. The C++ source contains the following block:
```cpp
int scalenum = 2;
for (int itr_scale = 1; itr_scale <= scalenum; itr_scale++)
{
    Size dst_size(orig_bw.cols / cv::pow((double)2, itr_scale - 1),
                  orig_bw.rows / cv::pow((double)2, itr_scale - 1));
    Mat imdist_scaled;
    resize(orig_bw, imdist_scaled, dst_size, 0, 0, INTER_CUBIC);
    imdist_scaled.convertTo(imdist_scaled, CV_64FC1, 1.0 / 255.0);

    // compute mu (local mean)
    Mat mu(imdist_scaled.size(), CV_64FC1, 1);
    GaussianBlur(imdist_scaled, mu, Size(7, 7), 1.166);
    Mat mu_sq(imdist_scaled.size(), CV_64FC1, 1);
    mu_sq = mu.mul(mu);

    // compute sigma
    Mat sigma(imdist_scaled.size(), CV_64FC1, 1);
    sigma = imdist_scaled.mul(imdist_scaled);
    GaussianBlur(sigma, sigma, Size(7, 7), 1.166);
    subtract(sigma, mu_sq, sigma);
    cv::pow(sigma, double(0.5), sigma);

    // compute structdis = (x - mu) / sigma
    add(sigma, Scalar(1.0 / 255), sigma);
    //cvAddS(sigma, cvScalar(1.0/255), sigma);
    Mat structdis(imdist_scaled.size(), CV_64FC1, 1);
    subtract(imdist_scaled, mu, structdis);
    divide(structdis, sigma, structdis);
    //cvDiv(structdis, sigma, structdis);

    // compute AGGD fit
    double lsigma_best, rsigma_best, gamma_best;
    structdis = AGGDfit(structdis, lsigma_best, rsigma_best, gamma_best);
```
So nothing major is happening above: just Gaussian blur, addition, subtraction, division and multiplication. I tried to convert this block to Python as follows:
```python
import cv2
import numpy as np

scalenum = 2
feat = []
im_original = im_.copy()   # im_ is the grayscale input image loaded earlier
for itr_scale in range(scalenum):
    im = im_original.copy()
    im = im / 255.0

    # local mean mu (allocated first, then overwritten by the blur result)
    mu = np.zeros((im.shape[0], im.shape[1]), dtype="float64")
    mu += 255.0
    mu_ = cv2.GaussianBlur(im, (7, 7), 1.166)
    mu = mu_.copy()
    mu_sq = mu * mu

    # local sigma
    sigma = im * im
    sigma = cv2.GaussianBlur(sigma, (7, 7), 1.166)
    sigma = mu_sq - sigma
    sigma = abs(sigma) ** 0.5
    sigma = sigma + 1.0 / 255

    # structdis
    structdis = mu - im
    structdis /= sigma
    # equivalent one-liner version:
    # sigma = np.sqrt(abs(cv2.GaussianBlur(im * im, (7, 7), 1.166) - mu_sq))
    # structdis = (mu - im) / (sigma + (1.0 / 255))

    structdis = AGGDfit(structdis)
```
Now, the AGGDfit function has some operations where it counts the number of positive and the number of negative pixel values in structdis. Between the C++ and Python versions, both of these counts differ by a small amount (around 300-400). Why would that be? Can the output of GaussianBlur differ between the C++ and Python APIs?
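To narrow down whether the mismatch comes from GaussianBlur itself (e.g. border handling) or from the arithmetic around it, one option is to dump structdis (or mu and sigma) from the C++ run and compare it element-wise against the Python result, together with the sign counts that AGGDfit uses. The sketch below is only a diagnostic idea, not part of either code base: the `structdis_cpp.npy` file name, the `compare_maps` helper, and the assumption that the C++ Mat has been exported to a NumPy-readable file are all mine.

```python
import numpy as np

def compare_maps(structdis_py, cpp_dump_path="structdis_cpp.npy"):
    """Diagnostic sketch: compare the Python structdis map against an array
    dumped from the C++ run (assumed to be saved as .npy) and report the
    positive/negative counts that feed AGGDfit's left/right split."""
    structdis_cpp = np.load(cpp_dump_path)

    pos_py, neg_py = np.sum(structdis_py > 0), np.sum(structdis_py < 0)
    pos_cpp, neg_cpp = np.sum(structdis_cpp > 0), np.sum(structdis_cpp < 0)
    print("positive: python", int(pos_py), "c++", int(pos_cpp))
    print("negative: python", int(neg_py), "c++", int(neg_cpp))

    # Element-wise difference shows whether the divergence is spread evenly
    # or concentrated near the image borders.
    diff = np.abs(structdis_py - structdis_cpp)
    print("max abs diff:", diff.max())
    print("mean abs diff, first/last 3 rows:", diff[:3].mean(), diff[-3:].mean())
```

If the differences cluster along the borders, the border handling of the two GaussianBlur calls (the `borderType` argument in cv2.GaussianBlur and its C++ counterpart) would be the first thing to check.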