OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018.

Finding nearest non-zero pixel
http://answers.opencv.org/question/125174/finding-nearest-non-zero-pixel/

I've got a binary image `noObjectMask` (`CV_8UC1`) and a given point `objectCenter` (`cv::Point`). If `objectCenter` is a zero-value pixel, I need to find the nearest non-zero pixel, starting from the given point.
The number of non-zero points in the whole image can be large (even up to 50%), so calculating distances for every point returned by `cv::findNonZero` seems suboptimal. Since the nearest non-zero pixel is most likely in the close neighborhood, I currently use:
    # my prototype script in Python; the final version will be implemented in C++
    if noObjectMask[objectCenter[1], objectCenter[0]] == 0:
        # if objectCenter is a zero-value pixel, take sequentially larger
        # neighborhood ROIs (radius r) until one contains a non-zero pixel
        for r in range(noObjectMask.shape[1] // 2):  # integer division for Python 3
            # rows (top/bottom) and columns (left/right) of the ROI,
            # clamped to the image bounds so slices never go negative
            rectT = max(objectCenter[1] - r - 1, 0)
            rectB = objectCenter[1] + r
            rectL = max(objectCenter[0] - r - 1, 0)
            rectR = objectCenter[0] + r
            # Pythonic way of taking a ROI: noObjectMask(cv::Rect(...))
            rect = noObjectMask[rectT:rectB, rectL:rectR]
            if cv2.countNonZero(rect) > 0:
                break
        nonZeroNeighbours = cv2.findNonZero(rect)
        # calculate the distances between objectCenter and each of
        # nonZeroNeighbours and choose the closest one
This works okay, as in my images the non-zero pixels are typically in the closest neighborhood (`r` <= 10px), but the processing time grows dramatically with the distance to the closest pixel: each call to `countNonZero` re-counts all the pixels already examined in previous iterations. This could be improved by incrementing the radius `r` by more than one, but it still looks a bit clumsy to me.
How can this procedure be improved? Any ideas? Thanks!

mstankie, Tue, 07 Feb 2017 07:56:46 -0600
http://answers.opencv.org/question/125174/

Facerecognizer
http://answers.opencv.org/question/4420/facerecognizer/
Hello,
I am using the EigenFaceRecognizer, with the following instructions to get the label prediction:
    Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    // some variables for the predicted label and associated confidence (e.g. distance)
    int predicted_label = -1;
    double predicted_confidence = 0.0;
    // get the prediction and associated confidence from the model
    model->predict(img, predicted_label, predicted_confidence);
In predicted_label I get the label of the person the face recognizer considers nearest to the one in the test image.
My question is: is there any way to get the 5 closest labels for the test image? For example:
1. label "1", predicted_confidence = 90 %
2. label "1", predicted_confidence = 85 %
3. label "4", predicted_confidence = 80 %
4. label "20", predicted_confidence = 70 %
5. label "7", predicted_confidence = 65 %
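Since `predict()` only returns the single best match, one way to build such a ranking is to compute the distances to all projected training samples yourself and take the k smallest. A numpy sketch of just that ranking step (the names and the toy 1-D projections are illustrative, not from the original post):

```python
import numpy as np

def top_k_matches(query_proj, train_projs, train_labels, k=5):
    """Rank training samples by Euclidean distance in the eigenface subspace."""
    dists = np.linalg.norm(train_projs - query_proj, axis=1)
    order = np.argsort(dists)[:k]          # indices of the k nearest samples
    return [(train_labels[i], float(dists[i])) for i in order]

# Toy example: 1-D "projections" for clarity; real ones come from the PCA step.
projs = np.array([[0.0], [1.0], [5.0], [2.0], [9.0]])
labels = [1, 1, 4, 20, 7]
print(top_k_matches(np.array([0.4]), projs, labels, k=3))
```

Newer opencv-contrib builds also expose a collector interface (`predict_collect` with a `StandardCollector`) that gathers the distance for every training label in one prediction pass, which avoids reimplementing the projection yourself.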
Thank you.

chalagadur, Wed, 21 Nov 2012 14:20:34 -0600
http://answers.opencv.org/question/4420/

iterative closest point
http://answers.opencv.org/question/1924/iterative-closest-point/

I want to implement the ICP (iterative closest point) algorithm:
1. Associate points by the nearest neighbor criteria.
2. Estimate transformation parameters using a mean square cost function.
3. Transform the points using the estimated parameters.
4. Iterate (re-associate the points and so on).
![](http://dl.dropbox.com/u/8841028/ICP/center_alignment.png)
For every point in the 1st set I find the nearest point in the 2nd set, but I don't understand how to do the 2nd step.
I tried the following code, but it doesn't work; maybe I need to reject some pairs?
    vector<Point> vec_pair;
    for (size_t i = 0; i < vec_M.size(); ++i)
    {
        double min_dist = DBL_MAX;   // requires <cfloat>; INT_MAX was too small a habit, not a bug
        int id = -1;
        for (size_t j = 0; j < vec_T.size(); ++j)
        {
            // Euclidean distance between vec_M[i] and vec_T[j]
            double metric = sqrt(double(vec_T[j].x - vec_M[i].x) * (vec_T[j].x - vec_M[i].x) +
                                 double(vec_T[j].y - vec_M[i].y) * (vec_T[j].y - vec_M[i].y));
            if (min_dist > metric)
            {
                min_dist = metric;
                id = (int)j;
            }
        }
        line(img, vec_M[i], vec_T[id], Scalar(0, 0, 255));
        vec_pair.push_back(vec_T[id]);
    }
    // false = rigid (rotation/translation/uniform scale), not a full affine transform
    Mat m = estimateRigidTransform(vec_M, vec_pair, false);
    cout << m;
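Step 2 above (estimating the rigid transform from matched pairs) also has a closed-form SVD solution, the Kabsch/Procrustes method, which is what ICP implementations typically use. A minimal numpy sketch, independent of the code above:

```python
import numpy as np

def estimate_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t (Kabsch)."""
    src_c = src - src.mean(axis=0)        # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: rotate a unit square by 30 degrees, shift it, and recover the transform.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = estimate_rigid(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [2.0, -1.0]))  # -> True True
```

With noisy correspondences, rejecting pairs whose distance exceeds a threshold (or a multiple of the median distance) before this estimation step usually makes the iteration converge much more reliably.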
mrgloom, Thu, 30 Aug 2012 07:09:57 -0500
http://answers.opencv.org/question/1924/