Android application to recognize and track an object from the camera gives too many false positives?
I'm really new to OpenCV. I followed this, but the detection is too slow.
mRgba is the Mat from the camera input (mGray is the grayscale frame used for detection) and mRef is the Mat of the reference object image.
// ORB detector/extractor with a brute-force matcher
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

MatOfKeyPoint keypoints = new MatOfKeyPoint();
MatOfKeyPoint keypointsRef = new MatOfKeyPoint();
Mat extract = new Mat();
Mat extractRef = new Mat();

// detect keypoints and compute descriptors for the camera frame and the reference image
detector.detect(mGray, keypoints);
detector.detect(mRef, keypointsRef);
extractor.compute(mGray, keypoints, extract);
extractor.compute(mRef, keypointsRef, extractRef);
// match reference descriptors (query) against camera-frame descriptors (train)
MatOfDMatch matchs = new MatOfDMatch();
matcher.match(extractRef, extract, matchs);
List<DMatch> matchesList = matchs.toList();
List<KeyPoint> keypoints_RefList = keypointsRef.toList();
List<KeyPoint> keypoints_List = keypoints.toList();

// smallest descriptor distance over all matches
double min_dist = Double.MAX_VALUE;
for (int i = 0; i < matchesList.size(); i++) {
    if (matchesList.get(i).distance < min_dist) min_dist = matchesList.get(i).distance;
}
// keep only matches that are close to the best one
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
for (int i = 0; i < matchesList.size(); i++) {
    if (matchesList.get(i).distance <= (3 * min_dist)) {
        good_matches.addLast(matchesList.get(i));
    }
}
// collect the matched point coordinates (queryIdx -> reference image, trainIdx -> camera frame)
LinkedList<Point> objList = new LinkedList<Point>();
LinkedList<Point> sceneList = new LinkedList<Point>();
for (int i = 0; i < good_matches.size(); i++) {
    objList.addLast(keypoints_RefList.get(good_matches.get(i).queryIdx).pt);
    sceneList.addLast(keypoints_List.get(good_matches.get(i).trainIdx).pt);
}
MatOfPoint2f obj = new MatOfPoint2f();
MatOfPoint2f scene = new MatOfPoint2f();
obj.fromList(objList);
scene.fromList(sceneList);
Mat hg = Calib3d.findHomography(obj, scene, 8, 2, new Mat());  // 8 == Calib3d.RANSAC, 2 is the RANSAC reprojection threshold
// corners of the reference image, to be mapped into the camera frame
Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
Mat scene_corners = new Mat(4,1,CvType.CV_32FC2);
obj_corners.put(0, 0, new double[] {0, 0});
obj_corners.put(1, 0, new double[] {mRef.cols(), 0});
obj_corners.put(2, 0, new double[] {mRef.cols(), mRef.rows()});
obj_corners.put(3, 0, new double[] {0, mRef.rows()});
// project the reference corners into the scene and draw the border of the detected object
Core.perspectiveTransform(obj_corners, scene_corners, hg);
Core.line(mRgba, new Point(scene_corners.get(0,0)), new Point(scene_corners.get(1,0)), new Scalar(0, 255, 0),3);
Core.line(mRgba, new Point(scene_corners.get(1,0)), new Point(scene_corners.get(2,0)), new Scalar(0, 255, 0),3);
Core.line(mRgba, new Point(scene_corners.get(2,0)), new Point(scene_corners.get(3,0)), new Scalar(0, 255, 0),3);
Core.line(mRgba, new Point(scene_corners.get(3,0)), new Point(scene_corners.get(0,0)), new Scalar(0, 255, 0),3);
This goes badly wrong: it detects almost anything as the object and draws the border at seemingly random points.
Is there a way to refine the detection?
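For what it's worth, one refinement that usually cuts down ORB false positives is to match with Hamming distance (ORB descriptors are binary, so DescriptorMatcher.BRUTEFORCE_HAMMING fits better than plain BRUTEFORCE) and to replace the 3 * min_dist filter with a k-NN match plus Lowe's ratio test. A minimal sketch, reusing extract/extractRef from above; the 0.75 ratio is an arbitrary value to tune, and ratioMatches would stand in for good_matches:

DescriptorMatcher hammingMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
List<MatOfDMatch> knnMatches = new LinkedList<MatOfDMatch>();
hammingMatcher.knnMatch(extractRef, extract, knnMatches, 2);  // two nearest neighbours per reference descriptor

LinkedList<DMatch> ratioMatches = new LinkedList<DMatch>();
for (MatOfDMatch pair : knnMatches) {
    DMatch[] m = pair.toArray();
    // keep a match only if it is clearly better than its runner-up
    if (m.length >= 2 && m[0].distance < 0.75f * m[1].distance) {
        ratioMatches.addLast(m[0]);
    }
}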
Can histogram equalization, dilation, or smoothing be applied to the images first?
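They can be applied before detection, although whether they help depends on the lighting and the texture of the object. A minimal sketch, assuming mGray and mRef are single-channel 8-bit Mats as equalizeHist requires:

Imgproc.equalizeHist(mGray, mGray);                    // spread the contrast so ORB finds stronger corners
Imgproc.GaussianBlur(mGray, mGray, new Size(3, 3), 0); // mild smoothing to suppress sensor noise
// do the same to mRef once, before detecting its keypoints

Dilation (Imgproc.dilate) is normally applied to binary masks rather than to the grayscale input of a feature detector.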
And what happens if no keypoints are found at all?
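If either image yields no keypoints, the descriptor Mats stay empty and the matching/homography steps fail, so it is worth bailing out early. A minimal sketch, assuming this code runs inside onCameraFrame() and mRgba is the Mat returned for display:

if (keypoints.empty() || keypointsRef.empty() || extract.empty() || extractRef.empty()) {
    return mRgba;  // nothing to match in this frame, just show the camera image
}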
I can draw the keypoints of both the reference image and the live camera image with Features2d.drawKeypoints(), but Features2d.drawMatches() does not produce any output.
I used it like this:

// works
Features2d.drawKeypoints(mGray, keypoints, mRef);

// does not work
Features2d.drawMatches(mGray, keypoints, mRgba, keypointsRef, gm, mRef, new Scalar(255, 0, 0), new Scalar(0, 0, 255), new MatOfByte(), 2);
and I returned mRef, even without the homography and perspectiveTransform calculation.
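One likely reason drawMatches() shows nothing is the argument order: the first image/keypoint pair must be the query side of the matches and the second the train side. With matcher.match(extractRef, extract, ...) the reference image is the query and the camera frame is the train. The result should also go into a fresh output Mat rather than into mRef, and since it is a side-by-side image wider than either input it cannot simply be returned as the camera frame (drawMatches also expects 1- or 3-channel 8-bit images, so an RGBA frame may need a cvtColor first). A sketch under those assumptions, with gm being the MatOfDMatch built from good_matches:

Mat matchesImg = new Mat();
Features2d.drawMatches(mRef, keypointsRef,  // query image and its keypoints
        mGray, keypoints,                   // train (camera) image and its keypoints
        gm, matchesImg,
        new Scalar(255, 0, 0), new Scalar(0, 0, 255),
        new MatOfByte(), Features2d.NOT_DRAW_SINGLE_POINTS);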
I'm confused here: are the homography and the perspective transform both essential just to draw the border of the object?
findHomography needs >= 4 matches. Maybe you sometimes don't have enough.
I added that condition before forming objList and sceneList from the queryIdx and trainIdx values.
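For reference, a sketch of that kind of guard, reusing the variable names from the code above: RANSAC needs at least 4 point pairs, and counting the inliers it reports is a cheap extra check for rejecting frames where the object is not really there (the threshold of 10 is arbitrary and needs tuning):

if (good_matches.size() >= 4) {  // RANSAC homography needs at least 4 correspondences
    // ... build objList/sceneList from queryIdx/trainIdx as above ...
    obj.fromList(objList);
    scene.fromList(sceneList);
    Mat inlierMask = new Mat();
    Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 2, inlierMask);
    // only draw the border when the homography exists and enough matches agree with it
    if (!hg.empty() && Core.countNonZero(inlierMask) >= 10) {
        // perspectiveTransform and Core.line drawing as above
    }
}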