Object detection using edge matching on Android [closed]

asked 2016-04-03 14:52:46 -0600 by Sap

updated 2020-12-09 08:43:52 -0600

Hello, I'm having some trouble finding objects in the camera frame using edges or contours (in order to dismiss the object's background) on the Android platform. I've tried finding keypoints in the Canny results of both the template image and the frame and then matching them, but it doesn't work as I expected.

Is there a better way of doing it? If not, please help me figure out what I'm missing.

Applying Canny on the reference image and finding keypoints:

Imgproc.Canny(mReferenceImage, referenceImageGray, 80, 90);
mFeatureDetector.detect(referenceImageGray, mReferenceKeypoints);
mDescriptorExtractor.compute(referenceImageGray, mReferenceKeypoints,
        mReferenceDescriptors);
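
For context, the mFeatureDetector, mDescriptorExtractor and mDescriptorMatcher fields used in these snippets are created roughly as follows (a sketch: ORB is what I use, and the brute-force Hamming matcher is an assumed pairing for ORB's binary descriptors):

import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

// ORB produces binary descriptors, so a brute-force Hamming matcher is the natural fit.
// (These factory constants exist in both the 2.4.x and 3.x Java bindings.)
mFeatureDetector = FeatureDetector.create(FeatureDetector.ORB);
mDescriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
mDescriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);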

Applying Canny on the frame and matching:

Imgproc.Canny(src, mGraySrc, 80, 90);
mFeatureDetector.detect(mGraySrc, mSceneKeypoints);
mDescriptorExtractor.compute(mGraySrc, mSceneKeypoints, mSceneDescriptors);
mDescriptorMatcher.match(mSceneDescriptors, mReferenceDescriptors, mMatches);
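
As an aside, a common alternative to match() plus the min-distance filtering in the function below is knnMatch() with Lowe's ratio test; a sketch (not the code currently used):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.DMatch;        // org.opencv.features2d.DMatch in the 2.4.x bindings
import org.opencv.core.MatOfDMatch;

// Ask for the two best reference matches for every scene descriptor.
List<MatOfDMatch> knnMatches = new ArrayList<MatOfDMatch>();
mDescriptorMatcher.knnMatch(mSceneDescriptors, mReferenceDescriptors, knnMatches, 2);

// Keep a match only if it is clearly better than the runner-up (Lowe's ratio test).
List<DMatch> goodMatches = new ArrayList<DMatch>();
for (MatOfDMatch pair : knnMatches) {
    DMatch[] m = pair.toArray();
    if (m.length >= 2 && m[0].distance < 0.75f * m[1].distance) {
        goodMatches.add(m[0]);
    }
}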

After that, a function is called to find the corners of the object:

private void findSceneCorners() {

    final List<DMatch> matchesList = mMatches.toList();
    if (matchesList.size() < 4) {
        // There are too few matches to find the homography.
        return;
    }

    final List<KeyPoint> referenceKeypointsList =
            mReferenceKeypoints.toList();
    final List<KeyPoint> sceneKeypointsList =
            mSceneKeypoints.toList();

    // Calculate the max and min distances between keypoints.
    double maxDist = 0.0;
    double minDist = Double.MAX_VALUE;
    for (final DMatch match : matchesList) {
        final double dist = match.distance;
        if (dist < minDist) {
            minDist = dist;
        }
        if (dist > maxDist) {
            maxDist = dist;
        }
    }

    // The thresholds for minDist are chosen subjectively
    // based on testing. The unit is not related to pixel
    // distances; it is related to the number of failed tests
    // for similarity between the matched descriptors.
    if (minDist > 50.0) {
        // The target is completely lost.
        // Discard any previously found corners.
        mSceneCorners.create(0, 0, mSceneCorners.type());
        return;
    } else if (minDist >= 25.0) {
        // The target is lost but maybe it is still close.
        // Keep any previously found corners.
        return;
    }

    // Identify "good" keypoints based on match distance.
    final ArrayList<Point> goodReferencePointsList =
            new ArrayList<Point>();
    final ArrayList<Point> goodScenePointsList =
            new ArrayList<Point>();
    final double maxGoodMatchDist = 1.75 * minDist;
    for (final DMatch match : matchesList) {
        if (match.distance < maxGoodMatchDist) {
            goodReferencePointsList.add(
                    referenceKeypointsList.get(match.trainIdx).pt);
            goodScenePointsList.add(
                    sceneKeypointsList.get(match.queryIdx).pt);
        }
    }

    if (goodReferencePointsList.size() < 4 ||
            goodScenePointsList.size() < 4) {
        // There are too few good points to find the homography.
        return;
    }

    // There are enough good points to find the homography.
    // (Otherwise, the method would have already returned.)

    // Convert the matched points to MatOfPoint2f format, as
    // required by the Calib3d.findHomography function.
    final MatOfPoint2f goodReferencePoints = new MatOfPoint2f();
    goodReferencePoints.fromList(goodReferencePointsList);
    final MatOfPoint2f goodScenePoints = new MatOfPoint2f();
    goodScenePoints.fromList(goodScenePointsList);

    // Find the homography.
    final Mat homography = Calib3d.findHomography(
            goodReferencePoints, goodScenePoints);

    // Use the homography to project the reference corner
    // coordinates into scene coordinates.
    Core.perspectiveTransform(mReferenceCorners,
            mCandidateSceneCorners, homography);

    // Convert the scene corners to integer format, as required
    // by the Imgproc.isContourConvex function.
    mCandidateSceneCorners.convertTo(mIntSceneCorners,
            CvType.CV_32S);

    // Check whether the corners form a convex polygon. If not
    // (that is, if the corners form a concave polygon), the
    // detection result is invalid, because no real perspective can
    // make the corners of a rectangular image look like a concave
    // polygon!

    if (Imgproc.isContourConvex(mIntSceneCorners)) {
        // The corners form a convex polygon, so ...
        // (remainder of the method omitted)
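
Note: one robustness tweak that the code above does not use would be to estimate the homography with RANSAC, so that any stray matches surviving the distance filter are rejected as outliers (a sketch):

// Sketch: RANSAC-based homography; 3.0 is the reprojection threshold in pixels.
final Mat homography = Calib3d.findHomography(
        goodReferencePoints, goodScenePoints, Calib3d.RANSAC, 3.0);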

Closed for the following reason: the question is answered, right answer was accepted by sturkmen
close date 2020-09-24 18:51:57.850757

Comments


You should add images of the object and of the scene. Also, which feature detector and descriptor did you use?

Since you use feature detection on edge images, I presume that the object you want to detect is textureless? In my opinion, the problem is that feature matching needs texture, which is exactly what edge images lack. Did you try to display the matches to see if they are correct?

If your object has texture, you should go with classical feature matching. Otherwise, I don't think feature matching on edge images will work.

You could look at approaches based purely on edges, like Chamfer matching, maybe?
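
A rough sketch of that chamfer idea (illustrative names such as sceneGray and templateGray are assumptions, not code from the question): take the distance transform of the scene's edge map and cross-correlate it with the template's edge mask, so the lowest score marks the best placement.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// 1. Edge maps of the scene and the template.
Mat sceneEdges = new Mat(), templateEdges = new Mat();
Imgproc.Canny(sceneGray, sceneEdges, 80, 90);
Imgproc.Canny(templateGray, templateEdges, 80, 90);

// 2. Distance transform of the scene: each pixel holds the distance to the nearest scene edge.
//    (Imgproc.DIST_L2 is named CV_DIST_L2 in the 2.4.x bindings.)
Mat nonEdges = new Mat(), dist = new Mat();
Core.bitwise_not(sceneEdges, nonEdges);
Imgproc.distanceTransform(nonEdges, dist, Imgproc.DIST_L2, 3);

// 3. Cross-correlating the distance map with the (float) template edge mask sums, for every
//    placement, the distances from the template's edge pixels to their nearest scene edges.
Mat templateMask = new Mat(), cost = new Mat();
templateEdges.convertTo(templateMask, CvType.CV_32F, 1.0 / 255.0);
Imgproc.matchTemplate(dist, templateMask, cost, Imgproc.TM_CCORR);

// 4. The lowest cost is the best (directed) chamfer match of the template in the scene.
Core.MinMaxLocResult mmr = Core.minMaxLoc(cost);
Point bestTopLeft = mmr.minLoc;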

Eduardo ( 2016-04-03 16:26:03 -0600 )

Classical features like SIFT, SURF, etc. need texture information.

But it seems that there is a feature descriptor (not present in OpenCV) called BOLD, designed for detecting texture-less objects.

In OpenCV there is cv::linemod, an object detector using the LINE template matching algorithm with any set of modalities, but I am not familiar with it.

I don't know whether other options exist.

Eduardo ( 2016-04-03 16:32:32 -0600 )

I used ORB for the detection. The items are kitchen objects such as a fork, a spoon, a plate, etc. I used edges because I need to ignore the background. I tried regular matching and it doesn't work across different backgrounds or lighting conditions. Edges take these factors out of the equation - that's why I tried them. Also, I tried to display the matches and for some reason nothing showed up.

Sap ( 2016-04-03 16:41:11 -0600 )

Again, most keypoint detectors rely on grayscale information, like gradients. You should not use Canny here, since it destroys this information (and generates "edges" instead).
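
In other words (a minimal sketch, assuming the camera frame src arrives as RGBA): detect on the plain grayscale frame rather than on its Canny output:

// Convert the RGBA camera frame to grayscale and detect on it directly,
// so ORB can use the gradient information that Canny would discard.
Imgproc.cvtColor(src, mGraySrc, Imgproc.COLOR_RGBA2GRAY);
mFeatureDetector.detect(mGraySrc, mSceneKeypoints);
mDescriptorExtractor.compute(mGraySrc, mSceneKeypoints, mSceneDescriptors);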

berak ( 2016-04-04 01:55:33 -0600 )