
WhoAmI's profile - activity

2018-05-07 09:51:20 -0600 received badge  Notable Question (source)
2017-06-19 14:27:03 -0600 received badge  Popular Question (source)
2016-09-21 07:36:11 -0600 commented question OpenCV Assertion Failed Error - Perspective Transform

Any help on this, friends?

2016-09-21 03:43:07 -0600 commented question OpenCV Assertion Failed Error - Perspective Transform

No problem. I will try any approach that is suggested here.

2016-09-21 03:22:56 -0600 commented question OpenCV Assertion Failed Error - Perspective Transform

Same error, even if I try it with CV_32FC1.

2016-09-21 03:07:00 -0600 asked a question OpenCV Assertion Failed Error - Perspective Transform

Before posting this question I followed the answers posted in the thread below, but they didn't help me, even though my question is exactly the same one.

http://answers.opencv.org/question/18...

I am using OpenCV for Android, version 3.1.0. I tried to use the AKAZE detector and AKAZE descriptor. When I run the code on my emulator I get the error below.

    OpenCV Error: Assertion failed (scn + 1 == m.cols) in void cv::perspectiveTransform(cv::InputArray, cv::OutputArray, cv::InputArray),
    file /Volumes/Linux/builds/master_pack-android/opencv/modules/core/src/matmul.cpp, line 2125

    core::perspectiveTransform_10() caught cv::Exception: /Volumes/Linux/builds/master_pack-android/opencv/modules/core/src/matmul.cpp:2125:
    error: (-215) scn + 1 == m.cols in function void cv::perspectiveTransform(cv::InputArray, cv::

I am using:

    private final Mat firstCorners = new Mat(4, 1, CvType.CV_32FC2);
    private final Mat secondCorners = new Mat(4, 1, CvType.CV_32FC2);

    final Mat homography = Calib3d.findHomography(first, second, Calib3d.RANSAC, 1);
    Core.perspectiveTransform(firstCorners, secondCorners, homography);

I used CV_32FC2 as posted in the answer in the thread above, but I am still getting the same error.

Any help on this?
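For reference, the assertion scn + 1 == m.cols means that the channel count of the input points plus one must equal the column count of the transform matrix, so 2-channel points require a 3x3 matrix. One common cause (an assumption here, not a confirmed fix for this exact case): findHomography returns an empty Mat when it cannot fit a model, for example with fewer than 4 usable matches, and an empty matrix has 0 columns, which trips exactly this check. A minimal sketch of a guard:

    // A sketch, not the poster's actual code: validate the homography before
    // using it. An empty Mat from findHomography (too few good matches) has
    // 0 columns and fails the "scn + 1 == m.cols" check inside
    // perspectiveTransform.
    Mat homography = Calib3d.findHomography(first, second, Calib3d.RANSAC, 1);
    if (!homography.empty() && homography.rows() == 3 && homography.cols() == 3) {
        // firstCorners is 4x1 CV_32FC2, so scn == 2 and m.cols must be 3.
        Core.perspectiveTransform(firstCorners, secondCorners, homography);
    } else {
        Log.d(TAG, "findHomography failed; skipping perspectiveTransform");
    }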

2016-09-20 07:50:01 -0600 commented question How to use AKAZE descriptor extractor with KAZE keypoint detector

Did you find a solution for this? I got the same error. Any help from your end?

2016-09-20 07:40:58 -0600 asked a question BRISK taking too much time

Hello,

I developed an Android application using the BRISK detector and descriptor. When I try to load the application onto my mobile it takes too much time to install.

Once the app is installed, when I try to open it, it loads for 5 minutes and then shuts down automatically.

Does BRISK not work on mobiles like the Nexus 5?

I am using the BRISK detector and descriptor along with a BRUTEFORCE matcher.

2016-09-12 06:53:53 -0600 asked a question Saving Mat Image in Android Gallery

I want to save Mat images to the Android gallery automatically. When I run my code in an emulator, the resulting Mat image is not saved to the Gallery.

The code for saving images is below:

String folder = Environment.getExternalStorageDirectory().getPath() + "/Gallery";
String timestamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());  // format(), not Format()
Highgui.imwrite(folder + "/" + "Name_" + timestamp + ".png", outputImage);  // note: imwrite does not create missing directories

It didn't work, so I searched for an answer in this forum and tried the answer from the post below:

http://answers.opencv.org/question/68...

> //subimg -> your frame

    Bitmap bmp = null;
    try {
        bmp = Bitmap.createBitmap(subimg.cols(), subimg.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(subimg, bmp);
    } catch (CvException e) {
        Log.d(TAG, e.getMessage());
    }

    subimg.release();


    FileOutputStream out = null;

    String filename = "frame.png";


    File sd = new File(Environment.getExternalStorageDirectory() + "/frames");
    boolean success = true;
    if (!sd.exists()) {
        success = sd.mkdir();
    }
    if (success && bmp != null) {  // guard: bmp stays null if createBitmap failed above
        File dest = new File(sd, filename);

        try {
            out = new FileOutputStream(dest);
            bmp.compress(Bitmap.CompressFormat.PNG, 100, out); // bmp is your Bitmap instance
            // PNG is a lossless format, the compression factor (100) is ignored

        } catch (Exception e) {
            e.printStackTrace();
            Log.d(TAG, e.getMessage());
        } finally {
            try {
                if (out != null) {
                    out.close();
                    Log.d(TAG, "OK!!");
                }
            } catch (IOException e) {
                Log.d(TAG, e.getMessage() + "Error");
                e.printStackTrace();
            }
        }
    }

It didn't work for me either.

I checked the permissions in the AndroidManifest file and they are fine.

Any words from you?
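One thing worth checking (an assumption on my part, since the code above does write the file): Android's Gallery only shows files that the media store has indexed, so a freshly written image can sit on disk without ever appearing. A minimal sketch that asks the media scanner to index the file right after saving, assuming context is a valid android.content.Context and path is the absolute path of the file just written:

    // A sketch: trigger a media scan so the saved file shows up in the Gallery.
    // Requires: import android.media.MediaScannerConnection;
    MediaScannerConnection.scanFile(
            context,                        // any valid Context
            new String[] { path },          // the file(s) just written
            new String[] { "image/png" },   // MIME type(s)
            null);                          // no completion callback needed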

2016-09-06 05:16:40 -0600 commented answer Saving OpenCV images in Android

Hey @berak, thanks for your comment. The program executes, but the image does not appear in my Gallery. Any word on this?

2016-08-18 08:56:09 -0600 asked a question Saving OpenCV images in Android

I have an image produced with drawMatches() between two images (one from the database and the other captured by the user's mobile camera). I need to store this image automatically in the gallery without deleting the prior one.

I know that on desktop I can store it using:

Highgui.imwrite("C://Users//name.png", resImg);

Can anyone help me out with how to store this image in the Android gallery, please?
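In OpenCV 3.x the desktop call above becomes Imgcodecs.imwrite (Highgui was split into Imgcodecs and Videoio in 3.0). A minimal sketch for Android, with a timestamped filename so earlier saves are kept; the folder name here is an assumption, and the directory must exist before imwrite is called:

    // A sketch (illustrative paths/names): save the drawMatches result without
    // overwriting earlier images.
    String dir = Environment.getExternalStorageDirectory().getPath() + "/Pictures/matches";
    new File(dir).mkdirs();  // imwrite does not create missing directories
    String timestamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US).format(new Date());
    String path = dir + "/match_" + timestamp + ".png";
    Imgcodecs.imwrite(path, resImg);  // resImg: the Mat produced by drawMatches

As in the 2016-09-12 post above, a media scan may still be needed before the Gallery app lists the file.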

2016-08-16 08:42:44 -0600 commented question Image recognition on Android

If you are familiar with Java then you can go with it. C++ provides faster execution and better memory management, as far as I know.

2016-08-16 08:36:43 -0600 commented question OpenCV print descriptor value

ORB gives a 256-bit value, which is 32 bytes, not 128 bits.

2016-08-09 08:47:04 -0600 answered a question ORB Feature Descriptor Official Paper Explanation

Any help on this?

2016-08-09 02:13:22 -0600 commented answer ORB based comparison -- How to remove the junk matches in the second image

Have you tried RANSAC with ORB to remove the junk matches?

2016-08-08 06:47:24 -0600 commented question ORB Feature Descriptor Official Paper Explanation

Can anyone explain this to me?

2016-08-05 07:50:13 -0600 asked a question DrawMatching between two images - image recognition

I was trying to show the matched keypoints between two images (one captured by my camera and the other from the database).

Can anyone help me write the drawMatches call into my code, in order to show (or save directly into the mobile's storage) the matched lines between the two images?

Here is my code:

    public final class ImageDetectionFilter {

        // Flag to draw the target image's corners.
        private boolean flagDraw;

        // The reference image (this detector's target).
        private final Mat mReferenceImage;

        // Features of the reference image.
        private final MatOfKeyPoint mReferenceKeypoints = new MatOfKeyPoint();

        // Descriptors of the reference image's features.
        private final Mat mReferenceDescriptors = new Mat();

        // The corner coordinates of the reference image, in pixels.
        // CvType defines the color depth, number of channels, and
        // channel layout in the image. Here, each point is represented
        // by two 32-bit floats.
        private final Mat mReferenceCorners = new Mat(4, 1, CvType.CV_32FC2);

        // Features of the scene (the current frame).
        private final MatOfKeyPoint mSceneKeypoints = new MatOfKeyPoint();
        // Descriptors of the scene's features.
        private final Mat mSceneDescriptors = new Mat();
        // Tentative corner coordinates detected in the scene, in
        // pixels.
        private final Mat mCandidateSceneCorners =
                new Mat(4, 1, CvType.CV_32FC2);
        // Good corner coordinates detected in the scene, in pixels.
        private final Mat mSceneCorners = new Mat(4, 1, CvType.CV_32FC2);
        // The good detected corner coordinates, in pixels, as integers.
        private final MatOfPoint mIntSceneCorners = new MatOfPoint();

        // A grayscale version of the scene.
        private final Mat mGraySrc = new Mat();
        // Tentative matches of scene features and reference features.
        private final MatOfDMatch mMatches = new MatOfDMatch();

        // A feature detector, which finds features in images.
        private final FeatureDetector mFeatureDetector =
                FeatureDetector.create(FeatureDetector.ORB);
        // A descriptor extractor, which creates descriptors of
        // features.
        private final DescriptorExtractor mDescriptorExtractor =
                DescriptorExtractor.create(DescriptorExtractor.ORB);
        // A descriptor matcher, which matches features based on their
        // descriptors.
        private final DescriptorMatcher mDescriptorMatcher = DescriptorMatcher
                .create(DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);

        // The color of the outline drawn around the detected image.
        private final Scalar mLineColor = new Scalar(0, 255, 0);

        public ImageDetectionFilter(final Context context,
                final int referenceImageResourceID) throws IOException {

            // Load the reference image from the app's resources.
            // It is loaded in BGR (blue, green, red) format.
            mReferenceImage = Utils.loadResource(context, referenceImageResourceID,
                    Imgcodecs.CV_LOAD_IMAGE_COLOR);

            // Create grayscale and RGBA versions of the reference image.
            final Mat referenceImageGray = new Mat();
            Imgproc.cvtColor(mReferenceImage, referenceImageGray,
                    Imgproc.COLOR_BGR2GRAY);
            Imgproc.cvtColor(mReferenceImage, mReferenceImage,
                    Imgproc.COLOR_BGR2RGBA);

            // Store the reference image's corner coordinates, in pixels.
            mReferenceCorners.put(0, 0, new double[] { 0.0, 0.0 });
            mReferenceCorners.put(1, 0,
                    new double[] { referenceImageGray.cols(), 0.0 });
            mReferenceCorners.put(2, 0,
                    new double[] { referenceImageGray.cols(),
                            referenceImageGray.rows() });
            mReferenceCorners.put(3, 0,
                    new double[] { 0.0, referenceImageGray.rows() });

            // Detect the reference features and compute their
            // descriptors.
            mFeatureDetector.detect(referenceImageGray,
                    mReferenceKeypoints);
            mDescriptorExtractor.compute(referenceImageGray,
                    mReferenceKeypoints, mReferenceDescriptors);
        }

        public void apply(Mat src, Mat dst) {

            // Convert the scene to grayscale.
            Imgproc.cvtColor(src, mGraySrc, Imgproc.COLOR_RGBA2GRAY);

            // Detect the scene features, compute their descriptors,
            // and match the scene descriptors to reference descriptors.
            mFeatureDetector.detect(mGraySrc, mSceneKeypoints);
            mDescriptorExtractor.compute(mGraySrc, mSceneKeypoints,
                    mSceneDescriptors);
            mDescriptorMatcher.match(mSceneDescriptors,
                    mReferenceDescriptors, mMatches);

            findSceneCorners();

            // If ...
(more)
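Since the question asks specifically for drawMatches, here is a minimal sketch of how it could slot into the class above. This is my own illustration, not the book's code; it assumes a grayscale copy of the reference image is kept in a field such as mReferenceImageGray, since referenceImageGray above is local to the constructor:

    // A sketch: visualize the matches computed in apply(). Grayscale inputs are
    // used because drawMatches handles 1- and 3-channel images; the RGBA
    // mReferenceImage would need converting first. The matcher's query set was
    // the scene, so the scene image and keypoints go first.
    Mat matchImage = new Mat();
    Features2d.drawMatches(
            mGraySrc, mSceneKeypoints,                 // current frame + its keypoints
            mReferenceImageGray, mReferenceKeypoints,  // reference image + its keypoints (assumed field)
            mMatches,                                  // matches from mDescriptorMatcher.match(...)
            matchImage);
    Imgcodecs.imwrite("/sdcard/match_result.png", matchImage);  // or convert to a Bitmap to display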
2016-08-04 08:58:33 -0600 asked a question ORB Feature Descriptor Official Paper Explanation

I was just reading the official ORB paper by Ethan Rublee (Official Paper), and I find the section "4.3 Learning Good Binary Features" somewhat hard to understand.

I was searching the Internet to dig deeper into it and found the paragraph below, but I still don't have a practical explanation of it. Can any of you explain it to me in simple terms?

"Given a local image patch in size of m × m, and suppose the local window (i.e., the box filter used in BRIEF) used for intensity test is of size r × r , there are N = (m − r )2 such local windows.

Each two of them can define an intensity test, so we have C2N bit features. In the original implementation of ORB, m is set to 31, generating 228,150 binary tests. After removing tests that overlap, we finally have a set of 205,590 candidate bit features. Based on a training set, ORB selects at most 256 bits according to Greedy algorithm."

What I am getting from the official paper and from the above paragraph is this:

We have a patch size of 31×31 and select a window size of 5×5. We will have N = (31 − 5)^2 = 676 possible sub-windows. I am not getting the lines marked in bold. What does it mean that, by removing tests that overlap, we get 205,590 bit features?
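For what it's worth, the counting in the quoted paragraph checks out; a small sketch of the arithmetic (my own, not from the paper's code):

    // Each test compares the box-filtered intensities of two 5x5 sub-windows
    // inside the 31x31 patch; an unordered pair of windows defines one test.
    int m = 31, r = 5;
    int n = (m - r) * (m - r);             // 676 possible sub-window positions
    long tests = (long) n * (n - 1) / 2;   // C(676, 2) = 228150 candidate tests
    // Per the quote, pruning overlapping tests leaves 205590 candidates, from
    // which training greedily keeps at most 256 low-correlation bits.

As I read it (an interpretation, not the paper's wording), two tests whose windows coincide heavily produce nearly the same bit, so such near-duplicate tests are pruned before the greedy selection runs.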

2016-08-04 03:06:45 -0600 asked a question Binary Descriptors in Feature Matching

Dear All,

I just wanted to learn about feature detectors, descriptors, and matchers.

After my research I was clear on detectors and descriptors, and came to know that descriptors are used to describe the features that detectors find in an image. The detectors need to be rotation-, orientation-, and scale-invariant. Every descriptor has a corresponding detector, but not vice versa, as not every detected feature can be described by a descriptor. A clear explanation of this topic is given in the opencv.org documentation, and I have read it.

Here is my doubt:

After reading this tutorial on binary descriptors:

https://gilscvblog.com/2013/08/26/tut...

I got some idea of what it is. In short:

1) Take 512 point pairs from a patch in image A.

2) For each pair, compare the intensity at the first point with the intensity at the second point. If the first value is higher, write 1; otherwise write 0.

3) We will now have 512 binary digits composed of 1's and 0's. Let it be

101010101010101010.....10101010

4) Repeat the above 3 steps on a patch from a different image B, giving another 512 binary digits. Let's say it is

010101010101010101.....01010101

5) Now compute the Hamming distance between those two binary strings of images A and B (an XOR operation followed by counting the 1 bits).

6) The XOR of the two strings above is

111111111111111111.....11111111

so the Hamming distance here (the number of 1 bits) is 512, i.e., a complete mismatch.

Here are my questions:

1) What happens after this step?

2) How are the matching lines drawn from one image to the other image? I came to know that we use some kind of distance matching, something like this:

  • We set a threshold value; if the distance is below it then it is a good match, and if the distance is above the threshold then it is a wrong match.

How do we set the threshold without it being an arbitrary value?

I just wanted to learn, in practical terms, what is applied to the image in order to draw the lines between the descriptors (descriptor matchers; how they work).

You can also just refer me to any online links/books where I can read about this.

Seeking help on this topic.
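Regarding question 2, a minimal sketch of the usual flow (an illustration with an assumed threshold; descriptorsA/descriptorsB are hypothetical names for the two images' descriptor Mats):

    // A sketch: brute-force Hamming matching, then keep matches under a cutoff.
    // The Hamming distance is the number of 1 bits in the XOR of two descriptors,
    // so for 512-bit strings it ranges from 0 (identical) to 512.
    // Requires java.util.List/ArrayList and the OpenCV DMatch class.
    DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(descriptorsA, descriptorsB, matches);  // best match in B for each descriptor in A

    double threshold = 60.0;  // assumed cutoff; often set relative to the minimum distance found
    List<DMatch> good = new ArrayList<DMatch>();
    for (DMatch match : matches.toList()) {
        if (match.distance < threshold) {
            good.add(match);  // one line will be drawn for each surviving match
        }
    }
    MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(good);
    // Features2d.drawMatches(imageA, keypointsA, imageB, keypointsB, goodMatches, out)
    // then draws one line between each matched keypoint pair.

On the threshold: it is not random. A common heuristic is to compute the minimum distance over all matches and accept only matches within some multiple of it, which adapts the cutoff to the image pair.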

2016-07-28 04:47:09 -0600 commented question Circular Homography

Dear Berak, I edited my question. Is there any function in OpenCV to draw a line following the shape of the image with ORB?

2016-07-28 02:18:37 -0600 asked a question Circular Homography

As far as I know about homography, it needs to detect 4 corners in order to draw a rectangle-shaped outline around the image.

In my case I am using ORB to detect the images, and I have images which are circular and triangular in shape. I have to draw an outline which follows the circular or triangular shape around the circle and triangle images. Are there any functions with which I can achieve this, or any ideas on how to get circular and triangular shapes?

Based on the image, a line has to be drawn around it according to its shape.

Thanks
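A thought on this (an approach I would try, not a built-in OpenCV feature): the rectangle only appears because exactly four reference corners are pushed through the homography. The homography maps any point, so you can sample many points along the reference shape's outline, transform them all with perspectiveTransform, and draw the result with Imgproc.polylines. A minimal sketch for a circle, with assumed names for the reference size, scene image, and homography:

    // A sketch: outline a circular object by transforming sampled boundary
    // points instead of four rectangle corners.
    int samples = 64;
    Mat circlePoints = new Mat(samples, 1, CvType.CV_32FC2);
    double cx = refCols / 2.0, cy = refRows / 2.0;        // assumed: circle centered in the reference image
    double radius = Math.min(refCols, refRows) / 2.0;
    for (int i = 0; i < samples; i++) {
        double angle = 2.0 * Math.PI * i / samples;
        circlePoints.put(i, 0, cx + radius * Math.cos(angle), cy + radius * Math.sin(angle));
    }

    Mat sceneOutline = new Mat();
    Core.perspectiveTransform(circlePoints, sceneOutline, homography);

    // Convert the float points to integers and draw the closed outline.
    MatOfPoint intOutline = new MatOfPoint();
    sceneOutline.convertTo(intOutline, CvType.CV_32S);
    Imgproc.polylines(sceneImage, Collections.singletonList(intOutline), true,
            new Scalar(0, 255, 0), 2);

For a triangle, sample points along its three edges instead of around a circle.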

2016-07-01 03:39:28 -0600 asked a question Fourier Transform vs Template Matching

Hello,

I was in search of a comparison between the Fourier transform and template matching.

I haven't found the exact drawbacks of the Fourier transform and of template matching in general.

Can any of you provide me with knowledge/links comparing them?

Thanks

2016-06-30 06:51:22 -0600 commented answer ORB 32byte Descriptor

Can I use CV_32F for an ORB?

2016-06-30 03:16:15 -0600 asked a question ORB 32byte Descriptor

Why is an ORB descriptor 32 bytes?

If I have 200 keypoints detected in an image, will I get only one 32-byte (256-bit) descriptor for all 200 keypoints?
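From what I understand, the 32 bytes are per keypoint, not per image: the extractor returns one descriptor row for each keypoint. A quick sketch to see it (using the same 3.x Java wrappers as elsewhere on this page; grayImage is an assumed input Mat):

    // A sketch: inspect the descriptor Mat's shape.
    FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
    DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    Mat descriptors = new Mat();
    detector.detect(grayImage, keypoints);
    extractor.compute(grayImage, keypoints, descriptors);
    // For ~200 keypoints: descriptors.rows() is ~200 (compute may drop a few
    // border keypoints), descriptors.cols() == 32, type CV_8U. Each row is one
    // 32-byte (256-bit) descriptor for one keypoint; you do not get a single
    // 32-byte value shared by all keypoints.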

2016-06-29 04:17:23 -0600 received badge  Student (source)
2016-06-29 04:01:45 -0600 commented question How I detect marker using OpenCV for android

Just my two cents for your project.

I too used the same file you posted here in my project before. When I asked about the code here, I came to know that the corner detection is wrong.

In your code, the part under the comment "// Store the reference image's corner coordinates, in pixels." is the wrong approach. I haven't figured out how to fix it.

2016-06-29 02:37:41 -0600 asked a question ORB Descriptor

Hello,

Can I use the ORB descriptor with other detectors, like AGAST / FREAK, instead of the ORB detector?

Would it yield more efficient results?

2016-06-28 03:15:16 -0600 asked a question ORB with AGAST

I would like to know whether I can use the ORB descriptor with the AGAST detector.

I know that BRISK uses the AGAST detector for detecting keypoints. Can I implement AGAST with ORB?

Does it work more efficiently than the ORB detector/descriptor?

Please enlighten me on this.

Thanks
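For what it's worth, the 3.x Java factory wrappers let you pair any detector with any extractor. I am not sure AGAST is exposed as a FeatureDetector constant in 3.1, but BRISK's detection stage is AGAST-based, so the sketch below is a close approximation; whether it beats plain ORB is an empirical question to measure on your own data:

    // A sketch: AGAST-style keypoints (via the BRISK detector) described with ORB.
    FeatureDetector detector = FeatureDetector.create(FeatureDetector.BRISK);
    DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    Mat descriptors = new Mat();
    detector.detect(grayImage, keypoints);                 // grayImage: assumed CV_8U input
    extractor.compute(grayImage, keypoints, descriptors);  // may drop keypoints it cannot describe
    // The descriptors stay binary, so BRUTEFORCE_HAMMING matching still applies.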

2016-06-14 08:02:43 -0600 received badge  Critic (source)
2016-06-14 07:16:47 -0600 asked a question Technical reason behind ORB

Hello,

I am currently working with the ORB detector/descriptor for recognizing images captured from the camera, and I am a bit of a novice at this stuff.

Here goes my topic:

Even though I capture only 60-70% of the image, it still recognizes which image it is. I would like to know the technical concept behind recognizing the image even when it is not fully captured. Is it because of the scale-space pyramid or the threshold value? Down to what percentage (of keypoints) can it work?

Thanks,

2016-06-07 02:59:47 -0600 received badge  Enthusiast
2016-05-31 08:30:44 -0600 marked best answer Gaussian Filters with ORB

Hello All,

I am a student, currently doing my project in the field of image recognition using feature point detectors and descriptors. I had no prior knowledge of image recognition techniques before starting this project; since then I have researched the available detectors and descriptors and come to know the differences between them. Finally, I opted to work with the ORB detector and descriptor for image recognition (if it doesn't work according to my requirements then I would like to go with BRISK later).

As of now I am at the stage of getting results for image recognition using ORB. At this point, I was thinking of using Gaussian filters in my code so that I can get better results even when the input image is a bit blurred.

My questions:

1) Is it possible to use Gaussian filters with ORB to get much better results for image recognition?

2) When I read the ORB paper I came across the lines below:

    FAST does not produce a measure of cornerness, and we have found that it has large responses along edges. We employ a Harris corner measure [11] to order the FAST keypoints. For a target number N of keypoints, we first set the threshold low enough to get more than N keypoints, then order them according to the Harris measure, and pick the top N points.

    FAST does not produce multi-scale features. We employ a scale pyramid of the image, and produce FAST features (filtered by Harris) at each level in the pyramid.

So ORB applies the Harris corner measure in order to rank the corners in an image; is it worth it for me to use Gaussian filters along with ORB?

3) Does ORB use only the Harris corner measure to detect corners, or anything else?

Please let me know about these points and enlighten me on the questions above.
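On question 1, a Gaussian filter is just a pre-processing step applied to the input image before detection, so nothing stops you from combining it with ORB; whether it helps is something to measure, since ORB's BRIEF-style tests already average intensities over small sub-windows. A minimal sketch, assuming gray is the grayscale input frame and the detector/extractor fields from the ORB setup elsewhere on this page:

    // A sketch: light smoothing before detection to reduce noise sensitivity.
    Mat blurred = new Mat();
    Imgproc.GaussianBlur(gray, blurred, new Size(3, 3), 0);  // 3x3 kernel; sigma derived from kernel size
    mFeatureDetector.detect(blurred, keypoints);
    mDescriptorExtractor.compute(blurred, keypoints, descriptors);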

2016-05-31 08:30:44 -0600 commented answer Gaussian Filters with ORB

Dear Tetragramm, I thought of going with Gaussian filters because the accuracy of my ORB pipeline is just 60%. I have had a look at RANSAC and am thinking of using RANSAC with ORB to remove the outliers, as that should increase the accuracy. Does RANSAC with ORB improve the accuracy, or is it not a good approach? Could you let me know how I can improve my ORB performance, please?

2016-05-31 07:45:42 -0600 commented question Improving ORB/ORB accuracy on mobile with OpenCV

Dear Bertus, any update on how to increase the accuracy of the ORB detector/descriptor? I am currently working on a similar project with ORB, and your post depicts my current situation: the results are just 60% accurate for me, and it fails to detect in most cases. Any tips from your side on how to increase the accuracy of ORB?

2016-05-31 05:06:08 -0600 commented question Description of this code for Grayscale Images

Dear Balaji, I just wanted to know about the conversions in this code: the author has taken grayscale and then converted to RGB and then BGR, something like that. I am not asking about the usage of those functions.

2016-05-31 03:35:40 -0600 edited question Description of this code for Grayscale Images

This is my first project on image recognition, and my question might be a bit silly for the experienced users in this field.

I came to know that grayscale images are used for image recognition, as they are less complex compared with color images, and I have also learned how RGB values relate to grayscale images.

The images which are captured need to be converted into a grayscale representation for image recognition.

I came across this code in a library book, and I just don't understand the underlying concepts behind it.

    // A grayscale version of the scene.
    private final Mat mGraySrc = new Mat();       -> *The author has taken gray scale Image*

    // The color of the outline drawn around the detected image.
    private final Scalar mLineColor = new Scalar(0, 255, 0);

    // Load the reference image from the app's resources.
 // It is loaded in BGR (blue, green, red) format.            -> *What's Happening over here*
    mReferenceImage = Utils.loadResource(context,
            referenceImageResourceID,
            Highgui.CV_LOAD_IMAGE_COLOR);

  // Create grayscale and RGBA versions of the reference image. ->*What's Happening over here*
    final Mat referenceImageGray = new Mat();
    Imgproc.cvtColor(mReferenceImage, referenceImageGray,
            Imgproc.COLOR_BGR2GRAY);
    Imgproc.cvtColor(mReferenceImage, mReferenceImage,
            Imgproc.COLOR_BGR2RGBA);

  // Store the reference image's corner coordinates, in pixels. ->*What's Happening over here*
    mReferenceCorners.put(0, 0,
            new double[] {0.0, 0.0});
    mReferenceCorners.put(1, 0,
            new double[] {referenceImageGray.cols(), 0.0});
    mReferenceCorners.put(2, 0,
            new double[] {referenceImageGray.cols(),
                    referenceImageGray.rows()});
    mReferenceCorners.put(3, 0,
            new double[] {0.0, referenceImageGray.rows()});

    // Detect the reference features and compute their
    // descriptors.
    mFeatureDetector.detect(referenceImageGray,
            mReferenceKeypoints);
    mDescriptorExtractor.compute(referenceImageGray,
            mReferenceKeypoints, mReferenceDescriptors);
}

@Override
public boolean apply(final Mat src, final Mat dst) {

    // Convert the scene to grayscale.       ->  *What's Happening over here*
    Imgproc.cvtColor(src, mGraySrc, Imgproc.COLOR_RGBA2GRAY);

The author has taken the grayscale version of the image after loading it in BGR; I am a bit unclear about this.

Can anyone here please explain the conversions happening in this code?

I want to know the basic functionality of this part of the code.
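My own reading of the conversion chain, condensed into an annotated sketch (the comments are mine, not the book author's):

    // 1) Utils.loadResource with CV_LOAD_IMAGE_COLOR decodes the resource the
    //    way OpenCV's imread does, so the channels arrive in BGR order.
    Mat reference = Utils.loadResource(context, resourceId, Highgui.CV_LOAD_IMAGE_COLOR);

    // 2) Feature detectors only need intensity, so a single-channel grayscale
    //    copy is made for detect()/compute(); this is cheaper than color.
    Mat gray = new Mat();
    Imgproc.cvtColor(reference, gray, Imgproc.COLOR_BGR2GRAY);

    // 3) Android renders Bitmaps as RGBA, so the copy kept for display is
    //    converted BGR -> RGBA. Camera frames already arrive as RGBA, which is
    //    why apply() uses COLOR_RGBA2GRAY for the scene.
    Imgproc.cvtColor(reference, reference, Imgproc.COLOR_BGR2RGBA);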