Ask Your Question

Abhinav's profile - activity

2017-02-28 05:15:46 -0600 received badge  Popular Question (source)
2014-12-09 13:48:02 -0600 marked best answer Where to learn basic principles used in Feature/Template detection?

I've learned the basics of OpenCV and can now write code that does something a bit more complex than a HelloWorld program. In the meanwhile, I've learned how to write feature- and template-based detection programs using the supported list of algorithms (ORB, FAST, SURF etc.). Along the way I've learned how to detect features, how to compute descriptors, how to find matches based on descriptors, and finally how to draw them using drawMatches().

Now I want to replace this drawMatches() with some custom logic, but I don't know what fundamental principles drawMatches() relies on, and I'm also unaware of what kind of data these keypoints and descriptors actually carry.

So I'm looking for some help to understand these things fundamentally. I have absolutely no idea where to start, so please provide pointers/information/URLs from which I can get a head start.
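For what it's worth, the core idea behind drawMatches() is simple: a keypoint carries its pixel position (plus size, angle, response and octave), a descriptor is one row of a Mat per keypoint, and each DMatch pairs a keypoint index in the query image with one in the train image; the function then just draws a line between the two positions on a side-by-side canvas. A minimal stand-alone sketch of that geometry (plain C++ structs standing in for cv::KeyPoint and cv::DMatch; names are illustrative, not OpenCV's):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Stand-ins for cv::KeyPoint and cv::DMatch (illustrative only).
struct Pt { float x, y; };
struct KeyPt { Pt pt; };                  // a keypoint carries its pixel position
struct Match { int queryIdx, trainIdx; }; // indices into the two keypoint vectors

// Compute the endpoints of the line a drawMatches()-style function would draw
// for one match, assuming the two images sit side by side (the train image is
// shifted right by the query image's width).
std::pair<Pt, Pt> matchLine(const std::vector<KeyPt>& queryKps,
                            const std::vector<KeyPt>& trainKps,
                            const Match& m, float queryImageWidth) {
    Pt a = queryKps[m.queryIdx].pt;
    Pt b = trainKps[m.trainIdx].pt;
    b.x += queryImageWidth;               // offset into the side-by-side canvas
    return std::make_pair(a, b);
}
```

Custom drawing logic is then just iterating over the matches and rendering each line however you like.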

2013-03-12 13:47:57 -0600 commented answer How to build and integrate nonfree modules for Android?

thank you, your tip worked! :)

2013-03-11 15:16:19 -0600 asked a question How to build and integrate nonfree modules for Android?

I want to try and test the different detectors and extractors available in OpenCV. In the process, I learned about SURF & SIFT and also found that they are patented and kept in the nonfree module.

Now I want to give those modules a try, so I built the OpenCV source manually; but the resulting files are *.dylib (http://snag.gy/3SHeT.jpg). How can I use (link/load/include) these files in Android C/C++ code? Please correct me if I'm assuming or doing something wrong.

I've googled enough about this but couldn't find any relevant link explaining the process thoroughly, so please help.

It would be awesome if you could also help me know which files one should include to use GFTT_DETECTOR, HARRIS_DETECTOR, SIMPLEBLOB_DETECTOR, GRID_DETECTOR and any others.
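For context: *.dylib files are OS X host libraries, which means that build targeted the Mac itself; an Android app needs .so files cross-compiled with the NDK instead. A sketch of an Android.mk that links against the prebuilt OpenCV4Android SDK (the SDK path, module name and source file are placeholders for your own setup):

```make
# Android.mk fragment -- a sketch; adjust paths for your project.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
OPENCV_INSTALL_MODULES := on
# Placeholder path: point this at your unpacked OpenCV4Android SDK.
include /path/to/OpenCV-android-sdk/sdk/native/jni/OpenCV.mk

LOCAL_MODULE    := mydetector
LOCAL_SRC_FILES := detector.cpp
LOCAL_LDLIBS    += -llog

include $(BUILD_SHARED_LIBRARY)
```

Building with ndk-build then produces libmydetector.so, which Java loads via System.loadLibrary("mydetector"). Two related notes: in OpenCV 2.4 the nonfree algorithms (SIFT/SURF) additionally require including opencv2/nonfree/nonfree.hpp and calling cv::initModule_nonfree() before use, while GFTT, HARRIS, SimpleBlob and the Grid adapters live in the regular features2d module, not in nonfree.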

2013-02-27 11:05:03 -0600 commented answer How to process android image bytes->OpenCV Mat?

And one more thing: do YUV-formatted bytes need a conversion (to BGR or similar) before doing local OpenCV processing?

2013-02-27 10:11:12 -0600 commented question How to process android image bytes->OpenCV Mat?

To put the question plainly: how can I write frames to disk using OpenCV, where I'm getting the frames as bytes from the Android camera? The format of the frames written to disk (jpg/png etc.) is immaterial to me. Please also explain how to call imwrite() on such a Mat.

2013-02-26 06:25:15 -0600 asked a question OpenCV: Issue in running same-code on Android vs OSx

I've written a simple template-matching program using OpenCV, and it produces surprisingly different results on Android and OSx.

First, see what I'm doing:

 Mat templateMat = imread("test3a.png", -1); // load the image unchanged

 // detecting keypoints
 OrbFeatureDetector detector(500);
 std::vector<KeyPoint> templateKeypoints;
 detector.detect(templateMat, templateKeypoints);

 // computing descriptors
 Mat templateDescriptors;
 OrbDescriptorExtractor extractor;
 extractor.compute(templateMat, templateKeypoints, templateDescriptors);

 // matches (the descriptors are matched against themselves as a consistency test)
 BFMatcher matcher(cv::NORM_HAMMING2);
 std::vector<std::vector<DMatch> > matches;
 matcher.knnMatch(templateDescriptors, templateDescriptors, matches, 2);

Now next see what I'm getting:

Running same snippet on Nexus i9250 running Android 4.2.2 and on OSx 10.7(Lion) give these results:

• Mat Objects: Same on both OSes
• Keypoints: On Android | On Mac | Difference
• Descriptors: On Android | On Mac | Difference
• Matches: On Android | On Mac | Difference

NOTE: There is no difference if I sort these files; so what I don't understand is why the results come in a different order. Getting them in a consistent order is a requirement for me, since I need that for further computations. Also, running the same snippet repeatedly on the same platform always produces the same ordering.

The links above contain textual representations of the descriptors, keypoints and other variables.
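If only the ordering differs between platforms, one workaround is to impose your own deterministic order before any further computation, e.g. by sorting keypoints by detector response with coordinates as a tie-break. A sketch with a plain struct standing in for cv::KeyPoint (field names mirror the real ones, but this is illustrative, not OpenCV code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal stand-in for cv::KeyPoint: position plus detector response.
struct KeyPt { float x, y, response; };

// Deterministic order: strongest response first; ties broken by (y, x).
bool byStrength(const KeyPt& a, const KeyPt& b) {
    if (a.response != b.response) return a.response > b.response;
    if (a.y != b.y) return a.y < b.y;
    return a.x < b.x;
}

void sortKeypoints(std::vector<KeyPt>& kps) {
    std::sort(kps.begin(), kps.end(), byStrength);
}
```

One caveat if you try this with real OpenCV types: descriptor rows correspond to keypoint order, so sort the keypoints before calling the extractor (or sort indices and permute both), otherwise keypoints and descriptor rows fall out of sync.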

2013-02-26 02:42:36 -0600 commented question Issue while doing simple Template based feature detection

Sorry, but I can't recall what I fixed... but yes, the issue was that both 'cam' & 'object' were different types of images, so you need to fix their types by first making an empty Mat image and then copying the data into that empty Mat.

This is a hack I discovered, so please don't take it as the righteous way of doing things. Update me if you find anything on your side. :)

2013-02-24 04:20:34 -0600 commented answer Weird issue: Output of running same code on Android & OSx differs

I've tried this again with another image (png) and again got different results. I dug further into the keypoints and found this: keypoints printed on OSx http://bpaste.net/show/6nCXv9WAS8D7zOwNWtb7/, keypoints printed on Android: http://bpaste.net/show/J4ALq14k17EW2nodT8oA/ ; image used: http://snag.gy/G5XGJ.jpg

2013-02-23 21:56:02 -0600 commented answer Weird issue: Output of running same code on Android & OSx differs

made sense! Thank you. :)

2013-02-23 21:55:48 -0600 received badge  Scholar (source)
2013-02-23 13:36:44 -0600 commented answer How to process android image bytes->OpenCV Mat?

My target device is a Samsung Galaxy Nexus i9250, with Android 4.2.2. The camera params are: params.setPreviewSize(1920, 1080); that's it, the rest (if any) are defaults. And pardon me about the last check; I didn't understand how to write the frame size.

2013-02-23 13:27:34 -0600 asked a question Weird issue: Output of running same code on Android & OSx differs

I've written a simple snippet in C++ and ran it on Android as well as on OSx with the same inputs, and the results differ between the two platforms. The only other explanation I can think of is that the way I'm testing the output is wrong, so here is the snippet..

string file = "image.jpg"; // trust me, files on both machines are same
Mat inputFrame = imread(file);

OrbFeatureDetector detector(500);
std::vector<KeyPoint> keypoints;

Mat image;
cvtColor(inputFrame, image, CV_BGR2GRAY); // imread loads BGR, so convert from BGR
detector.detect(image, keypoints);

cout << keypoints.size() << endl;

Problem

cout << keypoints.size() << endl; // this gives different output for the same image on Android and OSx.

I ran this test on 8 different kinds of images and it behaves consistently except for the following image. Please don't comment on the choice of image; it's just a random test.

(image attached)

Running provided test on Android gives 176 keypoints while running on OSx gives 172 keypoints.

2013-02-22 19:29:07 -0600 commented answer How to process android image bytes->OpenCV Mat?

What you said makes sense, but even after making the change you suggested I'm still getting that same old cluttered image on disk. The underlying issue remains unsolved.

2013-02-22 09:47:29 -0600 asked a question How to process android image bytes->OpenCV Mat?

Hey Friend!

I'm capturing video on Android, which gives me frame-by-frame pictures as byte arrays (byte[]) that I send directly to OpenCV in C++. There, in OpenCV, I want to convert those bytes into a Mat.

I did my own research on the problem and found that this is how it can be done:

JNIEXPORT void JNICALL Java_com_example_nativeWrite(JNIEnv *env,
    jclass thiz, jbyteArray iFrameBytes, jint width, jint height) {

    // Get native access to the given Java arrays
    jbyte* iFrameBytesInJBytes  = env->GetByteArrayElements(iFrameBytes, 0);

    // Prepare a cv::Mat that points to the raw data
    Mat emptyIFrame(height, width, CV_8UC1, (unsigned char *)iFrameBytesInJBytes);
    Mat iFrame;
    cvtColor(emptyIFrame, iFrame, CV_YUV420sp2RGB, 4);

    imwrite("/sdcard/Download/img.jpg", iFrame);

    env->ReleaseByteArrayElements(iFrameBytes, iFrameBytesInJBytes, 0);
}

PROBLEM: When I write the image back to disk using imwrite(), it writes a cluttered image full of random lines/colors etc. So I don't see what's going wrong in the conversion; please correct me if I'm misunderstanding something.

EDIT: The image written to disk looks like: (image attached)

Second Edit & Update: I no longer care about or insist on the CV_YUV420sp2RGB format; whatever it is, I'll ignore it. Please share the most basic way to read images (in YUV) into an OpenCV format such that they can be used with imwrite() and FeatureDetector.detect() calls.

My current flow is: running a video stream on Android -> gives a stream of YUV frames -> receive them in C++ -> NEED HELP HERE: read the image/frame somehow -> the frame should be read such that I can write it to disk or use it for further processing like detecting points with FeatureDetector.
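For reference, Android's default camera format (NV21, a YUV420sp layout) stores height rows of Y followed by height/2 rows of interleaved V/U, so the single-channel Mat wrapping the byte buffer needs height * 3 / 2 rows, not height; wrapping only height rows is a common cause of exactly this kind of cluttered output. The per-pixel arithmetic that a CV_YUV420sp2RGB-style conversion then performs is roughly the BT.601 formula below (a sketch of the math only; exact coefficients in OpenCV may differ slightly):

```cpp
#include <algorithm>

// Approximate BT.601 YUV -> RGB for one pixel. Inputs are 0..255, with U and V
// centered at 128; outputs are clamped to 0..255. This mirrors the math behind
// YUV420sp-to-RGB conversions, not OpenCV's exact implementation.
void yuvToRgb(int y, int u, int v, int& r, int& g, int& b) {
    double rf = y + 1.402 * (v - 128);
    double gf = y - 0.344 * (u - 128) - 0.714 * (v - 128);
    double bf = y + 1.772 * (u - 128);
    r = std::min(255, std::max(0, static_cast<int>(rf + 0.5)));
    g = std::min(255, std::max(0, static_cast<int>(gf + 0.5)));
    b = std::min(255, std::max(0, static_cast<int>(bf + 0.5)));
}
```

A neutral chroma pixel (U = V = 128) comes out gray with R = G = B = Y, which is a quick sanity check for any conversion you wire up.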

2013-02-12 06:11:40 -0600 asked a question Issue while doing simple Template based feature detection

Thank you for reading,

In my dev setup, I'm using Android + OpenCV, where the Android layer (Java) calls C++ code via the JNI layer. So all of the code below is written in C++ and executed via JNI.

Mat object = imread("Template.png", CV_LOAD_IMAGE_GRAYSCALE); // load template

OrbFeatureDetector detector( 500 );
std::vector<KeyPoint> kp_object;
detector.detect( object, kp_object );

OrbDescriptorExtractor extractor;
Mat des_object;
extractor.compute( object, kp_object, des_object );

BFMatcher matcher(cv::NORM_HAMMING2);

Mat frame = cam; // I've tested, so please assume cam holds correct value
cvtColor(frame, frame, CV_RGB2GRAY);

std::vector<KeyPoint> kp_image;
Mat des_image;
detector.detect( frame, kp_image );
extractor.compute( frame, kp_image, des_image );

// Few checks to make sure inputs are correct
assertCheck(des_object.type() == des_image.type()); // returns true
assertCheck(frame.cols == object.cols); // returns true

std::vector<std::vector<DMatch> > matches;
matcher.knnMatch(des_object, des_image, matches, 2); // gives following error:

cv::error()(4826): OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) in void cv::batchDistance(cv::InputArray, cv::InputArray, cv::OutputArray, int, cv::OutputArray, int, int, cv::InputArray, int, bool), file /home/reports/ci/slave/opencv/modules/core/src/stat.cpp, line 1803

libc(4826): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 4880 (Thread-6390)

Furthermore, I've tested the same code on an OSx machine and it works fine... so please help me understand what's going wrong on Android. Someone else has asked about this problem before: http://answers.opencv.org/question/4454/matching-causes-assertion-error-features2d/ but got no replies, so I'm reposting it with a positive hope. :)
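The assertion message itself spells out the precondition: both descriptor Mats must have the same type and the same number of columns, and the type must be CV_32F or CV_8U (ORB produces CV_8U descriptors; an empty descriptor Mat from a frame where no keypoints were found will also trip it). A stand-alone sketch of that check, using OpenCV's actual depth codes 0 and 5 for CV_8U and CV_32F (the function and constant names here are illustrative):

```cpp
// OpenCV's depth codes for the two descriptor types batchDistance accepts.
const int CV_8U_CODE  = 0;  // unsigned 8-bit (ORB, BRIEF descriptors)
const int CV_32F_CODE = 5;  // 32-bit float (SIFT, SURF descriptors)

// Mirrors the failed assertion in cv::batchDistance: same type, same
// descriptor width, and a supported element type.
bool matchable(int type1, int type2, int cols1, int cols2) {
    return type1 == type2 && cols1 == cols2 &&
           (type1 == CV_32F_CODE || type1 == CV_8U_CODE);
}
```

Checking des_object and des_image against these three conditions right before knnMatch usually pinpoints which input is off on the failing platform.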

2013-02-07 11:37:06 -0600 commented question How to print and verify that MatDescriptors are non-empty??

Descriptors are not vectors; they are of Mat type. So I guess your pointer won't suffice.

2013-02-07 07:58:10 -0600 asked a question How to print and verify that MatDescriptors are non-empty??

For example, suppose I run the following snippet:

Mat descriptors;
std::cout << descriptors.empty() << std::endl; // writes 0
extractor.compute( object, keypoints, descriptors );
std::cout << descriptors.empty() << std::endl; // writes 0

So, as you can see, even after calling extractor.compute() the descriptors appear empty. How can one figure out that compute() was called and processed successfully?
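One note on the semantics here: empty() is a member function (so it needs parentheses), and it returns true when the Mat holds no data, which means printing 0 after compute() actually indicates the descriptors are NOT empty. A tiny illustration with a stand-in struct (not the real cv::Mat):

```cpp
// Minimal stand-in for the relevant part of cv::Mat.
struct FakeMat {
    int rows;
    FakeMat() : rows(0) {}
    bool empty() const { return rows == 0; }  // true means "no data"
};
```

With the real type, a post-compute check like `if (descriptors.empty()) { /* no descriptors produced */ }` (together with inspecting descriptors.rows, which equals the number of keypoints) is the usual way to verify compute() succeeded.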

2013-02-06 05:37:09 -0600 asked a question Logs in Android.

I'm going through the sample examples and found that they write logs via the LOGD() call. I just want to know where these logs get written and how I can read them. For example: Line#24 at http://bpaste.net/show/XrEgVBWP0H0Rw2IKQAOG/ (example taken from sdk/samples/face-detection/jni/DetectionBasedTracker_jni.cpp)