
ElectronicEng2015's profile - activity

2020-03-17 03:38:20 -0600 received badge  Notable Question (source)
2018-11-06 12:51:51 -0600 received badge  Popular Question (source)
2017-02-14 01:07:39 -0600 received badge  Enthusiast
2017-02-13 05:36:04 -0600 asked a question How to construct a histogram representation suitable for SVM classification

Hello everyone!

I have been working recently with the BRISK algorithm. It generates a 64-byte descriptor for each keypoint. After implementing an algorithm based on K-medoids, I could assign each descriptor to its corresponding VISUAL WORD (centroid or medoid) based on the minimum Hamming distance to a set of randomly generated centroids.
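
For context, a minimal sketch of that assignment step, assuming the descriptors and the medoids are stored as rows of CV_8U Mats (one 64-byte BRISK descriptor per row); the function name is just illustrative:

    #include <opencv2/core.hpp>
    #include <limits>
    #include <vector>

    // Assign each descriptor (row of `descriptors`) to the medoid (row of `medoids`)
    // with the smallest Hamming distance. Both Mats are CV_8U with 64 columns.
    std::vector<int> assignToVisualWords(const cv::Mat& descriptors, const cv::Mat& medoids)
    {
        std::vector<int> assignment(descriptors.rows);
        for (int i = 0; i < descriptors.rows; i++)
        {
            double bestDist = std::numeric_limits<double>::max();
            int bestWord = 0;
            for (int k = 0; k < medoids.rows; k++)
            {
                double d = cv::norm(descriptors.row(i), medoids.row(k), cv::NORM_HAMMING);
                if (d < bestDist) { bestDist = d; bestWord = k; }
            }
            assignment[i] = bestWord;
        }
        return assignment;
    }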

Now that I have identified each descriptor with its corresponding "Visual Word", I want to create a histogram where the bins correspond to those VISUAL WORDS and the value of each bin is the number of descriptors contained in that cluster.

From this point, I need to generate the corresponding histogram to be used by a classification method such as SVM, for object recognition and detection purposes. I am currently using OpenCV 3.1, and I am not sure how to proceed with building the histogram.
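
One minimal sketch of such a histogram, assuming the per-descriptor cluster assignments from the sketch above and a vocabulary of numWords visual words (the function name is illustrative): count the descriptors per word and L1-normalize, so each image becomes a single CV_32F row that cv::ml::SVM can consume.

    #include <opencv2/core.hpp>
    #include <vector>

    // Build a normalized bag-of-visual-words histogram from cluster assignments.
    // assignment[i] is the visual-word index of descriptor i, in [0, numWords).
    cv::Mat buildBowHistogram(const std::vector<int>& assignment, int numWords)
    {
        cv::Mat hist = cv::Mat::zeros(1, numWords, CV_32F);
        for (int word : assignment)
            hist.at<float>(0, word) += 1.0f;
        if (!assignment.empty())
            hist /= static_cast<double>(assignment.size());  // L1-normalize the counts
        return hist;
    }

Stacking one such row per training image then gives the samples Mat that can be passed to cv::ml::SVM::create() and train(samples, cv::ml::ROW_SAMPLE, labels).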

2016-11-08 22:07:40 -0600 commented question How to construct descriptors for MSER and then do the matching?

Hi Steven, I just read that after fitting the ellipses, since they have an orientation angle, that angle can be used to improve the affine invariance of the image that is being tracked. What I don't really understand is how to create the feature descriptor from the fitted ellipse; I know I can extract the bounding box around that ellipse as a patch and then resize it to a fixed-size image. I would appreciate it if you could describe the method for feature descriptor creation. Thanks in advance.

2016-11-08 05:32:12 -0600 asked a question How to construct descriptors for MSER and then do the matching?

According to what I have read, the MSER feature detector identifies homogeneous stable regions in an image. It is possible to fit an ellipse to each of those regions, so that I can identify the ellipse's orientation with respect to the vertical axis of the image, then perform an affine transformation to rotate the image accordingly, extract the patch containing the ellipse, and then create a symmetrical fixed-size blob (30 pixels x 30 pixels, in my case).
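
A rough sketch of that normalization step, assuming a grayscale input image; the 30x30 patch size matches the description above and the helper name is just illustrative:

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Detect MSER regions, fit an ellipse to each, and warp the surrounding
    // patch into an upright, fixed-size (30x30) blob.
    std::vector<cv::Mat> normalizedMserPatches(const cv::Mat& img)
    {
        std::vector<cv::Mat> patches;
        std::vector<std::vector<cv::Point> > regions;
        std::vector<cv::Rect> boxes;
        cv::Ptr<cv::MSER> mser = cv::MSER::create();
        mser->detectRegions(img, regions, boxes);

        for (size_t r = 0; r < regions.size(); r++)
        {
            if (regions[r].size() < 5) continue;              // fitEllipse needs >= 5 points
            cv::RotatedRect ellipse = cv::fitEllipse(regions[r]);

            // Rotate the whole image so the ellipse is upright, then crop and resize.
            cv::Mat R = cv::getRotationMatrix2D(ellipse.center, ellipse.angle, 1.0);
            cv::Mat rotated;
            cv::warpAffine(img, rotated, R, img.size());

            cv::Rect roi(cvRound(ellipse.center.x - ellipse.size.width / 2),
                         cvRound(ellipse.center.y - ellipse.size.height / 2),
                         cvRound(ellipse.size.width), cvRound(ellipse.size.height));
            roi &= cv::Rect(0, 0, rotated.cols, rotated.rows);  // clip to image bounds
            if (roi.area() == 0) continue;

            cv::Mat patch;
            cv::resize(rotated(roi), patch, cv::Size(30, 30));
            patches.push_back(patch);
        }
        return patches;
    }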

From there on, I need to compute the descriptors of each of the blobs (affine normalization) and then run the matching algorithm.

The question is: how do I build the descriptors from the blobs mentioned above? Which matcher is better suited to those descriptors? And finally, is it possible to simply take the centers of the original fitted ellipses as the keypoints?

Thanks in advance for any help you can provide.

Best regards

2016-10-29 02:42:31 -0600 commented question perspectiveTransform error for ORB detector

I was checking the behaviour of the homography matrix in detail, and as you say it eventually becomes empty... so I now apply the perspective transform only when the matrix is not empty. Thank you very much.

2016-10-28 07:17:09 -0600 received badge  Supporter (source)
2016-10-28 01:53:04 -0600 commented question perspectiveTransform error for ORB detector

I checked H as suggested, but it is a 3x3 matrix with valid values

2016-10-28 00:42:12 -0600 asked a question perspectiveTransform error for ORB detector

Hello everyone, I have been trying to draw a bounding area for an object from its corresponding features in a video scene, but when it comes to running the perspectiveTransform function, it throws the following error: OpenCV Error: Assertion failed (scn + 1 == m.cols) in cv::perspectiveTransform

I ran a similar program with the SURF and SIFT detectors and they did not show that error.

Here is the relevant code:

            BFMatcher matcher(NORM_HAMMING, false);    // Hamming norm for binary ORB descriptors
            vector< DMatch > matches;                   // Raw descriptor matches
            std::vector< DMatch > good_matches;         // Matches kept after distance filtering
            matcher.match(descriptor, descriptor_f, matches);

            double minDist;
            double maxDist;

            filterResultsByDistance( &matches, &good_matches, 2, &minDist, &maxDist); // user-defined distance filter
            drawMatches(picture, keypoints, frame, keypoints_f, good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

            /* Localize the object*/

            std::vector<Point2f> obj;
            std::vector<Point2f> scene; 

            for (size_t i = 0; i < good_matches.size(); i++)
            {
                //-- Get the keypoints from the good matches
                obj.push_back(keypoints[good_matches[i].queryIdx].pt);
                scene.push_back(keypoints_f[good_matches[i].trainIdx].pt);
            }

            Mat H = findHomography(obj, scene, RANSAC);

            //-- Get the corners from the image_1 ( the object to be "detected" )
            std::vector<Point2f> obj_corners(4);
            obj_corners[0] = Point2f(0, 0);
            obj_corners[1] = Point2f(picture.cols, 0);
            obj_corners[2] = Point2f(picture.cols, picture.rows);
            obj_corners[3] = Point2f(0, picture.rows);

            std::vector<Point2f> scene_corners(4);

            perspectiveTransform(obj_corners, scene_corners, H); // Error Assertion Failed (scn+1 == m.cols)
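
Not presented as the definitive fix, but a minimal sketch of the guard that resolves this in practice (consistent with the comment above about the matrix eventually becoming empty), reusing the variable names from the snippet: findHomography needs at least 4 point pairs and can return an empty Mat, in which case perspectiveTransform must not be called.

            if (good_matches.size() >= 4)   // findHomography needs at least 4 correspondences
            {
                Mat H = findHomography(obj, scene, RANSAC);
                if (!H.empty() && H.rows == 3 && H.cols == 3)   // RANSAC may fail and return an empty Mat
                {
                    std::vector<Point2f> scene_corners(4);
                    perspectiveTransform(obj_corners, scene_corners, H);
                }
            }
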
2016-09-20 23:06:58 -0600 received badge  Student (source)
2016-08-05 03:47:20 -0600 commented answer How to address a specific centroid obtained from the function connectedComponentsWithStats?

Thanks for your help. I also realized that it is possible to address each centroid point by row, like:

centroids.row(i) as a point

2016-08-05 02:04:22 -0600 received badge  Editor (source)
2016-08-05 01:54:34 -0600 asked a question How to address a specific centroid obtained from the function connectedComponentsWithStats?

Hello everyone, I hope you can help me with a specific issue. I have been using the function connectedComponentsWithStats to extract the centroid of an object in an image. Here is part of my code:

    // labels, stats and centroids are output Mats; i is the index of one connected component
    int num_objects = connectedComponentsWithStats(img, labels, stats, centroids);
    cout << "Object " << i << " with position: " << centroids.at<Point2d>(i) << endl;

When I try to debug this instruction, I get the following error:

OpenCV Error: Assertion failed (elemSize() == (((((DataType<_Tp>::type) & ((512 - 1) << 3)) >> 3) + 1) << ((((sizeof(size_t)/4+1)*16384|0x3a50) >> ((DataType<_Tp>::type) & ((1 << 3) - 1))*2) & 3))) in cv::Mat::at, file c:\opencv310\build\include\opencv2\core\mat.inl.hpp, line 962

I am trying to obtain a point (x, y) in which each element is of type double (CV_64F). When I run the following instruction, I can see the values of all the found centroids. Nevertheless, when I try to retrieve just one specific centroid, the reference centroids.at<Point2d>(i) just throws the failure message above.

           cout <<"positions: " << centroids << endl;

I would really appreciate any suggestion that can help me retrieve a particular centroid's information in the correct format (double).

Thank you in advance
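
For reference, a hedged sketch of one way to read a single centroid in double precision, reusing the variable names from the snippet above (centroids is the CV_64F output of connectedComponentsWithStats, with x in column 0 and y in column 1):

    // Each row of centroids holds one (x, y) pair stored as doubles; label 0 is the background.
    for (int i = 1; i < num_objects; i++)
    {
        Point2d c(centroids.at<double>(i, 0), centroids.at<double>(i, 1));
        cout << "Object " << i << " with position: " << c << endl;
    }

Alternatively, as noted in the comment above, centroids.row(i) can be treated directly as the point.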