Ask Your Question

JoeBroesel's profile - activity

2017-08-07 07:55:41 -0600 commented question how to make make bounding box around object?

This should help

2017-08-04 09:31:37 -0600 asked a question Feature Matching; Detection of multiple Object Instances

Hello, I would like to implement a feature-matching approach for detecting multiple object instances. In the related questions [http://answers.opencv.org/question/17...] and [http://answers.opencv.org/question/45...], mean-shift clustering of the feature points is recommended. On Stack Overflow a Python implementation is given. Is there a C++ equivalent of the approach by V. Gai, especially the following MeanShift part?

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# collect all keypoint coordinates into an (N, 2) array
x = np.array([kp.pt for kp in kp2])

bandwidth = estimate_bandwidth(x, quantile=0.1, n_samples=500)

ms = MeanShift(bandwidth=bandwidth, bin_seeding=True, cluster_all=True)
ms.fit(x)
labels = ms.labels_
cluster_centers = ms.cluster_centers_

labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
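As far as I know, OpenCV has no direct counterpart to sklearn's MeanShift for clustering point sets (cv::meanShift tracks a window over a back-projection image), so one option is to implement the clustering directly and port it to C++. Below is a minimal standalone sketch of flat-kernel mean-shift clustering over 2D points; the `mean_shift` helper and its merge threshold of half the bandwidth are my own choices for illustration, not taken from the linked answer:

```python
import math

def mean_shift(points, bandwidth, max_iter=100, tol=1e-3):
    """Flat-kernel mean shift over 2D points; returns (labels, centers)."""
    modes = []
    for px, py in points:
        x, y = float(px), float(py)
        for _ in range(max_iter):
            # shift the estimate to the mean of all points within `bandwidth`
            sx = sy = cnt = 0.0
            for qx, qy in points:
                if math.hypot(qx - x, qy - y) <= bandwidth:
                    sx += qx
                    sy += qy
                    cnt += 1
            nx, ny = sx / cnt, sy / cnt
            done = math.hypot(nx - x, ny - y) < tol
            x, y = nx, ny
            if done:
                break
        modes.append((x, y))
    # merge modes that converged to (nearly) the same location
    centers, labels = [], []
    for mx, my in modes:
        for i, (cx, cy) in enumerate(centers):
            if math.hypot(mx - cx, my - cy) < bandwidth / 2:
                labels.append(i)
                break
        else:
            labels.append(len(centers))
            centers.append((mx, my))
    return labels, centers
```

Feeding it `[kp.pt for kp in kp2]` gives per-keypoint cluster labels analogous to `ms.labels_` above; translating the two nested loops to C++ over `std::vector<cv::Point2f>` is mechanical.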
2017-06-08 00:39:58 -0600 commented question Comparing Two Contours: Rotation invariant?

So how can this part of the code work if the extractor is invariant?

    for (int i = 0; i < 8; i++)
    {
        RotateContour(contours_Trans, contours_Rotated, Angle, ptCCentre);
        TestContour = simpleContour(contours_Rotated);
        float dis = mysc->computeDistance(QueryContour, TestContour);
        if (Angle >= 360)
            Angle = 0.0;
        else
            Angle += 45.0;

        if (dis < bestDis)
        {
            bestMatch = Angle;
            bestDis = dis;
        }
    }

edit: Because if it is invariant, dis would be independent of the angle - or am I misunderstanding something?
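The suspicion is right in spirit: a brute-force angle search only pays off if the distance measure is angle-dependent. (Note also that the quoted loop increments Angle before storing bestMatch, so the recorded angle is one step off from the angle that actually produced the best distance.) For illustration, here is a standalone sketch of such a search with the bookkeeping in the right order; a simple nearest-neighbour distance stands in for computeDistance, and all names are mine, not OpenCV's:

```python
import math

def rotate(points, angle, center=(0.0, 0.0)):
    """Rotate 2D points by `angle` degrees around `center`."""
    a = math.radians(angle)
    c, s = math.cos(a), math.sin(a)
    cx, cy = center
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]

def distance(a, b):
    """Symmetric mean nearest-neighbour distance between point sets."""
    def one_way(p, q):
        return sum(min(math.hypot(px - qx, py - qy) for qx, qy in q)
                   for px, py in p) / len(p)
    return one_way(a, b) + one_way(b, a)

def best_rotation(query, test, step=45.0):
    """Brute-force the angle, pairing each score with the angle that produced it."""
    best_angle, best_dis = 0.0, float('inf')
    angle = 0.0
    while angle < 360.0:
        dis = distance(query, rotate(test, angle))
        if dis < best_dis:
            best_angle, best_dis = angle, dis  # record BEFORE stepping the angle
        angle += step
    return best_angle, best_dis
```

If `distance` were truly rotation invariant, every iteration would return the same value and the search would be pointless - which is exactly the question raised above.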

2017-06-07 13:45:36 -0600 asked a question Comparing Two Contours: Rotation invariant?

I found one approach for estimating the orientation of two contours here, which rotates one contour and checks the distance to the original. I changed the headers to

#include <opencv2/core.hpp>
#include <opencv2/shape.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv_modules.hpp>
#include <iostream>
#include <fstream>
#include <string.h>

and the main to:

int main(int argc, char* argv[])

It may be a stupid question, but first of all I don't know why the transformation of the contours should improve the result of computeDistance. Is cv::ShapeContextDistanceExtractor not invariant to rotation and translation, because it does an internal fit?

If that were the case, my results would be coherent, because I always get 0 as distance (but unfortunately no image either). Also, the results from another program, where I match rotated contours with cv::ShapeContextDistanceExtractor as well as the Hausdorff metric, do not seem to be wrong (small distances, but not exactly 0).

2017-06-07 13:16:35 -0600 commented question Problem with estimateRigidTransform: mat dst is empty

Thanks for your code! Really impressive performance, but it will take a while until I understand everything in detail!

2017-06-07 03:26:03 -0600 commented question How to estimate transformation after hausdorff / shape context matching

I found one approach here. But I'm not sure why the transformation of the contours should improve the result of computeDistance. Is cv::ShapeContextDistanceExtractor not invariant to rotation and translation? Berak mentioned the internal fit.

2017-06-07 00:10:12 -0600 received badge  Enthusiast
2017-06-06 13:25:51 -0600 commented question Problem with estimateRigidTransform: mat dst is empty

Thank you, I will have a look. As a last question: is the order of the points relevant for findHomography?

2017-06-06 07:53:53 -0600 commented question Problem with estimateRigidTransform: mat dst is empty

Thanks again! If I edit your dst Mat as described here, I am able to map the points in your example to the right correspondence with

    cv::perspectiveTransform(vertex1, result, H);

which means I am still not 100% sure whether the order in the vector matters - it seems to be either unstable or luck. Anyhow, if I push more than 5 points into vertex1, dst becomes empty again, even if the bool fullAffine is set to true. I think the function is limited to a few points. If there is no other way to estimate a rigid transform in OpenCV with many unsorted points, can I use findHomography (perhaps as overkill)?

2017-06-06 04:22:39 -0600 commented question Problem with estimateRigidTransform: mat dst is empty

Thank you for your response. If I understand you correctly, the order of the points in the vector is relevant?

I noticed that if I change the line to

    cv::Mat R = cv::estimateRigidTransform(templatePoints2f,templatePoints2f,false);

or

    cv::Mat R = cv::estimateRigidTransform(queryPoints2f,queryPoints2f,false);

I get an output of the form: R:[1, -6.938148514913645e-16, 7.614916766799279e-14; 6.938148514913645e-16, 1, -1.285599231237722e-13] (numerically the identity, as expected when mapping a point set onto itself)

instead of this: R:[]

Nevertheless, the points are random_shuffled, aren't they? Would it help if I upload the pictures used (rotated apple shapes from the MPEG dataset)?

2017-06-06 03:26:44 -0600 asked a question Problem with estimateRigidTransform: mat dst is empty

Hello everyone, I'm new to OpenCV, so it could be that this is just a misunderstanding of the estimateRigidTransform function:

In the following code I find the contours of two rigidly translated objects in img1 and img2, but estimateRigidTransform does not seem to work the way I thought it would. It would be nice if someone had an idea why the Mat dst stays empty. Thank you!

#include <iostream>
#include <string>
#include <opencv2/imgproc.hpp> // for cv::findContours
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>
//Function from https://github.com/opencv/opencv/blob/master/samples/cpp/shape_example.cpp to extract Contours
static std::vector<cv::Point> sampleContour( const cv::Mat& image, int n=300 )
{
   std::vector<std::vector<cv::Point>> contours;
   std::vector<cv::Point> all_points;
   cv::findContours(image, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
   for (size_t i=0; i <contours.size(); i++)
   {
       for (size_t j=0; j<contours[i].size(); j++)
       {
          all_points.push_back(contours[i][j]);
       }
   }

   // In case actual number of points is less than n
   int dummy=0;
   for (int add=(int)all_points.size(); add<n; add++)
   {
       all_points.push_back(all_points[dummy++]);
   }
   // Uniformly sampling
   std::random_shuffle(all_points.begin(), all_points.end());
   std::vector<cv::Point> sampled;
   for (int i=0; i<n; i++)
   {
       sampled.push_back(all_points[i]);
   }
   return sampled;
}

int main(){
// image reading
cv::Mat templateImage = cv::imread("1.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat queryImage = cv::imread("2.jpg", cv::IMREAD_GRAYSCALE);

// contour extraction
std::vector<cv::Point> queryPoints, templatePoints;
queryPoints = sampleContour(queryImage);
templatePoints = sampleContour(templateImage);

// cast to vector<point2f> https://stackoverflow.com/questions/7386210/convert-opencv-2-vectorpoint2i-to-vectorpoint2f
std::vector<cv::Point2f> queryPoints2f, templatePoints2f;
cv::Mat(queryPoints).convertTo(queryPoints2f, cv::Mat(queryPoints2f).type());
cv::Mat(templatePoints).convertTo(templatePoints2f, cv::Mat(templatePoints2f).type());

cv::Mat R = cv::estimateRigidTransform(templatePoints2f,queryPoints2f,false);
std::cout <<"R:"  << R << std::endl; // R -> empty

/*
 * Solution from https://stackoverflow.com/questions/23373077/using-estimaterigidtransform-instead-of-findhomography
 * crashes the program here, because R is empty and R.type() fails
 *

cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

std::vector<cv::Point2f> result;
cv::perspectiveTransform(templatePoints2f,result,H);

for(unsigned int i=0; i<result.size(); ++i)
    std::cout << result[i] << std::endl;
*/
 return 0;
}
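For what it's worth: estimateRigidTransform treats its two inputs as point-to-point correspondences in matching order, so after random_shuffle the pairs no longer correspond, the internal RANSAC finds no consensus, and an empty Mat comes back. What the function solves for correctly ordered pairs (with fullAffine=false) is a least-squares similarity fit; the rotation+translation core of that fit can be sketched in a few lines (plain Python, scale ignored for simplicity; `rigid_fit` is my own helper, not an OpenCV API):

```python
import math

def rigid_fit(src, dst):
    """Least-squares rotation + translation mapping src[i] -> dst[i] (2D).

    Returns (theta, (tx, ty)) such that dst ~ R(theta) * src + t.
    """
    n = len(src)
    # centroids of both point sets
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # cross-covariance terms of the centered point sets
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    # optimal rotation angle in closed form
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the rotated source centroid onto the target centroid
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

The key point: this only works because `src[i]` and `dst[i]` are assumed to correspond. With unordered point sets, something like shape matching or ICP has to establish the correspondences first.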
2017-06-04 12:52:22 -0600 commented question How to estimate transformation after hausdorff / shape context matching

Thank you berak, that was one piece of information I was looking for. Are there any better alternatives to the shape module that do matching and detection without an additional step, and if not, is estimateRigidTransform the right function?

2017-06-02 11:10:18 -0600 commented question How to estimate transformation after hausdorff / shape context matching

Thank you for your answer. Does this mean that with Hausdorff you can only do classification but not detection, and is detection possible with the ShapeContextDistanceExtractor?

2017-06-02 09:07:32 -0600 asked a question How to estimate transformation after hausdorff / shape context matching

A similar question is asked here, if I understood it correctly:

How can you estimate the location and orientation of a rigidly/affinely transformed image after you have computed the distance and know that the compared images are similar? I tried estimateRigidTransform after casting the vector<Point> to vector<Point2f>, but the resulting Mat stays empty.

Thank you for your help; the shape context demo can be found here

2017-06-02 07:45:32 -0600 commented question Fast template matching Image Pyramids

Thank you for your help!

2017-06-02 07:10:16 -0600 received badge  Editor (source)
2017-06-02 04:06:05 -0600 asked a question Fast template matching Image Pyramids

Hello everyone, for fast template matching with varying sizes and orientations I often found references to this link, which unfortunately is broken. Does someone know whether this example still exists? Please forgive me if the question is too specific, and thanks for your help.
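Even with the linked sample gone, the core idea of pyramid-accelerated template matching is simple: downsample image and template, run a full search only at the coarse level, then refine the best location in a small neighborhood at full resolution. A toy single-level sketch (plain Python with nested lists and an SSD score; all helper names are mine, not from the missing sample):

```python
def downsample(img):
    """Halve a 2D grid by averaging 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def ssd(img, tpl, ox, oy):
    """Sum of squared differences of tpl placed at (ox, oy) in img."""
    return sum((img[oy+y][ox+x] - tpl[y][x]) ** 2
               for y in range(len(tpl)) for x in range(len(tpl[0])))

def match(img, tpl, positions=None):
    """Best (x, y) among candidate positions (default: full search)."""
    if positions is None:
        positions = [(x, y)
                     for y in range(len(img) - len(tpl) + 1)
                     for x in range(len(img[0]) - len(tpl[0]) + 1)]
    return min(positions, key=lambda p: ssd(img, tpl, p[0], p[1]))

def pyramid_match(img, tpl):
    """Coarse-to-fine: full search at half resolution, then refine
    in a 3x3 neighborhood around the upscaled coarse hit."""
    cx, cy = match(downsample(img), downsample(tpl))
    cand = [(2*cx + dx, 2*cy + dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    cand = [(x, y) for x, y in cand
            if 0 <= x <= len(img[0]) - len(tpl[0])
            and 0 <= y <= len(img) - len(tpl)]
    return match(img, tpl, cand)
```

A real implementation would stack several pyramid levels (OpenCV's cv::pyrDown plus cv::matchTemplate per level) and, for varying orientation, repeat the coarse search over a sampled set of template rotations; this sketch only shows the coarse-to-fine speedup itself.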

2017-06-02 03:30:17 -0600 answered a question how to overlay shapes in shape context/hausdorff matching.

Not the newest question, but I also couldn't find an answer to this. How can you estimate the location and orientation of a rigidly transformed image after you have computed the distance and know that the compared images are similar? I tried estimateRigidTransform, but the resulting Mat stays empty.