Ask Your Question

Christoph Pacher's profile - activity

2015-03-21 14:26:45 -0600 commented answer FREAK experience requiered

Just to give you a point of reference: with FAST/BRISK on an i5 Haswell processor I find about 150 keypoints per image and match them to the previous image in about 2 ms. This is highly dependent on the image and the number of keypoints you find/match. Most of the other detector/descriptor combinations took longer in my tests, up to 25 ms.

2015-03-17 12:41:45 -0600 received badge  Teacher (source)
2015-03-17 11:55:00 -0600 commented answer FREAK experience requiered

By the way, I had bad matching results when using FREAK as a descriptor.

2015-03-17 11:32:16 -0600 commented answer convexityDefects computes wrong result

I am using a checkout of the master branch from 4 months ago. Thanks, I will file a report.

2015-03-16 18:32:49 -0600 commented answer convexityDefects computes wrong result

Please read my comments. I already tried inverting the hull ordering, and with other contours I got results where a wrong defect has a larger depth than the defect I am looking for. And even for the example I posted, a defect depth of 2715 for a defect that is located directly ON the hull is a completely wrong result.

2015-03-16 13:12:15 -0600 received badge  Editor (source)
2015-03-16 13:11:32 -0600 answered a question FREAK experience requiered

FAST turned out to be the fastest on my Haswell machine on a 640x480 RGB image from the Kinect, but I think this could depend on the scene (mine is a top-down scene with just people in it). Valid matches are again a whole other story. I am afraid you will have to test them all.

Are you planning on finding the face first and then tracking the features you find in that area? Otherwise I am not too sure you will be able to match faces from a set of features found in a set of face images.

2015-03-16 13:01:07 -0600 received badge  Critic (source)
2015-03-16 13:00:24 -0600 commented answer convexityDefects computes wrong result

I can't find a hint about the correct ordering of the hull for convexityDefects() in the documentation. When reversing the order I get the following result:

h: [9, 44, 45, 49, 57, 59, 61, 63, 0, 8] 
d: [[8, 9, 1, 2715], [9, 44, 30, 4397], [45, 49, 46, 271], [49, 57, 52, 801], [57, 59, 58, 229], [59, 61, 60, 154], [61, 63, 62, 201], [0, 8, 5, 765]]

d[0] is again a result with a wrong depth: 2715, even though the point is located on the hull. It is just less obvious since the value is smaller than the defect I am looking for. I already tried reversing the hull order. As far as I remember my tests, the results change depending on the form of the contour, and it is possible to get a wrong result with a bigger depth than the one I am looking for.

2015-03-15 14:30:17 -0600 commented question convexityDefects computes wrong result

convexityDefects() already computes the distance of the defect from the hull; it is at index 3 of the result Vec4i.

2015-03-14 06:22:18 -0600 received badge  Student (source)
2015-03-13 13:40:12 -0600 asked a question convexityDefects computes wrong result

Hi,

when running the following code snippet:

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

vector<Point> c = { { 356, 339 }, { 355, 340 }, { 353, 340 }, { 350, 343 }, { 349, 343 }, { 347, 345 }, { 344, 345 }, { 343, 346 }, { 334, 346 }, { 330, 350 }, { 331, 350 }, { 332, 351 }, { 334, 351 }, { 335, 352 }, { 336, 352 }, { 338, 354 }, { 339, 354 }, { 340, 355 }, { 340, 354 }, { 341, 353 }, { 342, 353 }, { 343, 352 }, { 344, 352 }, { 345, 351 }, { 349, 351 }, { 350, 350 }, { 352, 350 }, { 353, 349 }, { 354, 349 }, { 355, 350 }, { 356, 350 }, { 357, 351 }, { 357, 353 }, { 354, 356 }, { 354, 357 }, { 352, 359 }, { 353, 360 }, { 350, 363 }, { 350, 364 }, { 351, 365 }, { 351, 366 }, { 353, 368 }, { 353, 369 }, { 355, 371 }, { 355, 372 }, { 356, 371 }, { 356, 369 }, { 358, 367 }, { 358, 366 }, { 361, 363 }, { 361, 357 }, { 362, 356 }, { 362, 354 }, { 364, 352 }, { 364, 351 }, { 367, 348 }, { 368, 348 }, { 369, 347 }, { 367, 345 }, { 367, 343 }, { 366, 343 }, { 363, 340 }, { 359, 340 }, { 358, 339 } };

Mat img(480, 640, CV_8UC3, Scalar(0, 0, 0));
polylines( img, c, true, Scalar( 255, 0, 0 ), 1 );

vector<int> h;
convexHull( c, h );
vector<Point> hp;
for( const auto &e : h )
{
    hp.push_back( c[ e ] );
}
cv::polylines( img, hp, true, Scalar( 0, 255, 0 ), 1 );

vector<Vec4i> d;
convexityDefects( c, h, d );

for( const auto &e : d )
{
    circle( img, c[ e[ 2 ] ], 1, Scalar( 0, 0, 255 ), 1 );
}

I am getting the following output (screenshot omitted):

h: [57, 49, 45, 44, 9, 8, 0, 63, 61, 59]

d: [[57, 59, 9, 9273], [59, 61, 60, 154], [61, 63, 62, 201], [0, 8, 5, 765], [9, 44, 30, 4397], [45, 49, 46, 271], [49, 57, 52, 801]]

I am looking for the defect that is farthest away from the hull, which is [9, 44, 30, 4397] and is located at the center of the image.

The problem is, there is a wrong result: [57, 59, 9, 9273], which is even further away. In the image this wrong result is the leftmost cross, which is located on the hull and should have a depth of 0, not 9273.

Am I doing something wrong or is this a bug?

Thanks Chris

Edit: I created this bug report over there: click

2014-10-02 10:06:48 -0600 commented question OpenCV 3.0 tracking API multiple objects

@StevenPuttemans I looked at the documentation as well as the source and the provided sample and still had questions about the code, since I did not see any comments about how to handle multiple targets. I have to decide between multiple approaches to tackle my problem, and testing every one of them is too time consuming, especially for code that is only in the development trunk. I was hoping to reach people who have used or worked on the code in question, so they could write their opinion in one sentence as a guide for me and for others with the same question; an effort that is a lot less than mine of going through the source and example. Closing this thread is an overreaction. Seriously.

2014-10-01 13:18:56 -0600 asked a question OpenCV 3.0 tracking API multiple objects

Hi, please excuse my laziness for not trying this out myself, but I am a little tied up and don't have the time to get a test project up and running. The documentation of the API does not mention how to handle multiple objects. Is it possible to do multiple init calls on one tracker instance, or should there be one tracker for every object? After looking briefly at the source, I think it is the latter.

2014-09-17 14:01:04 -0600 commented question static linking opencv_highgui300.lib, cvConvertImage unresolved

Thanks @berak, I just wanted to post this as an answer too, after I realized that the function is now in imgcodecs.

2014-09-17 13:07:30 -0600 asked a question static linking opencv_highgui300.lib, cvConvertImage unresolved

Hi,

I just upgraded my local copy of the OpenCV repository from the July trunk to the current one. My application now throws a link error which I cannot resolve:

Error 1 error LNK2001: unresolved external symbol cvConvertImage opencv_highgui300.lib(window_w32.obj)

Any ideas? Thanks Christoph
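Going by the comment above (2014-09-17 14:01:04) that cvConvertImage now lives in imgcodecs, the likely fix is to add the new module's library to the linker inputs next to highgui; the exact file name below is an assumption based on the 3.0.0-dev naming scheme:

```
opencv_highgui300.lib
opencv_imgcodecs300.lib
```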

2014-09-17 09:30:26 -0600 commented answer Create_samples, compiling with static libraries, linker error in highgui module, createToolBarEx

I have the same problem with the OpenCV trunk from 140917. Unfortunately I am still missing something even after linking to comctl32.lib, according to the linker error: opencv_highgui300.lib(window_w32.obj) : error LNK2001: unresolved external symbol cvConvertImage. Do you perhaps know what is missing?