
flohei's profile - activity

2020-10-18 13:43:52 -0600 received badge  Notable Question (source)
2018-07-25 15:42:00 -0600 received badge  Notable Question (source)
2018-01-10 15:08:18 -0600 received badge  Nice Question (source)
2016-08-09 08:09:39 -0600 received badge  Popular Question (source)
2015-11-18 02:48:43 -0600 received badge  Popular Question (source)
2014-04-08 23:33:59 -0600 received badge  Famous Question (source)
2013-10-31 15:23:25 -0600 received badge  Notable Question (source)
2013-08-05 03:21:04 -0600 received badge  Popular Question (source)
2012-11-26 06:05:05 -0600 commented answer getAffineTransform, getPerspectiveTransform or findHomography?

Thanks for pointing that out. I got it working using getPerspectiveTransform and warpPerspective. That works just great.
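
For reference, a minimal sketch of that getPerspectiveTransform + warpPerspective pipeline, assuming the four corners of the sheet have already been detected; the file names and corner coordinates below are made up:

#include <opencv2/opencv.hpp>

int main() {
    // "photo.jpg" is a placeholder for the photo that contains the sheet of paper.
    cv::Mat imageMat = cv::imread("photo.jpg");

    // Hypothetical corner coordinates; in practice these come from the corner-detection step.
    std::vector<cv::Point2f> sourcePoints;
    sourcePoints.push_back(cv::Point2f(57, 42));    // top-left
    sourcePoints.push_back(cv::Point2f(540, 38));   // top-right
    sourcePoints.push_back(cv::Point2f(563, 710));  // bottom-right
    sourcePoints.push_back(cv::Point2f(34, 718));   // bottom-left

    // Destination corners of the new Mat, in the same order as the source corners.
    const int width = 480, height = 640;
    std::vector<cv::Point2f> destinationPoints;
    destinationPoints.push_back(cv::Point2f(0, 0));
    destinationPoints.push_back(cv::Point2f(width, 0));
    destinationPoints.push_back(cv::Point2f(width, height));
    destinationPoints.push_back(cv::Point2f(0, height));

    // getPerspectiveTransform expects exactly four point pairs.
    cv::Mat transform = cv::getPerspectiveTransform(sourcePoints, destinationPoints);

    cv::Mat scanned;
    cv::warpPerspective(imageMat, scanned, transform, cv::Size(width, height));
    cv::imwrite("scanned.jpg", scanned);
    return 0;
}

The point order must match between the two vectors (e.g. both going top-left, top-right, bottom-right, bottom-left); a mismatch produces a mirrored or distorted result.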

2012-11-25 11:46:20 -0600 asked a question getAffineTransform, getPerspectiveTransform or findHomography?

Following scenario: I've taken a photo for further analysis. The photo contains a sheet of paper. First of all I'm trying to detect the corners of the sheet in the image. Once I've got them, I want to stretch/transform the image so that those corners map onto the corners of a new Mat (as if I had scanned the document).

Reading the documentation on the above-mentioned functions, I'm not quite sure which one is right for my needs. getAffineTransform seems to take only three point pairs (which works quite well, but leaves the lower-right corner untouched).

getPerspectiveTransform takes four point pairs and findHomography even more, right? So I guess one of those is what I should go for. So far I haven't managed to get it working, though. I declare vector<Point2f> sourcePoints, destinationPoints; and fill them with the detected corners and my calculated destination points (which are basically [width, 0], [0, 0], [0, height] and [width, height] of the new Mat). After filling the two vectors, I create the transformation matrix using either getPerspectiveTransform or findHomography and finally pass it to warpPerspective. The last step is the one that crashes my application with

OpenCV Error: Assertion failed (dims == 2 && (size[0] == 1 || size[1] == 1 || size[0]*size[1] == 0)) in create, file /Users/Aziz/Documents/Projects/opencv_sources/trunk/modules/core/src/matrix.cpp, line 1310 libc++abi.dylib: terminate called throwing an exception.

Since I'm not sure whether this is even the right approach, I'd love to hear your opinion before I try to fix the error.

Thanks a lot!
–f

2012-11-22 18:24:55 -0600 commented answer How to create a binary image mat?

I'm using this with the Cocoa framework and I'm getting the image mat (or the grayscale mat, I tried both) from a UIImage instance. Besides that, I'm applying the threshold just like you do. When I print every single pixel's value to the console (as uchars), I'm not only getting 0s and 255s. Am I right that a binary image should contain only two different values? I'm starting to question my understanding of the matter… ;)

2012-11-21 17:53:48 -0600 commented answer How to create a binary image mat?

I have values like 2, 91, 116, 55, 183, 119, 175, 210, … in there. By the time I inspect the binary mat I no longer have the grayscale mat around, but I think I could keep it and compare the two side by side once it's not the middle of the night here… ;) Thanks for your assistance anyway!

2012-11-21 16:37:51 -0600 commented answer How to create a binary image mat?

Unfortunately, that does not seem to change anything.

2012-11-21 15:19:22 -0600 asked a question How to create a binary image mat?

What's the right way to create a binary image?

I'm trying to convert an ordinary image mat to grayscale and apply a threshold afterwards like this:

// first convert the image to grayscale
cvtColor(imageMat, grayscaleMat, CV_RGB2GRAY);

// then adjust the threshold to actually make it binary
threshold(grayscaleMat, binaryMat, 100, 255, CV_THRESH_BINARY);

Shouldn't that create a mat that contains only 0s and 255s (as uchars)? At least that's how I understand it. Unfortunately, it's not only 0s and 255s.

What am I doing wrong?

Thanks a lot!
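
For comparison, a self-contained sketch of the same conversion with a quick sanity check appended; the input path is a placeholder:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // "input.jpg" is a placeholder path for the source photo.
    cv::Mat imageMat = cv::imread("input.jpg");

    // Convert to grayscale, then threshold to get a binary mat.
    cv::Mat grayscaleMat, binaryMat;
    cv::cvtColor(imageMat, grayscaleMat, CV_RGB2GRAY);
    cv::threshold(grayscaleMat, binaryMat, 100, 255, CV_THRESH_BINARY);

    // Sanity check: with THRESH_BINARY the result should contain only 0 and 255.
    double minVal, maxVal;
    cv::minMaxLoc(binaryMat, &minVal, &maxVal);
    int neither = (int)binaryMat.total()
                  - cv::countNonZero(binaryMat == 0)
                  - cv::countNonZero(binaryMat == 255);
    std::cout << "min=" << minVal << " max=" << maxVal
              << ", pixels that are neither 0 nor 255: " << neither << std::endl;
    return 0;
}

If the check reports values other than 0 and 255, the mat being printed is most likely not the single-channel mat produced by the threshold call.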

2012-11-19 05:17:34 -0600 received badge  Student (source)
2012-11-19 04:23:08 -0600 received badge  Supporter (source)
2012-11-19 04:23:08 -0600 received badge  Scholar (source)
2012-11-19 04:22:58 -0600 commented answer Histogram: Count black pixel per column

Thanks, Michael. The second example crashes with OpenCV Error: Bad argument on the histogram += … line, but the first one works fine for me. Thank you!

2012-11-16 10:24:28 -0600 asked a question Histogram: Count black pixel per column

Hi guys,

I'm just getting started with OpenCV and I was wondering if it's possible, using calcHist, to create a histogram that counts all the black pixels per column in a binary image. By column I mean every single pixel column of the image, so I would use image.width bins. Every bin should then have a value like 60, 80, 1000, whatever, depending on how many of the pixels in that column are black. Is there a way to do this with an OpenCV function, or do I have to implement it myself?

Thanks a lot!
–f
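
A minimal sketch of one way to get the per-column counts without calcHist, using cv::reduce; the input path is a placeholder and the image is assumed to already be binary (0 = black, 255 = white):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Placeholder path; load the binary image as a single-channel mat.
    cv::Mat binary = cv::imread("binary.png", 0);

    // Invert so that black pixels become 255 and white pixels become 0.
    cv::Mat inverted;
    cv::bitwise_not(binary, inverted);

    // Sum every column into a single row (32-bit to avoid overflow),
    // then divide by 255 to turn the sums into pixel counts.
    cv::Mat columnSums;
    cv::reduce(inverted, columnSums, 0, CV_REDUCE_SUM, CV_32S);
    columnSums = columnSums / 255;

    // columnSums.at<int>(0, x) now holds the number of black pixels in column x.
    for (int x = 0; x < columnSums.cols; ++x)
        std::cout << "column " << x << ": " << columnSums.at<int>(0, x) << std::endl;

    return 0;
}

calcHist is built for intensity histograms (how many pixels have a given value), so a per-column count is more naturally expressed as a column-wise reduction like this.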