
efanucar's profile - activity

2020-03-28 07:19:35 -0600 received badge  Notable Question (source)
2017-09-10 16:45:23 -0600 received badge  Nice Answer (source)
2017-05-01 08:00:47 -0600 received badge  Popular Question (source)
2013-09-09 06:37:06 -0600 answered a question findchessboardCorners result

I don't know if I understand your question correctly... but: the size "board" you set as the parameter for findChessboardCorners does not correspond to the squares but to the inner corner points. A board of N x M squares has (N-1) x (M-1) inner corners, so for a pattern of 5 x 5 squares you need to set the size to (4, 4) - each row of 5 squares yields 4 inner corner points.
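As a minimal sketch of the call (assuming OpenCV's C++ API, a 5 x 5 board and an already-loaded grayscale image):

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

// img: grayscale image showing a chessboard of 5 x 5 squares
bool detectCorners(const Mat& img, std::vector<Point2f>& corners)
{
    Size boardSize(4, 4);   // 5 x 5 squares => 4 x 4 inner corners

    bool found = findChessboardCorners(img, boardSize, corners);
    if (found)   // refine the corners to sub-pixel accuracy
        cornerSubPix(img, corners, Size(11, 11), Size(-1, -1),
                     TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    return found;
}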

2013-09-07 05:33:33 -0600 commented question rules for generating calibration pattern

you're right, I updated the question - but my main intention is to understand the rules, not to solve the error in the python script

2013-09-07 04:23:14 -0600 commented question rules for generating calibration pattern

"It stops without an error message"

2013-09-07 03:54:30 -0600 asked a question rules for generating calibration pattern

Hi all!

We need a calibration pattern with outer dimensions of around 600 mm x 600 mm.

I tried to use the python script which can be found in the doc folder of the OpenCV distribution, but it does not generate an SVG of this size. It stops without an error message and does not write an SVG file.

So I want to create the pattern myself and want to understand the "rules" (and not only because the python script does not work for my size):

  • is it better to use different numbers of rows and columns?
  • how many circles do I need for a good calibration pattern?
  • which radius should I use in relation to the outer dimensions?
  • which spacing is needed between the circles?
  • which spacing is needed between the outer circles and the border of the whole pattern?

Because I cannot print a pattern of this size myself and have to pay for the printing, I need to know the rules and cannot simply try many different variants.

Here is a sample pattern generated by the python script (I guess you could resize it to the required final size); I only added the bounding rectangle manually. The command line:

./gen_pattern.py -c 6 -r 7 -o out.svg -T circles -u mm -s 50

So I use a square size of 50 mm, and the script automatically sets the radius of the circles to 1/5 of this size => 10 mm. This is hard-coded, and there is a comment in the script: "#radius is a 5th of the spacing TODO parameterize"

Obviously this is meant to be adjustable. So this is one of my questions: what is the right balance between spacing and radius?

[image: circle pattern generated by the python script]
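As a fallback, I could generate such a grid by hand. Here is a rough, hypothetical sketch (plain C++, writing the SVG directly) with the radius as a free parameter instead of the hard-coded spacing/5 - the values in the usage comment are made up:

#include <fstream>

// writes a cols x rows symmetric circle grid as an SVG file;
// spacing, radius and margin are in millimeters
void writeCircleGrid(const char* path, int cols, int rows,
                     double spacing, double radius, double margin)
{
    double width  = (cols - 1) * spacing + 2 * margin;
    double height = (rows - 1) * spacing + 2 * margin;

    std::ofstream svg(path);
    svg << "<svg xmlns=\"http://www.w3.org/2000/svg\" "
        << "width=\"" << width << "mm\" height=\"" << height << "mm\" "
        << "viewBox=\"0 0 " << width << " " << height << "\">\n";
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            svg << "<circle cx=\"" << margin + c * spacing
                << "\" cy=\"" << margin + r * spacing
                << "\" r=\"" << radius << "\" fill=\"black\"/>\n";
    svg << "</svg>\n";
}

// e.g. writeCircleGrid("pattern.svg", 6, 7, 100, 20, 50);  // 600 x 700 mm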

Thanks!

2013-09-07 03:20:40 -0600 commented question type and size of calibration pattern for most accurate results

And did they mention which relative size the pattern should have in the final image? I guess if it is too small, the calibration is only valid for a small part of the image and not for the whole covered area?

2013-09-07 03:18:45 -0600 commented question building photoshop-style black-white-conversion

I already loop through all the pixels and apply a custom grayscale conversion (http://answers.opencv.org/question/12947/custom-grayscale-conversion/), but I want to understand how the weights for R, G and B can be calculated from the settings in Photoshop. How do they correspond?

2013-08-25 09:59:48 -0600 asked a question building photoshop-style black-white-conversion

Hi all!

I'm trying to separate a coloured object from my background. Before starting with OpenCV, I'm using a test image in Photoshop to check which color channel or which combination of color channels works best.

There is a filter "black-and-white" (which is in fact a grayscale converter) that offers 6 trackbars, one for each color:
red, yellow, green, cyan, blue, magenta
each ranging from -200 to +300

I then try to find the combination of these 6 settings that separates my object as well as possible (for example, the background becomes black and the object white).
Once I have found the perfect combination, what would an algorithm in OpenCV look like that rebuilds this grayscale conversion using the values found in Photoshop?
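For what it's worth, one unverified reconstruction I have seen sorts the three channel values per pixel and scales the two differences above the minimum with the sliders of the pixel's primary and secondary hue, each divided by 100. A sketch of that idea (not Photoshop's actual algorithm, just an approximation to experiment with):

#include <opencv2/opencv.hpp>
#include <algorithm>

using namespace cv;

// slider values in Photoshop's -200..+300 range
struct BWSliders { double r, y, g, c, b, m; };

Mat blackAndWhite(const Mat& bgr, const BWSliders& s)
{
    Mat gray(bgr.size(), CV_8UC1);
    for (int i = 0; i < bgr.rows; ++i)
        for (int j = 0; j < bgr.cols; ++j) {
            Vec3b p = bgr.at<Vec3b>(i, j);
            double b = p[0], g = p[1], r = p[2];
            double mx = std::max(r, std::max(g, b));
            double mn = std::min(r, std::min(g, b));
            double md = r + g + b - mx - mn;
            // sliders of the primary hue (largest channel) and of the
            // secondary hue (mixture of the two largest channels)
            double wPrim, wSec;
            if (mx == r)      { wPrim = s.r; wSec = (md == g) ? s.y : s.m; }
            else if (mx == g) { wPrim = s.g; wSec = (md == r) ? s.y : s.c; }
            else              { wPrim = s.b; wSec = (md == g) ? s.c : s.m; }
            double v = mn + (md - mn) * wSec / 100.0
                          + (mx - md) * wPrim / 100.0;
            gray.at<uchar>(i, j) = saturate_cast<uchar>(v);
        }
    return gray;
}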

Thanks and regards!

2013-08-25 08:59:10 -0600 asked a question type and size of calibration pattern for most accurate results

Hi!

I want to use OpenCV for accurate measuring - so the first step should be the best camera calibration possible.
There are three types of camera calibration patterns:

  • chessboard
  • symmetric circles
  • asymmetric circles

Does anyone have experience with which type is the most accurate?
What is the advantage of the symmetric and asymmetric circle patterns, if any?

Another important parameter is the relative size of the pattern in the camera image. Are there any rules for the minimum size? For example, should the pattern cover 25% of the image, or something like that?
Is there a minimum radius in pixels for the circles / a minimum width in pixels for the squares in the resulting image?

Thanks!

2013-05-14 12:21:41 -0600 received badge  Teacher (source)
2013-05-06 11:16:59 -0600 received badge  Scholar (source)
2013-05-06 04:59:52 -0600 asked a question custom grayscale conversion

I'm trying to improve my edge detection and found out something quite logical: some edges are detected better in the red channel, others in the green, etc. I would like to use only one Canny call and not one on each of the three channels. When I use BGR2GRAY, the mixture of the channels is not optimal for my live camera image.

Is it possible to change the weight of each channel during the conversion? Or is there any known custom algorithm for this?
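For the weighting itself, cv::transform with a 1x3 matrix of channel weights seems to be one straightforward option - the weights below are placeholders that would need tuning:

#include <opencv2/opencv.hpp>

using namespace cv;

// bgr: input frame in OpenCV's default BGR channel order
Mat weightedGray(const Mat& bgr, double wB, double wG, double wR)
{
    // gray = wB*B + wG*G + wR*R, computed per pixel
    Mat weights = (Mat_<double>(1, 3) << wB, wG, wR);
    Mat gray;
    transform(bgr, gray, weights);
    return gray;   // single-channel, same depth as the input
}

// usage with made-up weights: Mat gray = weightedGray(frame, 0.1, 0.3, 0.6);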

Thanks!

2013-05-06 01:49:38 -0600 answered a question opencv/findContour crashes, v2.4.4, MS visual studio 2010. edit:damaged head.

Here is some code for using findContours:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

// ...

vector<vector<Point>> contours;
Mat imgGray, imgBinary;

// m is your input image (OpenCV stores images in BGR order)

cvtColor(m, imgGray, CV_BGR2GRAY);

double threshValue = 128;   // tune this value for your images
threshold(imgGray, imgBinary, threshValue, 255, CV_THRESH_BINARY);

// optional clean-up: erode, dilate, ...

// note: findContours modifies the input image
findContours(imgBinary, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

You should try to get this working, because findContours is a fundamental function that you will use often.
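To check the result visually, you can draw the found contours back onto the image, for example:

Mat vis = m.clone();
drawContours(vis, contours, -1, Scalar(0, 255, 0), 2);   // all contours in green
imshow("contours", vis);
waitKey(0);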

2013-05-06 01:29:50 -0600 commented answer Can't set resolution of video capture

Sorry, this only works on Windows... you are using Linux, as I realise now...

2013-05-06 01:27:20 -0600 answered a question Can't set resolution of video capture

I also had some problems with my Logitech webcam and OpenCV capturing. I then used the videoInput library and could set everything I needed. Give it a try!

2013-05-05 06:11:34 -0600 received badge  Supporter (source)
2013-05-04 14:51:12 -0600 answered a question opencv/findContour crashes, v2.4.4, MS visual studio 2010. edit:damaged head.

Try using a new variable of type Mat as the output of your threshold call; it looks like test (which is a clone of m) is not a binary image.
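Roughly like this (assuming the image is already grayscale; the threshold value 128 is just an example):

Mat binary;   // fresh output image instead of reusing test
threshold(test, binary, 128, 255, CV_THRESH_BINARY);
findContours(binary, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);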

2013-05-04 14:28:00 -0600 received badge  Student (source)
2013-05-04 13:18:09 -0600 asked a question measuring distance between two balls in millimeters - how to improve accuracy

Hi!

I'm currently learning OpenCV and my current task is to measure the distance between two balls lying on a plate. My next step is to compare several cameras and resolutions to get a feeling for how important resolution, noise, distortion etc. are and how strongly these parameters affect the accuracy. If the community is interested, I'm happy to share the results when they are ready! The camera is placed above the plate and uses a wide-angle lens. The width and height of the plate (1500 x 700 mm) and the radius of the balls (40 mm) are known.

My steps so far:

  1. camera calibration
  2. undistorting the image (the distortion is high due to the wide-angle lens)
  3. findHomography: I use the corner points of the plate as input (4 points in pixels in the undistorted image) and the corner points in millimeters (starting with 0,0 in the lower left corner, up to 1500,700 in the upper right corner)
  4. using HoughCircles to find the balls in the undistorted image
  5. applying perspectiveTransform to the circle center points => the centers now exist in millimeters (steps 3 and 5 are sketched in code after this list)
  6. calculating the distance of the two center points: d = sqrt((x1-x2)^2+(y1-y2)^2)
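Steps 3 and 5 condensed into code look roughly like this (a simplified sketch with placeholder names, not my actual program):

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

using namespace cv;

// imgCorners: plate corners in the undistorted image (pixels)
// mmCorners:  the same corners on the plate in mm, e.g. (0,0) .. (1500,700)
// c1px, c2px: ball centers found by HoughCircles (undistorted pixels)
double distanceMm(const std::vector<Point2f>& imgCorners,
                  const std::vector<Point2f>& mmCorners,
                  Point2f c1px, Point2f c2px)
{
    Mat H = findHomography(imgCorners, mmCorners);   // step 3

    std::vector<Point2f> px, mm;                     // step 5
    px.push_back(c1px);
    px.push_back(c2px);
    perspectiveTransform(px, mm, H);

    Point2f d = mm[0] - mm[1];                       // step 6
    return std::sqrt(d.x * d.x + d.y * d.y);
}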

The results: an error of around 4 mm at a distance of 300 mm and an error of around 25 mm at a distance of 1000 mm. But if I measure a rectangle which is printed on the plate, the error is smaller than 0.2 mm, so I guess the calibration and undistortion are working well.

I thought about this and figured out three possible reasons:

  1. findHomography was applied to points lying directly on the plate, whereas the center points of the balls should be measured at the equator height of the balls => how can I change the result of findHomography to account for this, i.e. to "move" the plane? The radius in mm is known.
  2. the error increases with the distance of the ball from the optical center, because the camera does not see the ball from directly above, so the center point in the 2D projection of the image is not the same as in the 3D world - it will be projected further towards the border of the image. => are there any geometrical operations I can apply to the found center to correct the value?
  3. during undistortion there is probably a loss of information, because I produce a new undistorted image and go back to pixel accuracy, although I have many floating point values in the distortion coefficients. Should I search for the balls in the distorted image and transform only the center points? I don't know the code for this task (see the sketch below).
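Regarding point 3: the call for mapping individual points seems to be undistortPoints; a minimal sketch, assuming cameraMatrix and distCoeffs are the results of the calibration:

// ball centers found by HoughCircles in the *distorted* image
std::vector<Point2f> distorted, undistorted;
distorted.push_back(Point2f(412.3f, 305.8f));   // placeholder value

// passing cameraMatrix again as the new camera matrix P keeps the
// output in pixel coordinates instead of normalized coordinates
undistortPoints(distorted, undistorted, cameraMatrix, distCoeffs,
                noArray(), cameraMatrix);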

I hope someone can help me improve this, and I hope the topic is interesting for other OpenCV beginners.

Thanks and best regards!

2013-05-03 02:20:04 -0600 answered a question Does the canny method apply Gaussian Blur?

I'm pretty sure that Canny() does not apply any blur to the image. You have to do it yourself before calling Canny().

Take a look at bilateralFilter as an alternative to GaussianBlur - it keeps the edges sharper.
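For example (gray is the single-channel input; kernel size and thresholds are placeholder values):

Mat blurred, edges;
GaussianBlur(gray, blurred, Size(5, 5), 1.5);   // smooth first...
// bilateralFilter(gray, blurred, 9, 75, 75);   // ...or edge-preserving instead
Canny(blurred, edges, 50, 150);                 // ...then detect edges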

2013-05-02 07:01:56 -0600 received badge  Editor (source)
2013-05-02 06:44:45 -0600 asked a question calculate 3d pose of sphere based on 2d ellipse

Hello all,

at the moment I'm using OpenCV to detect balls in an image by finding the 2D ellipse of each ball.

I now need to find the exact position of the ball in world coordinates - I know the radius of the balls. If the ball lies close to the optical center of the camera, the position is quite accurate - but as the ball moves towards the border of the image, the error grows because of the perspective distortion of the camera - the ball is seen from the side, for example.

I read several papers about this problem and there seems to be a solution (http://www.mie.utoronto.ca/labs/ciml/projects/rob_vision/Tony.pdf on page 4), but I'm not sure how to implement it.

My current idea (steps 1 and 2 are sketched in code below):

  1. remove the radial distortion from the image (I have the intrinsic parameters of the camera)
  2. detect the ellipse in the image
  3. use some magic maths as described in the paper to estimate the 3D pose
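Steps 1 and 2 might look roughly like this (mask is assumed to be a binary segmentation of the ball; cameraMatrix and distCoeffs come from the intrinsic calibration):

Mat undist;
undistort(img, undist, cameraMatrix, distCoeffs);   // step 1

// step 2: fit an ellipse to the ball's contour in the binary mask
std::vector<std::vector<Point> > contours;
findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
if (!contours.empty() && contours[0].size() >= 5) {   // fitEllipse needs >= 5 points
    RotatedRect ell = fitEllipse(contours[0]);
    // ell.center, ell.size and ell.angle describe the projected ellipse
}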

Does anyone have experience with this? I hope someone can point me in the right direction.

Thanks a lot!