gandalfsaxe's profile - activity

2019-05-16 03:37:03 -0600 received badge  Notable Question (source)
2018-07-20 11:06:34 -0600 received badge  Popular Question (source)
2016-10-07 12:19:17 -0600 received badge  Scholar (source)
2016-06-18 03:50:53 -0600 commented question Calibration with findCirclesGrid - trouble with pattern width/height

Alright, thanks.

2016-06-17 09:04:14 -0600 received badge  Editor (source)
2016-06-17 09:02:26 -0600 commented question Calibration with findCirclesGrid - trouble with pattern width/height

Yes, you're right, sorry. I knew that one of them was correct; I just forgot which one when I posted the question (fixed now). Actually my questions are: 1. Why is there no correct way to detect the pattern in the wide orientation of the image? 2. If I choose the tall orientation of the image, does detection only work if the actual printout is also held in the tall orientation in front of the camera?

2016-06-17 04:21:47 -0600 received badge  Enthusiast
2016-06-15 12:57:05 -0600 asked a question Calibration with findCirclesGrid - trouble with pattern width/height

Hi,

I'm calibrating a GoPro Hero 4. I have already calibrated it using a chessboard pattern, and I would like to see if I can get a better calibration with the circle grid instead. However, I'm having trouble determining the correct pattern width/height.

Here is the pattern I'm using (generated with the included gen_pattern.py): image description

I'm using the command: findCirclesGrid(img, (width, height), flags=cv2.CALIB_CB_ASYMMETRIC_GRID).

I have tried every combination of the ways you could count the width (14 or 7) and the height (5 or 10), and I also tried rotating the image. The pattern was detected with the following settings:

Wide image, width=7, height=10: image description

This looks correct at first glance, but judging by how the dots are connected, it's wrong.

Wide image, width=5, height=14: image description

Clearly wrong.

Tall image, width=7, height=10: image description

This is actually correct.

Tall image, width=5, height=14: image description

Again, clearly wrong.

So my questions are: 1. Why is there no correct way to detect the pattern in the wide orientation of the image? 2. If I choose the tall orientation of the image, does detection only work if the actual printout is also held in the tall orientation in front of the camera?

It seems somewhat less stable to me than the chessboard, but I could be wrong.

What am I doing wrong? Thanks in advance.
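For reference, here is a minimal sketch of the full call (assuming the OpenCV 3.x Python bindings; the file names are placeholders, and the 7x10 pattern size is simply the combination that worked for me in the tall orientation):

    import cv2

    # Load the photo of the printed pattern (placeholder file name).
    img = cv2.imread("pattern_photo.jpg", cv2.IMREAD_GRAYSCALE)

    # patternSize is (points per row, points per column); 7x10 is the
    # combination that detected correctly in the tall orientation.
    found, centers = cv2.findCirclesGrid(
        img, (7, 10), flags=cv2.CALIB_CB_ASYMMETRIC_GRID)

    if found:
        # drawChessboardCorners also visualizes circle-grid detections.
        vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
        cv2.drawChessboardCorners(vis, (7, 10), centers, found)
        cv2.imwrite("detected.jpg", vis)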

2016-03-08 13:26:33 -0600 commented answer Entering >5 points into "findEssentialMat"?

That's what I was hoping for, thanks. Do you know how LMedS differs from RANSAC, and what the pros/cons of each are?

2016-03-07 08:36:42 -0600 asked a question Entering >5 points into "findEssentialMat"?

Hi,

I'm trying to figure out the relative pose (R, t matrices) of two cameras using two synchronized videos of the same scene viewed from different positions. As input I've selected points of the same object in corresponding frames of both videos, and findEssentialMat needs at least 5.

My question is: what happens if I feed in more than 5 points? Ideally I'd like to feed in many more than 5 points and get a more accurate estimate than 5 points alone would give. Otherwise I'd consider estimating R, t from many different sets of 5 points and then taking the median or mean as the best estimate.
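In other words, something like this (a minimal sketch, assuming the OpenCV 3.x Python bindings; the point coordinates and the focal length / principal point values below are made-up placeholders standing in for my selected correspondences and the chessboard calibration results):

    import numpy as np
    import cv2

    # Placeholder correspondences: Nx2 pixel coordinates of the same object
    # points selected in matching frames of the two videos (N >= 5).
    pts1 = np.float32([[321, 184], [402, 190], [388, 255],
                       [296, 247], [350, 310], [410, 330]])
    pts2 = np.float32([[310, 200], [395, 205], [378, 270],
                       [288, 262], [342, 324], [400, 345]])

    # Placeholder intrinsics (focal length and principal point) from the
    # earlier chessboard calibration.
    focal, pp = 800.0, (640.0, 360.0)

    # findEssentialMat accepts any number of correspondences >= 5; with
    # RANSAC it fits 5-point models internally and keeps the one with the
    # most inliers, reported in the mask.
    E, mask = cv2.findEssentialMat(pts1, pts2, focal=focal, pp=pp,
                                   method=cv2.RANSAC, prob=0.999,
                                   threshold=1.0)

    # Decompose E into the relative rotation R and unit-length translation t,
    # reusing the inlier mask from the robust fit.
    retval, R, t, mask = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=pp,
                                         mask=mask)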

Thanks.

2016-02-09 04:00:18 -0600 commented question Python reference documentation for 3.x ?

I noticed the same. Python documentation is there for 3.0 beta, but not 3.1.

This was also asked on October 24th 2015, but no answer yet.

2016-02-09 04:00:18 -0600 answered a question Python reference documentation for 3.x ?

I noticed the same. Python documentation is there for 3.0 beta, but not 3.1.

2016-02-09 03:54:41 -0600 received badge  Supporter (source)