timbo's profile - activity

2015-07-11 23:08:16 -0600 commented question I am getting less calibration error with simpler models. How, and am I doing it wrong?

Fair enough. I will close it. I just thought it might be more relevant here.

2015-07-11 04:06:29 -0600 answered a question 6-Channel image and 3D Reconstruction and visualization

It seems to me like you have a point cloud. If you want to use OpenCV as opposed to OpenGL or the Point Cloud Library (PCL), you might find these links useful:

If this is what you want: https://www.youtube.com/watch?v=OR9XTCUNau0

Try looking here (at the sample code, which is in English): http://opencv.jp/opencv2-x-samples/point-cloud-rendering
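
If you do stay within OpenCV, the viz module (present in OpenCV 3.0, but only when OpenCV is built with VTK support) can render a cloud directly. A minimal sketch with dummy random points, just to show the shape of the API:

#include <opencv2/core.hpp>
#include <opencv2/viz.hpp>

int main()
{
    // 1000 random XYZ points so the sketch runs stand-alone;
    // in practice this Mat would hold your reconstructed cloud.
    cv::Mat cloud(1000, 1, CV_32FC3);
    cv::randu(cloud, cv::Scalar::all(-1.0f), cv::Scalar::all(1.0f));

    cv::viz::Viz3d window("point cloud");
    window.showWidget("cloud", cv::viz::WCloud(cloud, cv::viz::Color::white()));
    window.spin();   // interactive loop; blocks until the window is closed
    return 0;
}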

2015-07-11 04:06:29 -0600 asked a question I am getting less calibration error with simpler models. How, and am I doing it wrong?

Crosspost at SO: http://stackoverflow.com/questions/31...

I'm getting results I don't expect when I use OpenCV 3.0 calibrateCamera. Here is my algorithm:

  1. Load in 30 image points
  2. Load in 30 corresponding world points (coplanar in this case)
  3. Use points to calibrate the camera, just for un-distorting
  4. Un-distort the image points, but don't use the intrinsics (coplanar world points, so intrinsics are dodgy)
  5. Use the undistorted points to find a homography, transforming to world points (can do this because they are all coplanar)
  6. Use the homography and perspective transform to map the undistorted points to the world space
  7. Compare the original world points to the mapped points
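
In OpenCV terms, steps 3 to 7 look roughly like this (a sketch with stand-in names: world_points, im_points and img_size are assumed to exist; the flags line is where the different models are toggled):

// Steps 3-7 as OpenCV calls. Assumes im_points (std::vector<cv::Point2f>)
// and world_points (std::vector<cv::Point3f>, coplanar with z = 0) hold the
// 30 correspondences, and img_size is the cv::Size of the source image.
std::vector<std::vector<cv::Point3f>> world{ world_points };
std::vector<std::vector<cv::Point2f>> image{ im_points };
cv::Mat K, dist;
std::vector<cv::Mat> rvecs, tvecs;

// 3. Calibrate, but only to recover the distortion coefficients.
int flags = cv::CALIB_FIX_ASPECT_RATIO;   // add or remove flags per model
cv::calibrateCamera(world, image, img_size, K, dist, rvecs, tvecs, flags);

// 4. Un-distort the image points; passing K as P keeps pixel coordinates.
std::vector<cv::Point2f> undistorted;
cv::undistortPoints(im_points, undistorted, K, dist, cv::noArray(), K);

// 5. Homography from the undistorted points to the (coplanar) world plane.
std::vector<cv::Point2f> world2d;
for (const auto& p : world_points) world2d.emplace_back(p.x, p.y);
cv::Mat H = cv::findHomography(undistorted, world2d);

// 6. Map the undistorted points into world space.
std::vector<cv::Point2f> mapped;
cv::perspectiveTransform(undistorted, mapped, H);

// 7. Compare mapped against world2d, e.g. RMS of per-point distances.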

The points I have are noisy and cover only a small section of the image. There are 30 coplanar points from a single view, so I can't recover the camera intrinsics, but I should be able to get distortion coefficients and a homography to create a fronto-parallel view.

As expected, the error varies depending on the calibration flags. However, it varies in the opposite direction to what I expected. If I allow all the variables to adjust, I would expect the error to come down. I am not saying I expect a better model; I actually expect over-fitting, but that should still reduce the error, given that my test set is also my training set. What I see instead is that the fewer variables I use, the lower my error. The best result is with a straight homography.

The code doesn't appear to have bugs; I've used "better" points and it works perfectly. I want to emphasize that the solution here can't be to use better points or perform a better calibration; the whole point of the exercise is to see how the various calibration models respond to different qualities of calibration data. The scenarios I'm looking at often have distorted lenses and only a few points in a single region to calibrate on.

Any ideas?

I've included code here. The different models are achieved by commenting out lines as indicated.

// Load image points
std::vector<cv::Point2f> im_points;
im_points.push_back(cv::Point2f(1206, 1454));
im_points.push_back(cv::Point2f(1245, 1443));
im_points.push_back(cv::Point2f(1284, 1429));
im_points.push_back(cv::Point2f(1315, 1456));
im_points.push_back(cv::Point2f(1352, 1443));
im_points.push_back(cv::Point2f(1383, 1431));
im_points.push_back(cv::Point2f(1431, 1458));
im_points.push_back(cv::Point2f(1463, 1445));
im_points.push_back(cv::Point2f(1489, 1432));
im_points.push_back(cv::Point2f(1550, 1461));
im_points.push_back(cv::Point2f(1574, 1447));
im_points.push_back(cv::Point2f(1597, 1434));
im_points.push_back(cv::Point2f(1673, 1463));
im_points.push_back(cv::Point2f(1691, 1449));
im_points.push_back(cv::Point2f(1708, 1436));
im_points.push_back(cv::Point2f(1798, 1464));
im_points.push_back(cv::Point2f(1809, 1451));
im_points.push_back(cv::Point2f(1819, 1438));
im_points.push_back(cv::Point2f(1925, 1467));
im_points.push_back(cv::Point2f(1929, 1454));
im_points.push_back(cv::Point2f(1935, 1440));
im_points.push_back(cv::Point2f(2054, 1470));
im_points.push_back(cv::Point2f(2052, 1456));
im_points.push_back(cv::Point2f(2051, 1443));
im_points.push_back(cv::Point2f(2182, 1474));
im_points.push_back(cv ...
2015-07-11 04:06:28 -0600 commented question How to find a "contour" of 3d object?

I agree with Mathieu, and I'm giving this an upvote as interesting. How would you even expect the contour to be stored in memory? As a set of surface normals at each voxel? Or are you happy with just a 3D bitmap, with edge voxels set to 1? Depending on how detailed you want it, this is much more difficult than the discrete 2D case, where a chain code or similar can do the job.
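
For the simplest of those options, a boundary pass over a dense occupancy grid would do. A sketch, assuming the grid is a flat array in x-fastest order, where a voxel counts as boundary if it is occupied and any 6-neighbour is empty:

#include <cstdint>
#include <vector>

// Mark boundary voxels: occupied voxels with at least one empty 6-neighbour.
// occ is a dense nx*ny*nz grid, x-fastest; out-of-bounds counts as empty.
std::vector<std::uint8_t> boundaryVoxels(const std::vector<std::uint8_t>& occ,
                                         int nx, int ny, int nz)
{
    auto at = [&](int x, int y, int z) -> std::uint8_t {
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz)
            return 0;
        return occ[(z * ny + y) * nx + x];
    };
    const int nbrs[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    std::vector<std::uint8_t> out(occ.size(), 0);
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx; ++x) {
                if (!at(x, y, z)) continue;          // only occupied voxels
                for (const auto& d : nbrs)
                    if (!at(x + d[0], y + d[1], z + d[2])) {
                        out[(z * ny + y) * nx + x] = 1;
                        break;
                    }
            }
    return out;
}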