OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018.

Coordinate frame transformation from NED to OpenCV convention, and the Pinhole Camera Model
http://answers.opencv.org/question/233204/coordinate-frame-transformation-from-ned-to-opencv-convention-and-the-pinhole-camera-model/

Hello,
I'm working on an SFM module where I have camera poses with respect to a world coordinate system (in NED). I have the calibrated camera intrinsics matrix, and with the R,T of each camera with respect to the world I compute the projection matrix (intrinsics matrix * [R|T]) for each image, as well as the camera position. However, I'm not sure this projection matrix is right, since the R|T is in NED and OpenCV uses a different convention. I'm trying to use cv::sfm::triangulatePoints, but I'm not getting consistent results.
How will this affect the projection matrix, and is the conversion of the poses to the OpenCV frame necessary?
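One way to see the convention issue: if the NED-side pose maps world points into an FRD body frame (x forward, y right, z down, the usual companion of NED), a fixed axis permutation re-expresses it in OpenCV's right-down-forward camera axes, and only then does K[R|t] make sense as an OpenCV projection matrix. A minimal numpy sketch; the permutation and the mounting assumption (optical axis along body x) are illustrative, not taken from the question:

```python
import numpy as np

# Assumed mounting: the optical axis coincides with the body x (forward)
# axis. The actual permutation depends on how your camera is mounted.
C_frd_to_cv = np.array([[0., 1., 0.],   # cv x = body y (right)
                        [0., 0., 1.],   # cv y = body z (down)
                        [1., 0., 0.]])  # cv z = body x (forward)

def pose_frd_to_opencv(R_frd, t_frd):
    """Re-express a world->camera rotation/translation in OpenCV axes."""
    return C_frd_to_cv @ R_frd, C_frd_to_cv @ t_frd

def projection_matrix(K, R_cv, t_cv):
    """P = K [R|t], with the pose already in the OpenCV convention."""
    return K @ np.hstack([R_cv, t_cv.reshape(3, 1)])

# Made-up intrinsics; identity pose for simplicity. A point 10 m ahead of
# the camera (body x) should project onto the principal point once the
# frame is converted (cv z is the forward axis).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
R_cv, t_cv = pose_frd_to_opencv(np.eye(3), np.zeros(3))
P = projection_matrix(K, R_cv, t_cv)
p = P @ np.array([10., 0., 0., 1.])  # homogeneous world point, 10 m forward
print(p[:2] / p[2])  # → [320. 240.]
```

If the poses are left in the NED/FRD convention, the z used for the perspective division is the "down" component rather than depth along the optical axis, which is consistent with triangulation going wrong.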
leafdet (Sun, 02 Aug 2020 12:28:00 -0500)

Hey, i want to dewarp a fisheye image to normal fisheye model.
http://answers.opencv.org/question/219011/hey-i-want-to-dewarp-a-fisheye-image-to-normal-fisheye-model/

I don't know how to do it. Is there any code available with which I can do this, or can anyone help me with this task? Thanks.

raghav (Mon, 30 Sep 2019 02:43:31 -0500)

3D reconstruction (SFM) with multi-lens camera system (instead of pinhole camera model)
http://answers.opencv.org/question/173406/3d-reconstruction-sfm-with-multi-lens-camera-system-instead-of-pinhole-camera-model/

3D reconstruction techniques (especially SFM algorithms) are usually formulated around pinhole camera models.
The state of the art of these SFM techniques is to find where the rays of 2D-3D correspondences from two different cameras intersect in object space.
This presupposes a pinhole camera model, in which the ray of a 2D-3D correspondence is just a straight line.
But real-world cameras often use multi-lens systems, for which you can't easily recover the ray of a 2D-3D correspondence.
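One common way around this is a generalized (non-central) camera model: drop the pinhole assumption and instead store, for each pixel, a calibrated 3D ray (origin plus unit direction), then triangulate by intersecting the stored rays. A toy numpy sketch of that data structure; the table here is filled from a pinhole just to have plausible numbers, whereas a real multi-lens rig would fill it from a per-pixel calibration:

```python
import numpy as np

h, w = 4, 6  # tiny synthetic sensor

# Generalized camera model: one calibrated ray per pixel (origin + unit
# direction). Non-central rigs can have a different origin per pixel.
fx = fy = 100.0
cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
ray_origin = np.zeros((h, w, 3))
ray_dir = np.empty((h, w, 3))
for v in range(h):
    for u in range(w):
        d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        ray_dir[v, u] = d / np.linalg.norm(d)

def pixel_to_ray(u, v):
    """Look up the calibrated ray for a pixel; no straight-line model assumed."""
    return ray_origin[v, u], ray_dir[v, u]

o, d = pixel_to_ray(3, 2)
print(o, d)
```

With this representation, the SFM pipeline is unchanged except that "back-project pixel to ray" becomes a table lookup instead of an application of K⁻¹.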
**My question is:** *How does the SFM technique work with such multi-lens camera systems?*

mirnyy (Fri, 01 Sep 2017 06:24:13 -0500)

Bad behavior using cv::undistort with newCameraMatrix
http://answers.opencv.org/question/65826/bad-behavior-using-cvundistort-with-newcameramatrix/

I calibrated a wide-angle camera (~120° horizontal, ~110° vertical) using the pinhole camera model. The result is quite good, but large areas of the image are discarded.
![image description](/upfiles/14363801098133009.png)
So I'm trying to use the getOptimalNewCameraMatrix function (with **alpha = 1**) to calculate the newCameraMatrix and then pass it to cv::undistort, but the resulting frame is really distorted.
![image description](/upfiles/14363800349043121.png)
The code I use is:

    // Calibrate: recover intrinsics and distortion coefficients
    data.rms = calibrateCamera(objectPoints, imagePoints, data.size, data.cameraMatrix,
                               data.distCoeffs, data.rvecs, data.tvecs, settings.calibFlags);

    if(!settings.cropCorrected) {
        Rect roi;
        // alpha = 1: keep all source pixels; roi receives the all-valid-pixels rectangle
        data.newCameraMatrix = getOptimalNewCameraMatrix(data.cameraMatrix, data.distCoeffs,
                                                         data.size, 1, data.size, &roi, true);
    }

    // Undistort using the new camera matrix
    cv::undistort(in, corrected, data.cameraMatrix, data.distCoeffs, data.newCameraMatrix);
This is simple, but something is not working. I have already tried using **roi** to crop the inner area, in the hope that it would still be bigger than the result without the additional matrix, but in fact it isn't.
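For what it's worth, cropping the undistorted frame to the returned roi is just an array slice. A minimal sketch in Python/numpy with made-up roi values; in OpenCV a rect is ordered (x, y, width, height):

```python
import numpy as np

# Hypothetical undistorted frame and the valid-pixels rectangle that
# getOptimalNewCameraMatrix would return through its roi output argument.
corrected = np.zeros((480, 640, 3), dtype=np.uint8)
x, y, w, h = 40, 30, 560, 420   # made-up values for illustration

# Slice rows first (y), then columns (x): rects are (x, y, width, height)
cropped = corrected[y:y + h, x:x + w]
print(cropped.shape)  # → (420, 560, 3)
```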
How can I resolve this situation?

Javelin (Wed, 08 Jul 2015 13:30:17 -0500)

Pinhole calibration model reduces FOV, should i use Fisheye?
http://answers.opencv.org/question/65809/pinhole-calibration-model-reduces-fov-should-i-use-fisheye/

I have a wide-angle camera. The specifications say it covers around 150°, but to me it seems more like ~100° horizontal and ~80° vertical. Anyway, once the camera is calibrated, these fields of view are reduced by ~20 degrees each.
This is true both for the values returned by the cv::calibrationMatrixValues function and for the rectified frames, in which relatively large outer portions are cropped.
Is this behavior normal?
If it is, can I avoid this problem using the newer Fisheye camera model?
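For reference, the per-axis FOV that calibrationMatrixValues reports for a pinhole matrix reduces to a simple arctangent formula, so it can also be computed by hand. A sketch with made-up intrinsics:

```python
import math

def pinhole_fov_deg(focal_px, size_px, principal_px):
    """Field of view of a pinhole camera along one axis, in degrees.
    Splitting at the principal point handles off-center principal points."""
    return math.degrees(math.atan2(principal_px, focal_px)
                        + math.atan2(size_px - principal_px, focal_px))

# Made-up intrinsics for illustration
fx, fy = 400.0, 400.0
w, h = 1280, 720
cx, cy = w / 2.0, h / 2.0
print(pinhole_fov_deg(fx, w, cx), pinhole_fov_deg(fy, h, cy))
```

Note that this perspective formula understates the FOV for the fisheye model, whose projection is not perspective, which is presumably part of why there is no fisheye::calibrationMatrixValues.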
With Fisheye, is it possible to know the measured FOVs? I cannot find anything like fisheye::calibrationMatrixValues.

Javelin (Wed, 08 Jul 2015 09:39:44 -0500)

How to find small circle/hole in image?
http://answers.opencv.org/question/44830/how-to-find-small-circlehole-in-image/

Hello...
I'm new to the CV scene. My question is how to find all the holes in an image (PCB pads, for example). The size is known, and I want only correctly sized holes.
How should I approach that? (probably using Python)
Thanks
[pcb.jpg](/upfiles/14137355394868891.jpg)
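The usual OpenCV tool for this is cv2.HoughCircles, with minRadius/maxRadius set around the known hole size. As a self-contained illustration of the "known size" idea, here is a numpy-only matched-filter sketch that finds a dark disk of a given radius in a synthetic binary image (hypothetical data, not the pcb.jpg above):

```python
import numpy as np

# Synthetic image: one dark hole of known radius on a bright board
h, w, r = 64, 64, 5
img = np.full((h, w), 255.0)
yy, xx = np.mgrid[0:h, 0:w]
img[(yy - 40) ** 2 + (xx - 22) ** 2 <= r * r] = 0.0  # the "hole" at (40, 22)

# Matched filter: correlate the inverted image with a disk of the known
# radius; the response peaks where a full disk of dark pixels sits.
# (With real images you would threshold first, or just use
# cv2.HoughCircles with minRadius/maxRadius set around the known size.)
ky, kx = np.mgrid[-r:r + 1, -r:r + 1]
kernel = ((ky ** 2 + kx ** 2) <= r * r)

inv = 255.0 - img
resp = np.zeros_like(inv)
for dy in range(-r, r + 1):
    for dx in range(-r, r + 1):
        if kernel[dy + r, dx + r]:
            resp += np.roll(np.roll(inv, -dy, axis=0), -dx, axis=1)

cy, cx = np.unravel_index(np.argmax(resp), resp.shape)
print(cy, cx)  # → 40 22
```

Because the kernel radius matches the hole radius, the response is maximal only when the template sits exactly on a correctly sized hole, which is how the "only correct sized holes" constraint is enforced.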
bonny (Sun, 19 Oct 2014 11:19:43 -0500)

The coordinate system of pinhole camera model
http://answers.opencv.org/question/31470/the-coordinate-system-of-pinhole-camera-model/

Recently I have been studying the pinhole camera model for several days, but I was confused by the model provided by OpenCV and by "Multiple View Geometry in Computer Vision", the well-known textbook.
I know that the following figure shows a simplified model which switches the positions of the image plane and the camera frame, basically for better illustration and understanding. Taking the principal point (u0, v0) into account, the relation between the two frames is
x = f(X/Z) + u0 and
y = f(Y/Z) + v0.
![image description](/upfiles/1397050375379081.png)
However, I was really confused, because normally the image coordinate system takes the form of a 4th-quadrant coordinate system, as in the following figure!
Can I directly substitute the (x, y) from the following definition into the above "equivalent" pinhole model? To me this step is not really convincing.
![image description](/upfiles/13970504447802913.gif)
Besides, if an object lies in the (+X, +Y) quadrant of the camera coordinate system (with Z > f, of course), then in the equivalent model it should appear on the right half of the image plane. However, in an image taken by a real camera, such an object is supposed to appear on the left half. Therefore, this model does not seem reasonable to me.
Finally, I tried to derive the relations based on the original model, shown in the following figure.
![image description](/upfiles/13970504813232063.png)
The result is
x1 = -f(X/Z) and
y1 = -f(Y/Z).
Then I tried to find the relation between the (x2, y2) coordinate system and the camera coordinate system. The result is
x2 = -f(X/Z) + u0 and
y2 = -f(Y/Z) + v0.
Between the (x3, y3) coordinate system and the camera coordinate system, the result is
x3 = -f(X/Z) + u0 and
y3 = f(Y/Z) + v0.
No matter which coordinate system I tried, none of them takes the form
x = f(X/Z) + u0 and
y = f(Y/Z) + v0,
which is the one provided by some CV textbooks.
Besides, the projection results in the (x2, y2) and (x3, y3) coordinate systems are also not reasonable, for the same reason: an object in the (+X, +Y, +Z) region of the camera coordinate system should appear on the left half of the image taken by a camera.
Could anyone point out what I have misunderstood? I will keep trying the derivation and will post the answer once someone helps me figure this out.
Thank you in advance!!
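As a numeric sanity check of the sign question above, the sketch below projects a point with the physical (behind-the-pinhole) model and with the frontal "equivalent" model; the two agree once the physical image is rotated 180° about the principal point, which is exactly the flip the equivalent model builds in (plain Python, made-up numbers):

```python
# Sanity check: the physical pinhole image (formed behind the pinhole,
# hence the negated terms) is the frontal "equivalent" image rotated
# 180 degrees about the principal point. Made-up numbers for illustration.
f, u0, v0 = 2.0, 320.0, 240.0
X, Y, Z = 1.0, 0.5, 4.0   # a point in the (+X, +Y, Z > f) region

# Physical model (the asker's x2, y2): projection onto the plane behind
# the pinhole, expressed with the principal-point offset.
x2 = -f * X / Z + u0
y2 = -f * Y / Z + v0

# Frontal ("equivalent") model used by OpenCV and the textbooks.
x = f * X / Z + u0
y = f * Y / Z + v0

# Rotating the physical image 180 degrees about (u0, v0) recovers the
# frontal coordinates, so the two models describe the same picture.
print(2 * u0 - x2 == x and 2 * v0 - y2 == y)  # → True
```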
AlexAlexofNTU (Wed, 09 Apr 2014 08:37:31 -0500)