2013-07-28 08:34:07 -0600 | received badge | ● Scholar (source) |
2013-05-23 03:12:45 -0600 | asked a question | Check if my projection matrices are correct I used the OpenCV functions stereoCalibrate and stereoRectify to obtain the projection matrices of my cameras - that means I worked in Euclidean geometry. Now I want to check whether these projection matrices are correct. I found out that I can't use the fundamental matrix for this calculation, because it belongs to a different kind of geometry (projective, though I am not sure). I could transform that projective geometry into a Euclidean one, but isn't there an easier way to check them? I am new to computer vision, so I am sorry if I wrote something incorrectly or if something isn't clear to me. If that happened, please correct me. And if someone can clearly explain to me the difference between these geometries (I was searching, but I think some information is still missing for me), I would be grateful :) |
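One simple sanity check that stays entirely in Euclidean terms is to build the projection matrix as P = K [R | t], project a known 3D point, and verify the pixel lands where a hand calculation says it should. A minimal NumPy sketch - the intrinsics and pose below are made-up placeholders, not values from the post:

```python
import numpy as np

# Hypothetical intrinsics and pose -- substitute your own calibration output.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # camera aligned with the world frame
t = np.zeros((3, 1))               # camera at the world origin

# Projection matrix: P = K [R | t]
P = K @ np.hstack([R, t])

# Project a known 3D point given in homogeneous coordinates...
X = np.array([0.1, -0.2, 2.0, 1.0])
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]

# ...and compare with the expected pixel. For this identity pose:
# u = 700 * 0.1 / 2.0 + 320 = 355, v = 700 * (-0.2) / 2.0 + 240 = 170
print(u, v)  # 355.0 170.0
```

If the reprojected pixels of your calibration-target corners land close to the detected corners, the projection matrices are consistent with the data.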
2013-05-23 02:13:55 -0600 | commented answer | Camera calibration - getting bad rectified image I found out what the problem was: when I was using the stereoRectify() method from OpenCV I accidentally set the alpha parameter to 0, which means the rectified images are zoomed so that only the valid pixels are visible. I also had to experiment with the flags of the stereoCalibrate method, because I don't know the parameters of my cameras. Thank you for your answer. |
2013-05-23 01:57:29 -0600 | received badge | ● Supporter (source) |
2013-05-22 03:29:51 -0600 | asked a question | Camera calibration - getting bad rectified image Hello, I am working on a project in which I have to calibrate my cameras, but the problem is that after rectifying the images following this sample I get bad results - the images are zoomed several times. What could be the problem? Could someone help me, please? Here is a link to the left and right camera views and the rectified image. |
2013-05-21 09:57:28 -0600 | commented answer | Getting projection matrix thank you for your response. I would like to transform a 3D point from the real world to a 2D image point using the projection matrix, but first I wanted to gain some knowledge about computer vision and epipolar geometry. As I said, I used the OpenCV methods stereoCalibrate, stereoRectify and findChessboardCorners and obtained the projection matrices of the cameras. Now I am stuck, because after rectifying I get two images which are zoomed several times. I really don't know what that means - whether my calibration was correct or not. |
2013-05-20 10:56:28 -0600 | received badge | ● Editor (source) |
2013-05-20 09:59:14 -0600 | commented question | Extrinsic.yml meanings hello, I am working on the same problem; could I ask how you got these parameters from stereo vision? |
2013-05-20 06:05:07 -0600 | asked a question | Getting projection matrix at first I want to apologize for my bad English. I am really new to OpenCV and to computer vision. I tried to learn the theory of image processing, but some points are still missing for me. I learned that the projection matrix transforms a 3D point into a 2D point - am I right? The essential matrix gives me information about the rotation between two cameras, and the fundamental matrix gives information about the relationship between a pixel in one image and the corresponding pixel in the other image. The homography matrix relates the coordinates of a pixel in two images (is that correct?). What is the difference between the fundamental matrix and the homography matrix? Do I need all these matrices to get the projection matrix? I am new to this, so please explain it to me simply if you can. Thanks for your help. UPDATE: I found out that I can somehow get the projection matrix from the fundamental matrix and from the homography matrix, but I don't really understand this algorithm. I read that with the stereoCalibrate() method I can get the intrinsic and extrinsic parameters. Are these parameters enough to get the projection matrix? What is the formula for the projection matrix? If the first camera is at the origin [0,0,0] of the coordinate system, is it necessary to compute the projection matrix of the first camera? |
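To the questions in the UPDATE: yes, the intrinsics K and extrinsics (R, T) are enough - the formula is P = K [R | T], and a first camera placed at the origin has the identity pose, so P1 = K1 [I | 0]. A NumPy sketch with placeholder values (K1, K2, R, T below are assumptions standing in for real stereoCalibrate output):

```python
import numpy as np

# Placeholder calibration results -- substitute the output of stereoCalibrate.
K1 = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
K2 = np.array([[710.0, 0.0, 315.0], [0.0, 710.0, 245.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                          # rotation of camera 2 w.r.t. camera 1
T = np.array([[-0.06], [0.0], [0.0]])  # translation of camera 2 w.r.t. camera 1

# First camera at the world origin: P1 = K1 [I | 0]
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
# Second camera: P2 = K2 [R | T]
P2 = K2 @ np.hstack([R, T])

print(P1.shape, P2.shape)  # (3, 4) (3, 4)
```

So the first camera's projection matrix is still needed (every camera has one), but when that camera defines the world frame it reduces to the trivial form K1 [I | 0].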