2014-11-09 04:36:22 -0600 | received badge | ● Taxonomist |
2014-09-18 09:55:11 -0600 | commented question | In Zhang's calibration method, why does the planar pattern need to be placed in at least two different orientations? Why can't the checkerboard patterns all be parallel to each other? |
2014-09-18 09:53:19 -0600 | asked a question | In Zhang's calibration method, why does the planar pattern need to be placed in at least two different orientations? |
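On this question: in Zhang's method each view of the plane contributes two linear constraints on B = K^-T K^-1, built from the first two columns of the board-to-image homography H = K [r1 r2 t], and those columns depend only on the board's rotation, not its translation. A minimal NumPy sketch (all intrinsics and poses below are invented, illustrative values) showing that parallel boards contribute no new constraints:

```python
import numpy as np

# Hypothetical intrinsics (illustrative values, not from any real calibration).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def homography(K, R, t):
    # Board plane Z = 0 in the board frame: H = K [r1 r2 t]
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

def v(H, i, j):
    # Zhang's v_ij vector built from columns h_i, h_j of H
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def constraints(H):
    # Each view contributes two linear constraints on B = K^-T K^-1
    return np.vstack([v(H, 0, 1), v(H, 0, 0) - v(H, 1, 1)])

R1 = rot_x(0.3)
# Parallel boards: same rotation, only the translation differs
H_a = homography(K, R1, np.array([0.1, 0.0, 2.0]))
H_b = homography(K, R1, np.array([0.5, 0.2, 3.0]))
# A genuinely different orientation
H_c = homography(K, rot_y(0.6), np.array([0.0, 0.0, 2.5]))

rank_parallel = np.linalg.matrix_rank(np.vstack([constraints(H_a), constraints(H_b)]))
rank_tilted = np.linalg.matrix_rank(np.vstack([constraints(H_a), constraints(H_c)]))
print(rank_parallel, rank_tilted)
```

Two parallel boards leave the stacked constraint matrix at rank 2 (the second view repeats the first view's rows), while a rotated board raises it to 4, which is why the pattern must be shown in at least two different orientations.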
2014-09-18 03:13:00 -0600 | asked a question | homography matrix After calibration, I know each checkerboard's rotation and translation matrix. How can I calculate the homography matrix from one checkerboard to the camera, please? Cheers! |
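For a planar board at Z = 0 in its own frame, the board-to-image homography is H = K [r1 r2 t], where r1 and r2 are the first two columns of that board's rotation matrix and t is its translation; this follows Zhang's derivation. A NumPy sketch with made-up intrinsics and pose:

```python
import numpy as np

# Illustrative intrinsics and board pose (hypothetical values).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
a = 0.3  # rotation about the x axis
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a), np.cos(a)]])
t = np.array([0.1, -0.05, 2.0])

# Homography mapping board coordinates (X, Y) on the plane Z = 0 to pixels
H = K @ np.column_stack([R[:, 0], R[:, 1], t])

# Check: projecting (X, Y, 0) with the full pose gives the same pixel
X, Y = 0.05, 0.03
p_h = H @ np.array([X, Y, 1.0])
p_h /= p_h[2]
p_full = K @ (R @ np.array([X, Y, 0.0]) + t)
p_full /= p_full[2]
print(p_h[:2], p_full[:2])  # identical pixel coordinates
```

Dropping the third rotation column works because every board point has Z = 0, so the r3 column is always multiplied by zero.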
2014-09-17 09:33:41 -0600 | asked a question | calibration I think I am completely lost now. I don't know why we need to do calibration. After calibration is done, we should be able to convert any pixel in an image into millimetres in the world; that's why we do calibration, right? Using the OpenCV toolbox, we know the camera's intrinsic parameters and each calibrated chessboard's position (rotation and translation). But if we put a new chessboard in, capture an image of it, and detect its corners, we still can't recover the real-world position of each corner on the new chessboard. So what is the point of doing the calibration? As I understand it, once calibration is done, we should know the projection matrix from world to image, and be able to convert points in the image to the world. |
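One way to see the limitation described here: calibration gives the projection P = K [R | t], which maps world points to pixels, but a single pixel only determines a viewing ray, so an extra constraint (a known plane, a known depth, or a second camera) is needed to recover metric coordinates. A NumPy sketch with hypothetical intrinsics, showing two camera-frame points at different depths along the same ray landing on the same pixel:

```python
import numpy as np

# Hypothetical intrinsics (illustrative values).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

def project(K, Xc):
    # Pinhole projection of a point given in the camera frame
    p = K @ Xc
    return p[:2] / p[2]

Xc = np.array([0.2, -0.1, 2.0])   # a camera-frame point
pix_near = project(K, Xc)
pix_far = project(K, 3.0 * Xc)    # three times farther along the same ray
print(pix_near, pix_far)          # same pixel: depth is not observable from one view
```

So calibration alone cannot turn an arbitrary pixel into millimetres; it tells you the ray, and something else (a plane assumption or stereo) must pin down where on the ray the point lies.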
2014-09-15 08:56:00 -0600 | asked a question | triangulatePoints opencv function In the triangulatePoints function, projMatr1 is the 3x4 projection matrix of the first camera with respect to the second camera, and projMatr2 is the 3x4 projection matrix of the second camera with respect to the first camera, right? The function is as below: void triangulatePoints(InputArray projMatr1, InputArray projMatr2, InputArray projPoints1, InputArray projPoints2, OutputArray points4D) Parameters: |
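For what it's worth, the common reading of these parameters is that projMatr1 and projMatr2 are both world-to-image projection matrices expressed in one common frame, often the first camera's frame, so P1 = K1 [I | 0] and P2 = K2 [R | t]; treat that as an interpretation to verify against the docs, not an authoritative statement of the API. The underlying linear (DLT) triangulation can be sketched in NumPy (all matrices below are made up):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view gives two rows of A X = 0
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # homogeneous -> Euclidean scale

# Hypothetical setup: first camera at the origin, second translated along x
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([[-0.1], [0.0], [0.0]])  # baseline of 0.1 units
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

X_true = np.array([0.3, -0.2, 2.5, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_rec = triangulate(P1, P2, x1, x2)
print(X_rec[:3])  # recovers (0.3, -0.2, 2.5)
```

In the noise-free case the SVD null vector is the exact 3D point; with noisy detections it is the algebraic least-squares solution.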
2014-09-15 05:21:24 -0600 | received badge | ● Student (source) |
2014-09-15 05:12:33 -0600 | asked a question | stereo calibration and 3d reconstruction My purpose is to calculate the relative distance of two points on an object in 3D space. First I do stereo camera calibration to get the intrinsic parameters of the two cameras and the rotation and translation matrix between the two cameras. For the two points on the object, I know the image position in both images, the intrinsic parameters of the two cameras, and also the essential/fundamental matrix of the two cameras, but I don't know the extrinsic parameters of the object with respect to each camera. How can I calculate the 3D coordinates of the points on the object, please? |
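A sketch under the usual convention: take the first camera's frame as the world, so P1 = K1 [I | 0] and P2 = K2 [R | t] with (R, t) the stereo extrinsics; then no object pose is needed, you triangulate each correspondence and take the Euclidean distance. All numbers below are invented for illustration (NumPy, linear triangulation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear triangulation of one correspondence (noise-free sketch)
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Hypothetical calibration results; units follow the calibration (here mm)
K1 = np.array([[700., 0., 300.], [0., 700., 250.], [0., 0., 1.]])
K2 = np.array([[710., 0., 310.], [0., 710., 245.], [0., 0., 1.]])
R = np.eye(3)                        # pure translation between cameras, for simplicity
t = np.array([[-120.], [0.], [0.]])  # 120 mm baseline
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, t])

# Two object points in the first camera's frame (ground truth for the demo)
A3d = np.array([100., 50., 2000.])
B3d = np.array([160., 90., 2050.])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
Ar = triangulate(P1, P2, proj(P1, A3d), proj(P2, A3d))
Br = triangulate(P1, P2, proj(P1, B3d), proj(P2, B3d))
dist = np.linalg.norm(Ar - Br)
print(dist)  # distance between the two object points, same units as t
```

The recovered coordinates (and the distance) come out in the first camera's frame, in the same units as the baseline t.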
2014-09-03 11:09:03 -0600 | asked a question | extrinsic parameter of camera wrt world reference When I do camera calibration, I get the intrinsic and extrinsic parameters, but I realize that with the OpenCV library, the calibrated extrinsic parameters are those of each calibration pattern with respect to the camera. How can I know the extrinsic parameters of the camera with respect to the world reference? |
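If the world frame is taken to be one fixed calibration board, the usual fix is to invert the rigid transform: given board-to-camera (R, t), the camera pose in the board/world frame is R' = R^T and t' = -R^T t. A small NumPy sketch with an invented pose:

```python
import numpy as np

# Hypothetical board-to-camera extrinsics (e.g. the kind solvePnP would return)
a = 0.4
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a), np.cos(a), 0],
              [0, 0, 1]])
t = np.array([0.2, -0.1, 1.5])

# Camera with respect to the world (board) frame: invert the rigid transform
R_wc = R.T
t_wc = -R.T @ t   # this is also the camera centre in world coordinates

# Round trip: a world point -> camera frame -> back to world
Xw = np.array([0.3, 0.1, 0.0])
Xc = R @ Xw + t
Xw_back = R_wc @ Xc + t_wc
print(Xw_back)  # recovers (0.3, 0.1, 0.0)
```

If the world frame is something other than a board, you additionally need the known board-to-world transform and compose the two.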
2014-08-27 10:28:24 -0600 | asked a question | distance in the real world I have done camera calibration using 20 checkerboard images, and obtained the intrinsic parameters as well as the extrinsic parameter matrices: 20 rotation matrices and 20 translation matrices, one per checkerboard. My question is: if I put a new object in the real world and know the coordinates of two of its points in the image, is there any chance to know the distance between those two points on the object in the real world, and how? Cheers! |
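If the two points lie on a known plane (e.g. the object is flat and sits on the plane of one calibrated board), the pixel-to-plane homography H = K [r1 r2 t] can be inverted to recover metric board coordinates, and the distance follows; for points not on a known plane, a single image is not enough and stereo or a depth constraint is needed. A NumPy sketch with made-up values (units follow the translation, here mm):

```python
import numpy as np

# Hypothetical intrinsics and board pose (illustrative values)
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
a = 0.25
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a), np.cos(a)]])
t = np.array([10., -20., 500.])  # board pose in mm

H = K @ np.column_stack([R[:, 0], R[:, 1], t])  # board plane (Z = 0) -> pixels
H_inv = np.linalg.inv(H)                        # pixels -> board plane

def to_plane(pix):
    # Back-project a pixel onto the Z = 0 board plane (metric coordinates)
    w = H_inv @ np.array([pix[0], pix[1], 1.0])
    return w[:2] / w[2]

# Two known board points (mm), projected to pixels for the demo
Pa = np.array([30., 40.])
Pb = np.array([90., 120.])
pix = lambda P: (H @ np.append(P, 1.0))[:2] / (H @ np.append(P, 1.0))[2]
d = np.linalg.norm(to_plane(pix(Pa)) - to_plane(pix(Pb)))
print(d)  # 100.0 mm: sqrt(60^2 + 80^2)
```

The key assumption is the plane: once each pixel is constrained to Z = 0 in the board frame, the homography makes the pixel-to-millimetre mapping invertible.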