Triangulation with Ground Plane as Origin

Hello, I am working on a project where I have two calibrated cameras (c1, c2) mounted to the ceiling of my lab, and I want to triangulate points on objects that I place in the capture volume. I want my final output 3D points to be relative to a world origin placed on the ground plane (the floor of the lab). I have some questions about my process and about composing the necessary transformations. Here is what I have done so far...

To start, I captured an image with c1 of the ChArUco board on the ground that will act as the origin of my "world". I detect corners (cv::aruco::detectMarkers / cv::aruco::interpolateCornersCharuco) in the image taken by c1 and obtain the transformation (with cv::projectPoints) from 3D world coordinates to 3D camera coordinates.

[image: transform of board coords to camera 1 coords]

I followed the same process of detecting corners on the ChArUco board with c2 (board in the same position) and obtained the transformation that takes a point relative to the board origin into camera 2 coordinates...

[image: transform of board coords to camera 2 coords]

Q1. With the two transformations, and my calibrated intrinsic parameters, should I be able to pass these to cv::triangulatePoints to obtain 3D points that are relative to the ChArUco board origin?
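For concreteness, here is a minimal sketch of how I would wire this up, assuming each board pose gives a projection matrix P = K [R|t] with the board frame as the world frame (K1/K2, dist1/dist2, pts1/pts2, and the rvec/tvec names are placeholders for my calibrated intrinsics, distortion coefficients, matched pixel points, and the poses from above):

// Build 3x4 projection matrices from the board->camera poses (all CV_64F).
cv::Mat R1, R2, Rt1, Rt2;
cv::Rodrigues(rvec1, R1);
cv::Rodrigues(rvec2, R2);
cv::hconcat(R1, tvec1, Rt1);   // [R|t] for camera 1
cv::hconcat(R2, tvec2, Rt2);   // [R|t] for camera 2
cv::Mat P1 = K1 * Rt1;         // board frame -> image 1
cv::Mat P2 = K2 * Rt2;         // board frame -> image 2

// Undistort first; passing K as the new projection matrix keeps the points
// in ideal pixel coordinates, consistent with P = K [R|t].
std::vector<cv::Point2f> u1, u2;
cv::undistortPoints(pts1, u1, K1, dist1, cv::noArray(), K1);
cv::undistortPoints(pts2, u2, K2, dist2, cv::noArray(), K2);

// Triangulate and dehomogenize; the result is in board (world) coordinates.
cv::Mat points4D, boardPoints;
cv::triangulatePoints(P1, P2, u1, u2, points4D);
cv::convertPointsFromHomogeneous(points4D.t(), boardPoints);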

Next, I was curious: if I use cv::stereoCalibrate with my camera pair to obtain the transformation from camera 2 coordinates to camera 1 coordinates, could I combine it with the transform from camera 1 coordinates to board coordinates to get a transform from camera 2 coordinates to board coordinates? After running cv::stereoCalibrate I obtain the following (where c1 is the origin camera that c2 transforms to)...

[image: transform of camera 2 coords to camera 1 coords]
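For reference, this is roughly the stereoCalibrate call (a sketch; objectPoints, imagePointsC1, and imagePointsC2 are placeholder names for the per-view correspondences, and the already-calibrated intrinsics are held fixed). One thing I noticed in the docs: R and T bring points from the first camera's coordinate system into the second camera's (x2 = R*x1 + T), so the direction of the resulting transform depends on the argument order.

// Sketch: fixed-intrinsics stereo calibration between the camera pair.
cv::Mat R, T, E, F;
double rms = cv::stereoCalibrate(objectPoints, imagePointsC1, imagePointsC2,
                                 K1, dist1, K2, dist2, imageSize,
                                 R, T, E, F, cv::CALIB_FIX_INTRINSIC);
std::cout << "stereo RMS reprojection error: " << rms << std::endl;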

Q2. Should I be able to combine transforms in the following manner to get a transform that is the same as (or very close to) my transform from board points to camera 2 points?

[image: combined transforms]

[image: combined transforms approximation]
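In code, the composition I have in mind looks like this (a sketch, assuming all transforms are 4x4 homogeneous matrices and that c1_M_c2 maps camera 2 coordinates into camera 1 coordinates):

// c1_M_board : board -> camera 1 (board pose seen by c1)
// c1_M_c2    : camera 2 -> camera 1 (stereoCalibrate, c1 as origin)
// Going board -> camera 1 -> camera 2 requires inverting c1_M_c2:
cv::Mat c2_M_board = c1_M_c2.inv() * c1_M_board;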

I tried to do this and noticed that the transform obtained by detecting the ChArUco board corners is significantly different from the one obtained by combining the transformations. Should this work as I described, or have I misunderstood something and done the math incorrectly? Here is the output I get for the two methods (translation units are meters)...

[image: output from projectPoints]

[image: output from combined transforms, i.e. projectPoints w/ c1 and board, and stereoCalibrate w/ c1 and c2]

Looking at the transform obtained from projectPoints, the translation makes sense: in the physical setup the ChArUco board is about 4 m away from the camera. This makes me think the combined transform doesn't really make sense...

edit/update: Adding raw data from projectPoints and stereoCalibrate:

Sorry for the delay. Going through my code, I see that I actually use estimatePoseCharucoBoard (not projectPoints) to get my transformation matrix from board coordinates to camera coordinates, sorry about that! Here are the matrices that I obtained:

Note: any time a calibration board object is needed, the board dimensions are given in meters, so the scale should be consistent between the matrices.

board to camera 190 from estimatePoseCharucoBoard --> c1^M_board

[[ 0.99662517  0.05033606 -0.06484257 -0.88300593]
 [-0.02915834 -0.52132771 -0.85285826  0.82721859]
 [-0.07673376  0.85187071 -0.5181006   4.03620873]
 [ 0.          0.          0.          1.        ]]

board to camera 229 from estimatePoseCharucoBoard --> c2^M_board

[[ 0.9844968  -0.14832049  0.09363274 -0.7521725 ]
 [ 0.01426749 -0.46433134 -0.88554664  1.10571043]
 [ 0.17482132  0.87315373 -0.45501656  3.89971067]
 [ 0.          0.          0.          1.        ]]

camera 229 to camera 190 from stereoCalibrate --> c1^M_c2

[[ 0.96542194  0.05535236  0.2547481  -1.20694685]
 [-0.03441951  0.99570816 -0.08591013  0.03888629]
 [-0.25841009  0.07417122  0.96318371  0.04002158]
 [ 0.          0.          0.          1.        ]]
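As a sanity check on the numbers above, the two board poses on their own also imply a camera 2 to camera 1 transform, which I would expect to roughly match the stereoCalibrate result (a sketch using the matrices listed above):

// Board poses from the values above (4x4 homogeneous, CV_64F).
cv::Mat c1_M_board = (cv::Mat_<double>(4, 4) <<
     0.99662517,  0.05033606, -0.06484257, -0.88300593,
    -0.02915834, -0.52132771, -0.85285826,  0.82721859,
    -0.07673376,  0.85187071, -0.5181006,   4.03620873,
     0.0,         0.0,         0.0,         1.0);
cv::Mat c2_M_board = (cv::Mat_<double>(4, 4) <<
     0.9844968,  -0.14832049,  0.09363274, -0.7521725,
     0.01426749, -0.46433134, -0.88554664,  1.10571043,
     0.17482132,  0.87315373, -0.45501656,  3.89971067,
     0.0,         0.0,         0.0,         1.0);

// Implied camera 2 -> camera 1 transform: c1_M_c2 = c1_M_board * (c2_M_board)^-1
cv::Mat implied_c1_M_c2 = c1_M_board * c2_M_board.inv();
std::cout << "implied c1_M_c2 =\n" << implied_c1_M_c2 << std::endl;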

Here is a code snippet showing how I obtain the transformation matrix to ground:

// detect markers
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
// attempt to re-find more markers based on the already-detected ones
aruco::refineDetectedMarkers(image, charucoboard, corners, ids, rejected,
                             noArray(), noArray(), 10.f, 3.f, true, noArray(), detectorParams);

if(ids.size() < 1) {
    cerr << "No marker IDs found" << endl;
    return;
}

// interpolate ChArUco corners from the detected markers
Mat currentCharucoCorners, currentCharucoIds;
aruco::interpolateCornersCharuco(corners, ids, image, charucoboard,
                                 currentCharucoCorners, currentCharucoIds);

if(currentCharucoCorners.rows < 6) {
    cerr << "Not enough corners for calibration" << endl;
    return;
}

cout << "Corners Found: " << currentCharucoCorners.rows << endl;
cout << "Total Object Points: " << objPoints.size() << endl;

// estimate the board pose: rvec/tvec map board coordinates to camera coordinates
aruco::estimatePoseCharucoBoard(currentCharucoCorners, currentCharucoIds, charucoboard,
                                intrinsics.cameraMatrix, intrinsics.distCoeffs,
                                rvec, tvec, false);

Rodrigues(rvec, R);
cout << "Rotation Matrix: " << R << endl;
cout << "Translation Vector: " << tvec << endl;
P = RTtoP(R, tvec);
cout << "Projection Matrix: " << P << endl;