So, this is how it is done.
First, you calibrate your camera intrinsically. This gives you the matrices required for the calculations: the camera matrix M and the distortion coefficients D.
Then, you place your pattern at a fixed location in the world (work station, desk, whatever you use); just mark it and make sure the board doesn't move at all. Note the world coordinates of the pattern relative to whichever reference point you have (robot base, desk corner, whatever). This means you measure X and Y precisely from your world origin to the board's reference point (its center or a corner point, whichever you want to use; I took the center of the top-left square).
Use the code given by OpenCV to get the pose of the checkerboard in the camera frame (rvecs and tvecs). Now you know the board's location both in the camera frame and in the world frame. It has become a game of matrices, which can be solved with ROS (another framework, not directly related to OpenCV, but it can nevertheless be used) or with plain mathematical calculations (see the edit below).
Using the ROS TF facility to get the camera pose is quite straightforward, but mathematically speaking it is also fine to just do the matrix multiplications yourself. So in short, OpenCV itself doesn't give you the camera pose, but you can use the checkerboard's pose to estimate it. It's totally doable.
EDIT: So here is the pseudo code of the entire story:
UNIT_SIZE = 15  # one square on the checkerboard has this side length, in mm

# Measured board position in world coordinates; here the board axes are
# assumed to be aligned with the world axes, so the rotation is zero:
world_point_pose = np.asarray([0.55, 0.32, 0])
transform_from_board_to_world = matrix_from_vecs(world_point_pose, np.zeros(3))

# Board pose in the camera frame, from OpenCV's rvecs/tvecs. tvecs are in
# checkerboard-square units here, hence the scaling by UNIT_SIZE:
board_pose_in_camera_frame = matrix_from_vecs(camera.tvecs.flatten() * UNIT_SIZE, camera.rvecs.flatten())

# Inverting the board's pose in the camera frame gives the camera's pose
# in the board frame:
pose_of_camera_in_board_frame = np.linalg.inv(board_pose_in_camera_frame)
transform_from_camera_to_board = pose_of_camera_in_board_frame
transform_from_board_to_camera = np.linalg.inv(transform_from_camera_to_board)  # the inverse, if you need the other direction

# Chain the transforms (world <- board <- camera):
camera_in_world = np.dot(transform_from_board_to_world, transform_from_camera_to_board)  # this is what we want