Procedure for obtaining/updating camera pose for moving camera

I would like to determine the translation and rotation of a single monocular camera (an Android phone) mounted on a micro helicopter. The camera has been calibrated with a chessboard, so the camera matrix and distortion parameters are available. Is the following the correct procedure? (A rough code sketch follows the list.) The camera is moving, the background is fixed.

0) Initialize pos_R = Mat.eye(3) and pos_T = Mat.zeros(3,1). 
1) Store the first image in Mat img_train and use the ORB detector and BRISK extractor to obtain keypoints/features
2) Store the next video image in Mat img_query and match with ORB/BRISK using the brute-force Hamming (BF_HG) radius matcher
3) Find the distances between keypoint matches and keep only matches with distances below a threshold
4) Set the first frame as the key frame. For subsequent frames, update the key frame if the number of keypoints falls below a required count (30) or if the percentage of keypoint matches falls below a required percentage (50%). When the key frame changes, record keyFrame_R = pos_R and keyFrame_T = pos_T.
5) Obtain the change in rotation and translation between the current frame and the last key frame. Use findEssentialMat to obtain the essential matrix from the camera focal length, principal point, and matched points, then use recoverPose to obtain camera_R and camera_T
6) Update pos_R and pos_T using gemm: pos_R = camera_R * keyFrame_R; pos_T = keyFrame_R * camera_T + keyFrame_T
7) Convert to camera angles for display using Rodrigues
8) Store the query image, keypoints, and features into the train image, keypoints, and features
9) Repeat starting from step 2
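
Here is a rough Java sketch of the loop above using the OpenCV Java bindings. The focal length, principal point, and match-distance cutoff are placeholder assumptions (substitute the chessboard calibration results), and note that recoverPose returns camera_T only up to an unknown scale factor because of the monocular ambiguity:

    import java.util.ArrayList;
    import java.util.List;

    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.*;
    import org.opencv.features2d.BRISK;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.ORB;

    public class PoseTracker {
        // Placeholder calibration values -- substitute your chessboard results.
        static final double FOCAL = 700.0;               // assumed focal length in pixels
        static final Point  PP = new Point(320, 240);    // assumed principal point
        static final float  MATCH_DIST = 60f;            // assumed Hamming cutoff (step 3)

        final ORB   detector  = ORB.create();            // step 1: ORB detector
        final BRISK extractor = BRISK.create();          //         BRISK extractor
        final DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        // Step 0: accumulated pose and key-frame pose start at identity / zero.
        Mat posR = Mat.eye(3, 3, CvType.CV_64F), posT = Mat.zeros(3, 1, CvType.CV_64F);
        Mat keyFrameR = posR.clone(), keyFrameT = posT.clone();
        MatOfKeyPoint keyKp = new MatOfKeyPoint();
        Mat keyDesc = new Mat();

        /** Steps 2-8 for one grayscale frame; returns true once the pose was updated. */
        public boolean processFrame(Mat gray) {
            MatOfKeyPoint kp = new MatOfKeyPoint();
            Mat desc = new Mat();
            detector.detect(gray, kp);                   // step 2: features for img_query
            extractor.compute(gray, kp, desc);

            if (keyDesc.empty()) {                       // step 4: first frame = key frame
                keyKp = kp; keyDesc = desc;
                return false;
            }

            // Step 3: match against the key frame, keep distances below the threshold.
            MatOfDMatch raw = new MatOfDMatch();
            matcher.match(desc, keyDesc, raw);
            List<Point> qPts = new ArrayList<>(), kPts = new ArrayList<>();
            KeyPoint[] qk = kp.toArray(), kk = keyKp.toArray();
            for (DMatch m : raw.toArray()) {
                if (m.distance < MATCH_DIST) {
                    qPts.add(qk[m.queryIdx].pt);
                    kPts.add(kk[m.trainIdx].pt);
                }
            }
            if (qPts.size() < 8) return false;           // need >= 8 points for the E matrix

            // Step 5: essential matrix from focal/principal point, then recoverPose.
            MatOfPoint2f p1 = new MatOfPoint2f(); p1.fromList(kPts);  // key-frame points
            MatOfPoint2f p2 = new MatOfPoint2f(); p2.fromList(qPts);  // current points
            Mat mask = new Mat();
            Mat E = Calib3d.findEssentialMat(p1, p2, FOCAL, PP,
                                             Calib3d.RANSAC, 0.999, 1.0, mask);
            Mat camR = new Mat(), camT = new Mat();      // camT is unit norm (scale unknown)
            Calib3d.recoverPose(E, p1, p2, camR, camT, FOCAL, PP, mask);

            // Step 6: pos_R = camera_R * keyFrame_R; pos_T = keyFrame_R * camera_T + keyFrame_T.
            Core.gemm(camR, keyFrameR, 1.0, new Mat(), 0.0, posR);
            Core.gemm(keyFrameR, camT, 1.0, keyFrameT, 1.0, posT);

            // Step 7: rotation vector (angles) for display.
            Mat rvec = new Mat();
            Calib3d.Rodrigues(posR, rvec);

            // Step 4 (continued): refresh the key frame when tracking thins out,
            // using the 30-keypoint and 50% limits from the procedure above.
            if (kp.rows() < 30 || 2 * qPts.size() < raw.rows()) {
                keyKp = kp; keyDesc = desc;              // step 8: query becomes train
                keyFrameR = posR.clone(); keyFrameT = posT.clone();
            }
            return true;                                 // step 9: caller loops over frames
        }
    }

Each incoming video frame, converted to grayscale, would be passed to processFrame from the camera capture callback.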

If we can get this working on Android, we'll test it by moving the camera 1 foot forward/aft, left/right, and up/down, then rotating the camera about the vertical axis by 30 and 60 deg and pitching it by 15 deg, to see how the results look.

As the project progresses, an INS will be integrated and a Kalman filter implemented. Is there any video of indoor flight available for testing?

I've run the procedure on a video from a model helicopter, but I don't know the truth values. The video came from an onboard cam on YouTube. I can see some problems. x, y, z are not in an earth system (X east, Y north, Z up) but instead may be in a system with x up, y right, and z forward. From a 3D graph of the x/y/z results it appears that earth z is the distance from the z axis, because the helicopter starts and ends on the z axis, and returns to the z axis at times that may correspond to the vehicle hitting the ground.

The rotation/translation are in the current camera x/y/z frame, which I think is camera up, camera right, camera forward. Getting to earth axes (X east, Y north, Z up) would require some conversion; a sketch of one possible conversion follows.
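
As a minimal sketch of that conversion, assuming OpenCV's usual camera convention (x right, y down, z forward) and, purely as an assumption, a camera that starts level and facing north, a constant permutation matrix can re-express the pose in ENU axes. A different mount or initial heading needs a different matrix, and a full solution would fold in the INS attitude once it is integrated:

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;

    public class AxisConversion {
        // Earth-from-camera permutation, assuming OpenCV's camera frame
        // (x right, y down, z forward) with the camera initially level, facing north:
        //   X_east = x_cam,  Y_north = z_cam,  Z_up = -y_cam.
        static Mat camToEnu() {
            Mat C = new Mat(3, 3, CvType.CV_64F);
            C.put(0, 0,
                  1,  0, 0,    // X_east  =  x_cam
                  0,  0, 1,    // Y_north =  z_cam
                  0, -1, 0);   // Z_up    = -y_cam
            return C;
        }

        /** Re-express pos_T in earth axes: t_enu = C * t_cam. */
        static Mat translationToEnu(Mat tCam) {
            Mat tEnu = new Mat();
            Core.gemm(camToEnu(), tCam, 1.0, new Mat(), 0.0, tEnu);
            return tEnu;
        }

        /** Re-express pos_R in earth axes (similarity transform): R_enu = C * R_cam * C^T. */
        static Mat rotationToEnu(Mat rCam) {
            Mat C = camToEnu(), tmp = new Mat(), rEnu = new Mat();
            Core.gemm(C, rCam, 1.0, new Mat(), 0.0, tmp);
            Core.gemm(tmp, C.t(), 1.0, new Mat(), 0.0, rEnu);
            return rEnu;
        }
    }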

Edit 1: Added key frame and comment about earth axis and results from sample video.