Angle between wall and camera using pnpRansac and its precision

asked 2019-08-19 08:38:03 -0600

apgp

updated 2019-08-20 04:37:13 -0600

I am looking to calculate the angle between my camera (ORBBEC Astra) and the wall. I am implementing this with solvePnP in OpenCV. My procedure is as follows: I have a chessboard on the wall. I acquire the pixel coordinates of the chessboard and the corresponding 3D coordinates, and feed them to solvePnP. I get rvec and tvec, and go on to calculate the rotation matrix and the pose of the camera.

I need the rotation angles to be very precise. I use the following method to find the angle and then to estimate the accuracy obtained, but I run into some problems with it.

Here's what I've tried. I stuck my chessboard to the wall in front of the camera, and then to an adjacent wall. Here the angle is known: it has to be 90° (so basically, I move the chessboard, not the camera). I use solvePnP to find tvec and rvec at these two positions. This is where I get stuck: I get results I don't understand. My code is here:

import math

import cv2 as cv
import numpy as np

# object_points, img_points, mtx and dist are loaded earlier: the 3D points,
# their pixel coordinates, the camera matrix and the distortion coefficients
flag = cv.SOLVEPNP_ITERATIVE
rvec = np.zeros((3, 1))
tvec = np.zeros((3, 1))
_, rvec, tvec, inliers = cv.solvePnPRansac(object_points, img_points, mtx, dist,
                                           flags=flag)

# Rotation matrix; transposed to get the camera orientation in the chessboard frame
Rt, _ = cv.Rodrigues(rvec)
Rt = np.transpose(Rt)

# Rotation matrix to Euler angles, in degrees
sy = math.sqrt(Rt[0, 0] * Rt[0, 0] + Rt[1, 0] * Rt[1, 0])
singular = sy < 1e-6

if not singular:
    x = math.atan2(Rt[2, 1], Rt[2, 2]) * (180 / np.pi)
    y = math.atan2(-Rt[2, 0], sy) * (180 / np.pi)
    z = math.atan2(Rt[1, 0], Rt[0, 0]) * (180 / np.pi)
else:
    x = math.atan2(-Rt[1, 2], Rt[1, 1]) * (180 / np.pi)
    y = math.atan2(-Rt[2, 0], sy) * (180 / np.pi)
    z = 0

R = np.array([x, y, z])

# Reprojection check: reproject the 3D points and compare with the detections
imagePoints, jacobian = cv.projectPoints(object_points, rvec, tvec, mtx, dist)
pix_r = np.subtract(img_points, imagePoints)
cv.waitKey()

Here, the distance between the camera and the wall is 4 m. I get rotation matrices for each image using solvePnP, but the relative angle between the walls is not 90°. When the chessboard is right in front, I get R = [7.37, 9.32, 0.37] degrees (yaw, pitch, roll). When the chessboard is on the adjacent wall, I get R = [1.62, 2.98, -0.08]. My tvec seems fairly consistent, with [46, -71, 3937] and [40, -61, 4142] respectively. Using cv.projectPoints I get a reprojection error of about 100 pixels at times.
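As a sanity check, the relative rotation between the two board poses can be computed directly and compared against the expected 90°; here is a minimal sketch (the rotation vectors in the example call are placeholders, in practice they would be the two solvePnPRansac results):

import cv2 as cv
import numpy as np

def relative_rotation_angle_deg(rvec_a, rvec_b):
    """Angle (in degrees) of the rotation between two solvePnP poses."""
    R_a, _ = cv.Rodrigues(rvec_a)
    R_b, _ = cv.Rodrigues(rvec_b)
    R_rel = R_b @ R_a.T  # rotation taking pose a onto pose b
    # Recover the rotation angle from the trace of the relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Placeholder rotation vectors 90 degrees apart about the y axis:
print(relative_rotation_angle_deg(np.zeros((3, 1)),
                                  np.array([[0.0], [np.pi / 2], [0.0]])))

If this relative angle is also far from 90°, the individual poses themselves are off (for example because of bad 2D-3D correspondences), not just the Euler-angle convention.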

Is there any other approach I could use for finding the angle between the wall and camera?

Note: the chessboard points are found using Canny edge detection and a Hough transform. This detects some points in the image other than the chessboard corners, but I assume those outliers are rejected by solvePnPRansac.
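One way to verify that assumption is to look at how many points solvePnPRansac actually keeps; a sketch, reusing object_points, img_points, mtx and dist from the code above (the iterationsCount and reprojectionError values are only examples; reprojectionError defaults to 8 px):

# Tighter RANSAC settings than the defaults, then inspect the inlier count
ok, rvec, tvec, inliers = cv.solvePnPRansac(
    object_points, img_points, mtx, dist,
    iterationsCount=500, reprojectionError=3.0)

if ok and inliers is not None:
    print("%d of %d points kept as inliers" % (len(inliers), len(img_points)))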

Thanks!

[Edit] Here are the images I'm using; they are depth images from the Orbbec. (two images attached)

My camera matrix and distortion array are: mtx = np.array([(576.254, 0, 313.154), (0, 577.558, 249.936), (0 ...


Comments

Can you post the camera matrix, distortion coefficients and the two images? I will see what I get.

Witek ( 2019-08-19 14:00:44 -0600 )

Thanks for the reply, I edited my post to show the matrices and images!

apgp ( 2019-08-20 03:10:44 -0600 )

How do you build the correspondences between the 3D points and the 2D points? Please add the corresponding code.

Did you try with findChessboardCorners?

My advice to debug is to draw the chessboard frame with drawFrameAxes. Also keep in mind that tvec and rvec are the translation and rotation that transform a 3D point expressed in the chessboard frame into the camera frame.
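For instance, something along these lines (a rough sketch, reusing mtx, dist, rvec and tvec from the question and a hypothetical img for the input image; the axis length is in the same units as the object points):

# Overlay the estimated chessboard frame on the image as a sanity check
vis = img.copy()
cv.drawFrameAxes(vis, mtx, dist, rvec, tvec, 100)
cv.imshow("pose check", vis)
cv.waitKey()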

Eduardo ( 2019-08-20 07:47:15 -0600 )

Hey, I get the 3D and 2D points for all pixels in the depth image in a CSV file using the Orbbec SDK. That is what I've used. findChessboardCorners doesn't work at 4 meters, so I have to use other methods.

apgp ( 2019-08-20 08:20:10 -0600 )

My suggestion:

  • forget about solvePnP; since you are using a depth camera, everything is already in the camera frame
  • instead, perform a plane fitting in the region of interest
  • you can do it yourself, using for instance linear least squares (see the sketch after this comment), or use PCL for that
Eduardo ( 2019-08-20 08:46:35 -0600 )
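A minimal sketch of such a least-squares plane fit (the synthetic points are only for illustration; in practice the 3D points of the wall region taken from the depth map would be used):

import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of camera-frame 3D points.

    Returns (centroid, unit_normal): a point on the plane and its normal.
    """
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance: the right-singular
    # vector associated with the smallest singular value
    _, _, vh = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vh[-1]
    # Orient the normal so it points back towards the camera at the origin
    if np.dot(normal, centroid) > 0:
        normal = -normal
    return centroid, normal

# Illustrative usage with synthetic points on a plane about 4 m away (in mm):
rng = np.random.default_rng(0)
xy = rng.uniform(-500.0, 500.0, size=(1000, 2))
z = 4000.0 + 0.1 * xy[:, 0]          # plane z = 4000 + 0.1 x
centroid, normal = fit_plane(np.column_stack([xy, z]))
print(centroid, normal)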

If I understand correctly, you are suggesting that I model the chosen region in the image as a plane and then find the angle between this plane and the camera? Let's say I use a RANSAC or least-squares plane fit, how would I go ahead and find my angle?

apgp ( 2019-08-20 10:05:17 -0600 )

You can describe a plane by a normal and a point on the plane.

Eduardo ( 2019-08-20 11:24:20 -0600 )

Thanks for the reply! Okay, so that I understand clearly: I fit a plane to an area on the wall and get its parameters ax + by + cz + d = 0. Do I then intersect the camera plane with the modelled plane? Is your suggestion to model the camera plane using a normal vector and a point? I need the yaw, pitch and roll, or a rotation matrix.

apgp ( 2019-08-21 09:00:24 -0600 )
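For reference, one way to turn a fitted wall normal into angles, under an assumed sign convention (note that a single plane normal only constrains two rotational degrees of freedom, so a full yaw/pitch/roll triplet or rotation matrix is not recoverable from the plane alone):

import numpy as np

def wall_angles_deg(normal):
    """Pan and tilt of the wall relative to the camera optical axis (+z).

    normal: plane normal pointing from the wall back towards the camera.
    Roll about the normal is undetermined.
    """
    nx, ny, nz = normal / np.linalg.norm(normal)
    pan = np.degrees(np.arctan2(nx, -nz))   # rotation about the camera y axis
    tilt = np.degrees(np.arctan2(ny, -nz))  # rotation about the camera x axis
    return pan, tilt

# A wall facing the camera head-on has normal (0, 0, -1): both angles are 0
print(wall_angles_deg(np.array([0.0, 0.0, -1.0])))
# A wall rotated 90 degrees about the vertical axis: pan is 90
print(wall_angles_deg(np.array([1.0, 0.0, 0.0])))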

Oh, I just found this. I think this is what you described to me, in sections 2.2 and 2.3: https://pdfs.semanticscholar.org/b878...

apgp ( 2019-08-21 09:06:59 -0600 )