
Get coordinates of the projection point depending on the angles of the camera

asked 2017-10-02 11:00:57 -0500

Kathan

updated 2017-10-24 09:29:30 -0500

So basically I have the camera position (x1, y1, z1) as well as the angles of the direction the camera is facing, and a point in the same space that I want to show on screen (x2, y2, z2).

I have a camera matrix (with the values of the future camera I'm going to use):

import math
import numpy as np

def get_camera_matrix(fovx, fovy, height, width):
    # fovx is the horizontal FOV angle of the camera
    # fovy is the vertical FOV angle of the camera
    x = width / 2
    y = height / 2
    fx = x / math.tan(fovx / 2)  # focal length in pixels from the half-angle
    fy = y / math.tan(fovy / 2)
    return np.array([[fx, 0, x],
                     [0, fy, y],
                     [0, 0, 1]])

I also have a rotation matrix from the angles, given by:

def eulerAnglesToRotationMatrix(pitch, yaw, roll):
    # Calculates a rotation matrix from Euler angles.
    r_x = np.array([[1, 0, 0],
                    [0, math.cos(roll), -math.sin(roll)],
                    [0, math.sin(roll), math.cos(roll)]])
    r_y = np.array([[math.cos(pitch), 0, math.sin(pitch)],
                    [0, 1, 0],
                    [-math.sin(pitch), 0, math.cos(pitch)]])
    r_z = np.array([[math.cos(yaw), -math.sin(yaw), 0],
                    [math.sin(yaw), math.cos(yaw), 0],
                    [0, 0, 1]])
    return np.dot(r_z, np.dot(r_y, r_x))
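As a sanity check on a composition like the one above, any product of the three elementary rotations must be a proper rotation matrix (orthogonal, determinant +1). A minimal self-contained sketch (angle values are arbitrary, for illustration only):

```python
import math
import numpy as np

def euler_to_rotation(pitch, yaw, roll):
    # Same Rz @ Ry @ Rx composition as the function above.
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    r_x = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    r_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_z = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return r_z @ r_y @ r_x

R = euler_to_rotation(0.3, 1.2, -0.5)
print(np.allclose(R @ R.T, np.eye(3)))    # orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # determinant +1
```

If either check fails, the matrix is not a valid rotation and the projection will be wrong regardless of everything downstream.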

But now I don't seem to understand what I'm missing to get the final (x, y) point on screen. I looked at http://docs.opencv.org/2.4/modules/ca... but I don't see where the angle of the camera enters. I want to move the angle of the camera and have it reflected in the position of the target.

This is what I'm doing so far:

def getPointInterestOnPixel(P_i_w, P_a_w, R_a_w, P_c_a, R_c_a, K):
    R_c_w = np.dot(R_a_w, R_c_a)
    # P_i_c = np.dot(R_c_w.T, (P_i_w.T - P_a_w.T - np.dot(R_a_w, P_c_a.T)))
    P_i_c = np.dot(R_c_w, (P_i_w.T - P_a_w.T))
    P_i_c_pixels = np.dot(K, P_i_c)
    P_i_c_pixels = np.divide(P_i_c_pixels, P_i_c_pixels[2])
    return P_i_c_pixels

getPointInterestOnPixel(point_in_world,
                        point_of_camera,
                        eulerAnglesToRotationMatrix(roll, pitch, yaw),
                        np.array([[0, 0, 0]]),
                        eulerAnglesToRotationMatrix(-np.pi / 2, -np.pi / 2, -np.pi / 2),
                        matrix)

I'm going crazy at this point. I get the points on screen and they move, but if I do a 180 degree turn, the points turn upside down. I've tried so many things that it's all getting confusing right now.

Please help? Thanks

EDIT: Used projectPoints as @Tetragramm suggested, but things got even weirder. All the points end up in the same place when they shouldn't. Maybe I'm missing something, I'm kind of new to this. I just want to move my camera and have the points move to the correct place as well :(

points2d, jacobian = cv2.projectPoints(
    np.asarray(points3d),
    np.dot(hud.eulerAnglesToRotationMatrix(math.radians(pitch), math.radians(yaw), math.radians(roll)),
           hud.eulerAnglesToRotationMatrix(-np.pi / 2, -np.pi / 2, -np.pi / 2)),
    np.array([np.float32(0), np.float32(0), np.float32(0)]),
    hud.get_camera_matrix(self._fovx, self._fovy, height, width),
    None)

or if I use Rodrigues:

dst, jacobian = cv2.Rodrigues(np.array([np.float64(pitch),np.float64(yaw),np.float64(roll)]))

points2d, jacobian3 = cv2 ...

1 answer


answered 2017-10-02 18:04:31 -0500

Tetragramm

You want to use the function projectPoints, as it does everything you want.

You pass in the 3d world points you want projected, the camera matrix, the distortion matrix (if you have it), and the rvec and tvec, which you have, and you get back a list of 2d points projected onto the image plane. You can simply ignore the Jacobian and aspectRatio parameters for now; you don't need those for just projecting points onto the image.
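With zero distortion, the projection that projectPoints performs reduces to the pinhole model: apply the rotation and translation, apply the intrinsics K, then divide by the z term. A NumPy-only sketch of that pipeline (the intrinsic values and points are made-up illustrative numbers, not from the thread):

```python
import numpy as np

# Illustrative intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)        # camera looking straight down +Z, no rotation
t = np.zeros(3)      # camera at the world origin

def project(points_world):
    cam = (R @ points_world.T).T + t    # world frame -> camera frame
    pix = (K @ cam.T).T                 # apply the intrinsic matrix
    return pix[:, :2] / pix[:, 2:3]     # perspective divide by z

pts = np.array([[0.0, 0.0, 10.0],       # on the optical axis
                [1.0, 0.0, 10.0]])      # one metre to the right
print(project(pts))  # -> [[320. 240.] [400. 240.]]
```

A point on the optical axis lands exactly on the principal point, which is a quick way to verify the intrinsics are wired up correctly before involving rotations.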


Comments

Check my edit. Thanks for the help, but the points I'm getting are weird :X

Kathan (2017-10-03 06:04:31 -0500)

I think I need a picture to understand what you mean by "all in the same place".

One thought though: your get_camera_matrix. What does it return? You should have something like

    ?.?e?    0       width/2
    0        ?.?e?   height/2
    0        0       1

?.?e? should be the focal length of the camera in pixels. You should ideally get this from calibrateCamera, but you can calculate it by reversing the first formula on THIS page, where h is the width and height for the first and second focal length values respectively.

Tetragramm (2017-10-03 17:35:51 -0500)
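Reversing the usual FOV formula fov = 2·atan(w / (2·f)) gives the focal length in pixels as f = w / (2·tan(fov/2)). A tiny sketch with made-up resolution and FOV numbers:

```python
import math

# Focal length in pixels from an image dimension and the matching FOV angle.
# f = w / (2 * tan(fov / 2)); resolution and FOVs here are illustrative.
def focal_px(size_px, fov_rad):
    return size_px / (2.0 * math.tan(fov_rad / 2.0))

fx = focal_px(640, math.radians(60))  # horizontal: 640 px wide, 60 deg FOV
fy = focal_px(480, math.radians(45))  # vertical: 480 px tall, 45 deg FOV
print(fx, fy)  # roughly 554.26 and 579.41
```

These two values would go in the fx and fy slots of the camera matrix above.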

Thanks for the reply, check the edit. :)

Kathan (2017-10-06 04:03:49 -0500)

Ah, I see the problem. Your eulerAnglesToRotationMatrix is going into the wrong coordinate system. OpenCV uses a system where +X is to the right of the image, +Y is to the bottom, and +Z is straight out of the camera into the world.

So if you look at your camera location and world point, the Z value is the same, which means the point is located on the focal plane of the camera. Obviously, that doesn't project onto the plane at any pixel. You need to add a constant rotation to make the coordinate systems match.

Tetragramm (2017-10-06 19:06:43 -0500)
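The "constant rotation" is a fixed axis-permutation matrix that maps the world convention onto OpenCV's camera convention. As a sketch, assume a hypothetical world frame with +X forward, +Y left, +Z up (this convention is my assumption, not stated in the thread):

```python
import numpy as np

# Hypothetical world convention: +X forward, +Y left, +Z up.
# OpenCV camera convention: +X right, +Y down, +Z out of the lens.
# The constant alignment maps cam_x = -world_y, cam_y = -world_z,
# cam_z = world_x.
ALIGN = np.array([[0.0, -1.0, 0.0],
                  [0.0, 0.0, -1.0],
                  [1.0, 0.0, 0.0]])

forward_world = np.array([1.0, 0.0, 0.0])  # straight ahead in the world
print(ALIGN @ forward_world)               # -> [0. 0. 1.], the camera's +Z
```

Whatever your actual world convention is, the matrix must be a proper rotation (determinant +1) that sends "forward" to +Z; you then compose it with the yaw/pitch/roll rotation, as the asker does with the extra eulerAnglesToRotationMatrix(-pi/2, -pi/2, -pi/2) factor.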

What do you mean by a constant rotation? And where would I put said constant? Check the new edit.

Kathan (2017-10-09 03:30:40 -0500)

Ah hah. projectPoints apparently doesn't like such large numbers. If you subtract the tvec from both it and all the world points, I get correct values.

By constant rotation, I mean that with the rvec and tvec you have, a point with location relative to tvec of [0,0,1000] is exactly in the center of the image. So your YPR values assume that the "front" of the camera is in the +X direction, but that's not true, it's in the +Z.

Tetragramm (2017-10-10 20:51:31 -0500)
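The recentring trick above amounts to subtracting the camera position from every world point and then passing a zero tvec, so projectPoints never sees huge translation magnitudes. A minimal sketch with made-up coordinates:

```python
import numpy as np

# Illustrative large world coordinates (e.g. a UTM-like frame).
camera_pos = np.array([500000.0, 300000.0, 1000.0])
points_world = np.array([[500010.0, 300000.0, 1000.0]])  # 10 m ahead in x

# Make the camera the origin of the referential:
points_centered = points_world - camera_pos  # now small, camera-relative
tvec = np.zeros(3)                           # and pass a zero tvec instead
print(points_centered)                       # -> [[10. 0. 0.]]
```

The projection result is mathematically identical either way; only the numeric magnitudes fed to projectPoints change.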

Ok, what do you mean, subtract tvec from both it and the world points? What's "it"? So it's to always make the camera the central point of the referential? Another question though, just to make sure: tvec is the position of the camera, right? I always read it's the translation vector, but I still don't see any context for it :X

Kathan (2017-10-11 06:18:42 -0500)

I do mean "make the camera the central point of the referential", and yes, tvec is the position of the camera. You don't always have to do that, but apparently projectPoints doesn't like really big tvec magnitudes.

Tetragramm (2017-10-11 18:02:09 -0500)

@Tetragramm Thanks for the insight! I managed to put it to work! I have one other question though. I have one point, and if I rotate the camera 180 degrees the point will show up, but mirrored. I see no reason for this to happen. Check the edit with a gif of it happening.

Kathan (2017-10-24 09:27:28 -0500)

Basically, it projects onto the back of the focal plane. I think they assumed anyone would filter that out, so they didn't have to. I don't know why they didn't; it's not hard to remove.

Tetragramm (2017-10-24 20:51:38 -0500)
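The filtering described above can be done by transforming the points into the camera frame first and keeping only those with a positive Z (in front of the lens) before projecting. A minimal sketch with illustrative values:

```python
import numpy as np

# Keep only points in front of the lens (positive Z in the camera frame),
# so points behind the focal plane are never projected as mirror images.
def in_front(points_world, R, tvec):
    cam = (R @ points_world.T).T + tvec    # world -> camera frame
    return points_world[cam[:, 2] > 0]     # positive z means "in front"

R = np.eye(3)
tvec = np.zeros(3)
pts = np.array([[0.0, 0.0, 5.0],     # in front of the camera
                [0.0, 0.0, -5.0]])   # behind it: would project mirrored
print(in_front(pts, R, tvec))        # keeps only the first point
```

Applying this mask before calling projectPoints removes the mirrored 180-degree points the asker saw.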


Stats

Asked: 2017-10-02 11:00:57 -0500

Seen: 1,180 times

Last updated: Oct 24 '17