Get coordinates of the projection point depending on the angles of the camera

So basically I have the camera's position coordinates (x1, y1, z1) as well as the angles of the direction the camera is facing, and a point in the same space that I want to show on screen (x2, y2, z2).

I have a camera matrix (with the values of the camera I'm eventually going to use):

import math
import numpy as np

def get_camera_matrix(fovx, fovy, height, width):
    # FOVX is the horizontal FOV angle of the camera
    # FOVY is the vertical FOV angle of the camera
    x = width / 2
    y = height / 2
    fx = x / math.tan(fovx)
    fy = y / math.tan(fovy)
    return np.array([[fx, 0, x],
                     [0, fy, y],
                     [0, 0, 1]])
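
Note: this formula matches the standard pinhole intrinsics only if fovx and fovy are the half-angles of the field of view, in radians. A quick sanity check with made-up values for a 1280x720 image:

# Hypothetical check: 1280x720 image, half-angles of ~62 deg and ~49 deg.
K = get_camera_matrix(math.radians(62), math.radians(49), 720, 1280)
print(K)
# fx = 640 / tan(62 deg) ~ 340 and fy = 360 / tan(49 deg) ~ 313, which is in
# the same range as the camera matrix printed further down.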

I also have a rotation matrix built from the angles, given by:

def eulerAnglesToRotationMatrix(pitch, yaw, roll):
    # Calculates the rotation matrix for the given Euler angles
    # (convention here: roll about X, pitch about Y, yaw about Z).
    r_x = np.array([[1, 0, 0],
                    [0, math.cos(roll), -math.sin(roll)],
                    [0, math.sin(roll), math.cos(roll)]])
    r_y = np.array([[math.cos(pitch), 0, math.sin(pitch)],
                    [0, 1, 0],
                    [-math.sin(pitch), 0, math.cos(pitch)]])
    r_z = np.array([[math.cos(yaw), -math.sin(yaw), 0],
                    [math.sin(yaw), math.cos(yaw), 0],
                    [0, 0, 1]])
    return np.dot(r_z, np.dot(r_y, r_x))
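
A sanity check worth running on this helper (a sketch, assuming cv2 and the imports above): a proper rotation matrix is orthonormal with determinant 1, and it should round-trip through cv2.Rodrigues:

import cv2

R = eulerAnglesToRotationMatrix(0.1, 0.2, 0.3)
# A valid rotation matrix satisfies R @ R.T = I and det(R) = 1.
assert np.allclose(np.dot(R, R.T), np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
# Round-trip through the axis-angle (Rodrigues) representation.
rvec, _ = cv2.Rodrigues(R)
R_back, _ = cv2.Rodrigues(rvec)
assert np.allclose(R, R_back)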

But now I don't understand what I'm missing to get the final (x, y) point on screen. I looked up http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=findhomography but I don't see where the angle of the camera enters. I want to move the angle of the camera and have that reflected in the position of the target.

This is what I'm doing so far:

getPointInterestOnPixel(point_in_world,
                        point_of_camera,
                        eulerAnglesToRotationMatrix(roll, pitch, yaw),
                        np.array([[0, 0, 0]]),
                        eulerAnglesToRotationMatrix(-np.pi / 2, -np.pi / 2, -np.pi / 2),
                        matrix)

def getPointInterestOnPixel(P_i_w, P_a_w, R_a_w, P_c_a, R_c_a, K):
    # Compose the overall camera orientation from the two rotations.
    R_c_w = np.dot(R_a_w, R_c_a)
    #P_i_c = np.dot(R_c_w.T, (P_i_w.T - P_a_w.T - np.dot(R_a_w, P_c_a.T)))
    # Express the point of interest relative to the camera position,
    # rotated into the camera frame.
    P_i_c = np.dot(R_c_w, (P_i_w.T - P_a_w.T))
    # Apply the intrinsics and normalize by depth to get pixel coordinates.
    P_i_c_pixels = np.dot(K, P_i_c)
    P_i_c_pixels = np.divide(P_i_c_pixels, P_i_c_pixels[2])
    return P_i_c_pixels
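
For reference, the standard pinhole model is x = K [R | t] X_w with t = -R C, where C is the camera position and R the world-to-camera rotation; that is exactly where both the camera's angles (R) and its position (C) enter. A minimal sketch of that (a hypothetical helper, not the function above; whether you need R or R.T depends on which direction eulerAnglesToRotationMatrix rotates):

def project_point(P_w, C, R, K):
    # P_w: world point (3,), C: camera position (3,),
    # R: world-to-camera rotation (3x3), K: intrinsics (3x3).
    t = -np.dot(R, C)          # extrinsic translation
    P_c = np.dot(R, P_w) + t   # point expressed in the camera frame
    if P_c[2] <= 0:
        # Behind the camera: the perspective divide by a negative
        # depth would mirror the point through the principal point.
        return None
    uvw = np.dot(K, P_c)
    return uvw[:2] / uvw[2]    # pixel coordinates (x, y)

That behind-the-camera case is consistent with the 180-degree symptom below: once you turn around, the point's depth goes negative and dividing by it flips the projection upside down.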

I'm going crazy. At this moment I get the points on screen and they move, but if I do a 180 degree turn, the points turn upside down. I've tried so many things that it's all getting confusing right now.

Please help? Thanks

EDIT: Used projectPoints as @Tetragramm suggested, but things got even weirder. All points are in the same place when they shouldn't be. Maybe I'm missing something; I'm kinda new to this. I just want to move my camera and have the points move as well, in the correct place :(

points2d, jacobian = cv2.projectPoints(
    np.asarray(points3d),
    np.dot(hud.eulerAnglesToRotationMatrix(math.radians(pitch), math.radians(yaw), math.radians(roll)),
           hud.eulerAnglesToRotationMatrix(-np.pi / 2, -np.pi / 2, -np.pi / 2)),
    np.array([np.float32(0), np.float32(0), np.float32(0)]),
    hud.get_camera_matrix(self._fovx, self._fovy, height, width),
    None)

or if I use Rodrigues:

dst, jacobian = cv2.Rodrigues(np.array([np.float64(pitch), np.float64(yaw), np.float64(roll)]))

points2d, jacobian3 = cv2.projectPoints(
    np.asarray(points3d),
    dst,
    np.array([np.float32(0), np.float32(0), np.float32(0)]),
    hud.get_camera_matrix(self._fovx, self._fovy, height, width),
    None)

And I get values like [[ -1.09674758e+05 -4.38290004e+04]]
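
Two things stand out in both calls (a sketch, under my assumptions about the hud.* helpers): cv2.projectPoints documents rvec as a Rodrigues rotation vector, and a Rodrigues vector is axis-angle, so feeding (pitch, yaw, roll) straight into cv2.Rodrigues does not produce the same rotation as the Euler matrix built above. Also, the camera position never enters anywhere: tvec is all zeros, but it should be -R @ camera_position. Something along these lines (camera_pos is a hypothetical 3-vector holding the camera's world coordinates):

# Build the full rotation, then convert it explicitly to a Rodrigues vector.
R = np.dot(hud.eulerAnglesToRotationMatrix(math.radians(pitch), math.radians(yaw), math.radians(roll)),
           hud.eulerAnglesToRotationMatrix(-np.pi / 2, -np.pi / 2, -np.pi / 2))
rvec, _ = cv2.Rodrigues(R)
# The camera position enters through tvec (assuming R maps world to camera).
tvec = -np.dot(R, camera_pos)
points2d, _ = cv2.projectPoints(np.asarray(points3d, dtype=np.float64),
                                rvec, tvec,
                                hud.get_camera_matrix(self._fovx, self._fovy, height, width),
                                None)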

Edit2: When using projectPoints, I'm getting an array like [[[x1, y1]], [[x2, y2]], ..., [[xn, yn]]], like the value before. I just can't seem to debug it to find the cause of such weird coords.
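
That nesting is actually normal: projectPoints returns an array of shape (N, 1, 2), one singleton row per input point. A small sketch to flatten it for inspection:

pts = points2d.reshape(-1, 2)  # shape (N, 2): one (x, y) row per 3D point
for i, (x, y) in enumerate(pts):
    print("point %d -> (%.1f, %.1f)" % (i, x, y))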

Camera matrix (see the method above):

[[ 337.4337208     0.          640.        ]
 [   0.          315.15617625  360.        ]
 [   0.            0.            1.        ]]

R:

 [ [  1.00000000e+00   0.00000000e+00   0.00000000e+00   6.24127977e+05]
   [  0.00000000e+00   1.00000000e+00   0.00000000e+00   5.80574402e+06]
   [  0.00000000e+00   0.00000000e+00   1.00000000e+00   5.00000000e+03]]

Point in World:

 [ [  6.30825248e+05]
   [  5.80612465e+06]
   [  5.00000000e+03]
   [  1.00000000e+00]]

Result Point:

 [[  4.29863536e+08   3.66315213e+09   1.00000000e+04]]

Edit3: Image: [screenshot attached]

As you can see, not only does it behave oddly, it also shows the points all in the same place, even though I'm standing in the middle of them.

Edit4: Thanks to Tetragramm I managed to get it working, but something weird is still happening. As you can see in the image below, if I have a point in front of me and rotate the camera 180 degrees, the same point shows up mirrored.

[screenshot attached]
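
Following up on the mirroring (and echoing the note after the projection sketch above): a point that ends up behind the camera after a 180-degree turn has negative depth, and the perspective divide by that negative z flips it through the principal point. A hypothetical post-filter, assuming rvec/tvec as in the earlier sketch:

# Keep only points with positive depth in the camera frame before drawing.
R, _ = cv2.Rodrigues(rvec)
pts3d = np.asarray(points3d, dtype=np.float64).reshape(-1, 3)
depths = np.dot(pts3d, R.T)[:, 2] + tvec[2]  # z of R @ P + t for each point
visible = depths > 0
pts2d = points2d.reshape(-1, 2)[visible]     # drop the mirrored, behind-camera points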