# Get the 3D Point in another coordinate system

Hi there! I have a system which uses an RGB-D camera and a marker. I can successfully get the origin of the marker's coordinate system (the center of the marker) using an augmented reality library (ArUco). Also, using the same camera, I managed to get the 3D position of my finger (x', y', z') with respect to the camera's coordinate system. Now what I want is to apply a transformation to the 3D position of the finger (x', y', z') so that I get a new (x, y, z) with respect to the marker's coordinate system. It is also worth mentioning that the camera's coordinate system is left-handed, while the marker's coordinate system is right-handed. Here is a picture:

Can you tell me what I have to do? Any OpenCV functions? Any calculation I could do to get the required result in C++?


Hi. If you found a solution to this problem, could you please share your code?

(2016-07-29 03:54:37 -0600)


Hi,

ArUco provides the transformation from the marker's coordinate system to the camera's system.

As Eduardo said, you can transform the finger point to the marker's system just by applying the inverse transformation.

However, you say that your camera's coordinate system is left-handed, and the transformation provided by ArUco assumes a right-handed camera system (same as OpenCV):

If your finger point is referred to a left-handed system, you have to transform the point to the ArUco right-handed camera system before applying the inverse marker transformation.

Considering your picture, you can accomplish this by simply negating the Y coordinate of the finger point.


Hi sgarrido. I can see in the ArUco documentation that the Marker class does provide the R and T required for the transformation from the marker's coordinate system to the camera's system. However, I use a board of markers. Can I also get the same variables through the Board class? I use OpenGL for rendering.

(2015-04-20 04:06:38 -0600)

Yes, the method BoardDetector::getDetectedBoard() returns an object of type Board which includes the Rvec and Tvec of the detected board. There are also OpenGL integration examples for both single markers and boards.

(2015-04-20 04:21:46 -0600)

Great! Thank you very much!

(2015-04-20 06:18:16 -0600)

Oh, by the way: I have the 3D point of the finger with respect to the depth camera, while the color camera is the one used for marker tracking. What change should I make to correct this offset between the depth and color streams?

(2015-04-20 08:22:58 -0600)

You need the transformation between the depth camera's system and the RGB camera's system. Then you can transform points the same way you do between the board's system and a camera's system.

(2015-04-20 09:42:15 -0600)

Yes, but since they are really close to each other, all I need is an x-offset, right?

(2015-04-21 04:27:45 -0600)

sgarrido, why is the website of ArUco down?

(2015-04-23 11:12:00 -0600)

That depends on your cameras' arrangement. About the website: it seems to be a problem with the University's hosting. I hope it gets fixed soon.

(2015-04-24 02:29:17 -0600)

Rvec and Tvec give the marker's pose w.r.t. the camera, correct?

(2015-04-27 09:53:16 -0600)

Hi,

The first thing I would do is to find the pose of the marker in relation to the camera frame:

If you can display the marker frame in your image, you should somehow know the pose of the marker.

As you already have the coordinates of the finger in the camera frame, the coordinates of the finger in the marker frame could be obtained:

Normally, ArUco should give you the pose of the marker, but I have never used this library. The rest is matrix inversion and matrix multiplication.


