
Get the 3D Point in another coordinate system

asked 2015-04-17 09:40:01 -0600

Hi there! I have a system which uses an RGB-D camera and a marker. I can successfully get the marker's coordinate-system origin (the center of the marker) using an augmented reality library (ArUco). Using the same camera, I also managed to get the 3D position of my finger with respect to the camera's world coordinate system, (x', y', z'). Now what I want is to apply a transformation to the finger's 3D position (x', y', z') so that I get a new (x, y, z) with respect to the marker's coordinate system. It is also worth mentioning that the camera's coordinate system is left-handed, while the marker's coordinate system is right-handed. Here is a picture: [image]

Can you tell me what I have to do? Are there any OpenCV functions, or any calculation I could do in C++, to get the required result?


Comments

Hi. If you found a solution to this problem, could you please post the code?

GigaFlopsis ( 2016-07-29 03:54:37 -0600 )

2 answers


answered 2015-04-19 04:12:34 -0600 by sgarrido, updated 2015-04-19 07:55:59 -0600

Hi,

ArUco provides the transformation from the marker's coordinate system to the camera's system.

As Eduardo said, you can transform the finger point to the marker's system just by applying the inverse transformation.

However, you say that your camera's coordinate system is left-handed, while the transformation provided by ArUco assumes a right-handed camera system (the same convention as OpenCV).

If your finger point is referred to a left-handed system, you have to transform the point to the ArUco right-handed camera system before applying the inverse marker transformation.

Considering your picture, you can accomplish this by simply negating the Y coordinate of the finger point.
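To make this concrete, here is a minimal C++/OpenCV sketch (not ArUco's own API; all names are illustrative). It assumes the marker pose is available as a rotation vector Rvec and a translation Tvec (marker to camera), and that, as in the picture, the only handedness difference is the Y axis:

```cpp
// Minimal sketch: `fingerLH` is the finger point in the left-handed camera
// frame; `Rvec`/`Tvec` are assumed to be the marker pose (marker -> camera).
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Point3d fingerInMarkerFrame(const cv::Point3d& fingerLH,
                                const cv::Mat& Rvec, const cv::Mat& Tvec)
{
    // 1. Left-handed -> right-handed camera frame: negate Y
    //    (this particular flip is specific to the setup in the picture).
    cv::Mat p = (cv::Mat_<double>(3, 1) << fingerLH.x, -fingerLH.y, fingerLH.z);

    // 2. Rotation vector -> 3x3 rotation matrix, translation as a 3x1 column.
    cv::Mat rvec64, t, R;
    Rvec.convertTo(rvec64, CV_64F);
    Tvec.convertTo(t, CV_64F);
    t = t.reshape(1, 3);
    cv::Rodrigues(rvec64, R);

    // 3. Invert the marker->camera transform: X_marker = R^T * (X_camera - t).
    cv::Mat pm = R.t() * (p - t);
    return cv::Point3d(pm.at<double>(0), pm.at<double>(1), pm.at<double>(2));
}
```

The same result can be obtained by building the full 4x4 homogeneous transform and inverting it, as described in the other answer.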


Comments

Hi sgarrido. I can see in the ArUco documentation that the Marker class does provide the R and T required to get the transformation from the marker's coordinate system to the camera's system. However, I use a board of markers. Can I also get the same variables through the Board class? I use OpenGL for rendering.

marios.b ( 2015-04-20 04:06:38 -0600 )

Yes, the method BoardDetector::getDetectedBoard() returns an object of type Board which includes the Rvec and Tvec of the detected board. There are also OpenGL integration examples for both single markers and boards.

sgarrido ( 2015-04-20 04:21:46 -0600 )

Great! Thank you very much!

marios.b ( 2015-04-20 06:18:16 -0600 )

Oh, by the way: I have the 3D point of the finger with respect to the depth camera, while the color camera is the one used for marker tracking. What change should I make to correct this offset between the depth and color streams?

marios.b ( 2015-04-20 08:22:58 -0600 )

You need the transformation between the depth camera's system and the RGB camera's system. Then you can transform points in the same way you do between the board's system and the camera's system.

sgarrido ( 2015-04-20 09:42:15 -0600 )
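For illustration only, applying a known depth-to-color extrinsic calibration could look like the sketch below; R_dc and t_dc are assumed to come from a stereo calibration (e.g. cv::stereoCalibrate) or from the camera vendor's SDK, and the names are illustrative:

```cpp
// Sketch: transform a 3D point from the depth camera frame into the
// color camera frame, given the depth->color extrinsics (R_dc, t_dc).
#include <opencv2/core.hpp>

cv::Point3d depthToColor(const cv::Point3d& pDepth,
                         const cv::Matx33d& R_dc, const cv::Vec3d& t_dc)
{
    // X_color = R_dc * X_depth + t_dc
    cv::Vec3d p(pDepth.x, pDepth.y, pDepth.z);
    cv::Vec3d pc = R_dc * p + t_dc;
    return cv::Point3d(pc[0], pc[1], pc[2]);
}
```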

Yes, but since they are really close to each other, all I need is an x-offset, right?

marios.b ( 2015-04-21 04:27:45 -0600 )

sgarrido, why is the ArUco website down?

marios.b ( 2015-04-23 11:12:00 -0600 )

That depends on your cameras' arrangement. As for the website, it seems to be a problem with the university's hosting. I hope it gets fixed soon.

sgarrido ( 2015-04-24 02:29:17 -0600 )

Rvec and Tvec give the marker position w.r.t. the camera, correct?

marios.b ( 2015-04-27 09:53:16 -0600 )

answered 2015-04-17 11:08:11 -0600 by Eduardo

Hi,

The first thing I would do is to find the pose of the marker in relation to the camera frame:

$${}^{c}\mathbf{T}_{m} = \begin{bmatrix} {}^{c}\mathbf{R}_{m} & {}^{c}\mathbf{t}_{m} \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix}$$

If you can display the marker's coordinate frame in your image, the pose of the marker must already be available somewhere in your code.

As you already have the coordinates of the finger in the camera frame, the coordinates of the finger in the marker frame could be obtained:

$${}^{m}\mathbf{X} = \left({}^{c}\mathbf{T}_{m}\right)^{-1} \, {}^{c}\mathbf{X}$$

Normally, ArUco should give you the pose of the marker, but I have never used this library. The rest is a matrix inverse and a matrix multiplication.
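As a sketch of that inverse-and-multiply step (not code from ArUco; names are illustrative), assuming Rvec and Tvec encode the marker-to-camera transform cTm and fingerCam is the finger point in the camera frame:

```cpp
// Sketch: build the 4x4 homogeneous transform cTm from Rvec/Tvec,
// invert it, and apply it to the finger point given in the camera frame.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Matx44d poseToHomogeneous(const cv::Mat& Rvec, const cv::Mat& Tvec)
{
    cv::Mat rvec64, tvec64, R;
    Rvec.convertTo(rvec64, CV_64F);
    Tvec.convertTo(tvec64, CV_64F);
    cv::Rodrigues(rvec64, R);          // rotation vector -> 3x3 matrix

    cv::Matx44d T = cv::Matx44d::eye();
    for (int r = 0; r < 3; ++r) {
        for (int c = 0; c < 3; ++c)
            T(r, c) = R.at<double>(r, c);
        T(r, 3) = tvec64.at<double>(r);
    }
    return T;                          // cTm
}

cv::Vec4d transformToMarker(const cv::Matx44d& cTm, const cv::Point3d& fingerCam)
{
    // mX = (cTm)^-1 * cX, using homogeneous coordinates
    cv::Matx44d mTc = cTm.inv();
    return mTc * cv::Vec4d(fingerCam.x, fingerCam.y, fingerCam.z, 1.0);
}
```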


Comments

Oh, by the way: I have the 3D point of the finger with respect to the depth camera, while the color camera is the one used for marker tracking. What change should I make to correct this offset between the depth and color streams?

marios.b ( 2015-04-20 08:32:44 -0600 )
