
ToF and RGB stereo vision system

asked 2018-04-04 06:05:50 -0600 by electronic_stud

Hello!

I have a hybrid stereo system which consists of an RGB camera and a ToF camera. Both camera resolutions are set to 320x240. Right now I have one variable which is a 76800x3 array of points with [x,y,z] values (point cloud screenshot attached),

and I have a 240x320x3 color image of the scene (screenshot attached).

As you can see, the cameras are a bit misaligned. But I have done stereo vision calibration and obtained the RGB and ToF camera intrinsic parameters, as well as the ToF extrinsic parameters relative to the RGB camera. The problem is how to properly align these two scenes in Matlab using OpenCV, and how to get a colorful point cloud as the result? Any information will be much appreciated.

Regards, Andris


1 answer


answered 2018-04-04 07:01:56 -0600 by Eduardo

The maths should be the following.

Given the color K_c and depth K_d intrinsic parameters:

$$K_c = \begin{bmatrix} f_x^c & 0 & c_x^c \\ 0 & f_y^c & c_y^c \\ 0 & 0 & 1 \end{bmatrix}, \qquad K_d = \begin{bmatrix} f_x^d & 0 & c_x^d \\ 0 & f_y^d & c_y^d \\ 0 & 0 & 1 \end{bmatrix}$$

Given the homogeneous transformation between the color and depth frames:

$${}^{c}\mathbf{T}_{d} = \begin{bmatrix} {}^{c}\mathbf{R}_{d} & {}^{c}\mathbf{t}_{d} \\ \mathbf{0}_{1 \times 3} & 1 \end{bmatrix}$$

The previous matrix allows transforming a 3D point expressed in one frame to another frame:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = {}^{c}\mathbf{T}_{d} \begin{bmatrix} X_d \\ Y_d \\ Z_d \\ 1 \end{bmatrix}$$
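For example, applied in code this could look like the sketch below (names are illustrative; R and t stand for the rotation and translation from your extrinsic calibration):

#include <opencv2/core.hpp>

// Transform a 3D point expressed in the depth (ToF) frame into the color frame:
// cP = cRd * dP + ctd
cv::Point3f depthToColor(const cv::Point3f& P_d,
                         const cv::Matx33f& R, const cv::Vec3f& t)
{
    cv::Vec3f P_c = R * cv::Vec3f(P_d.x, P_d.y, P_d.z) + t;
    return cv::Point3f(P_c);
}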

To get the corresponding RGB color for a given 3D point:


(Skip this as you already have the [X,Y,Z] coordinates. Sometimes, you only have the depth map.)

Get the full 3D coordinate from the depth value Z_d (using the depth intrinsic parameters) for a given [u,v] coordinate in the depth map (here without taking into account the distortion):

$$X_d = \frac{\left(u - c_x^d\right) Z_d}{f_x^d}$$

$$Y_d = \frac{\left(v - c_y^d\right) Z_d}{f_y^d}$$
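A minimal sketch of this back-projection (fx_d, fy_d, cx_d, cy_d are the ToF intrinsics; names are illustrative, distortion ignored as above):

#include <opencv2/core.hpp>

// Back-project a depth-map pixel (u, v) with depth value Z_d into a 3D point
// expressed in the ToF (depth) camera frame.
cv::Point3f backProject(float u, float v, float Z_d,
                        float fx_d, float fy_d, float cx_d, float cy_d)
{
    float X_d = (u - cx_d) * Z_d / fx_d;
    float Y_d = (v - cy_d) * Z_d / fy_d;
    return cv::Point3f(X_d, Y_d, Z_d);
}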


Transform the 3D point expressed in the depth frame to the color frame:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = {}^{c}\mathbf{T}_{d} \begin{bmatrix} X_d \\ Y_d \\ Z_d \\ 1 \end{bmatrix}$$

Project the 3D point expressed in the color frame into the image plane to get the corresponding [u_c, v_c] coordinate:

$$s \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}, \qquad u_c = f_x^c \frac{X_c}{Z_c} + c_x^c, \quad v_c = f_y^c \frac{Y_c}{Z_c} + c_y^c$$


Practically, you should be able to achieve this using projectPoints(). More details about the camera frame can be found in the OpenCV camera calibration documentation.
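A rough sketch of that call (names are illustrative: R and t are the ToF-to-RGB extrinsics from the stereo calibration, K_rgb and dist_rgb the RGB camera matrix and distortion coefficients, tofPoints3d your 76800 [X,Y,Z] points):

#include <vector>
#include <opencv2/calib3d.hpp>

// Project the ToF 3D points into the RGB image plane.
std::vector<cv::Point2f> projectToRgb(const std::vector<cv::Point3f>& tofPoints3d,
                                      const cv::Mat& R, const cv::Mat& t,
                                      const cv::Mat& K_rgb, const cv::Mat& dist_rgb)
{
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);   // projectPoints expects a rotation vector, not a 3x3 matrix
    std::vector<cv::Point2f> rgbPixels2d;
    cv::projectPoints(tofPoints3d, rvec, t, K_rgb, dist_rgb, rgbPixels2d);
    return rgbPixels2d;
}

rgbPixels2d[i] then gives the pixel in the RGB image where the i-th 3D point projects, which is where you read its color.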


Comments

Thank you for the fast response! I will try the projectPoints() function right away.

electronic_stud ( 2018-04-04 07:14:18 -0600 )

The projectPoints() function outputs a 76800x2 matrix; how could I transform that into a point cloud? Is it even possible? Or how can I represent the imagePoints output? Because it's neither an image nor a point cloud.

electronic_stud ( 2018-04-04 07:29:28 -0600 )

Code should be something similar to this:

// 3D points expressed in the ToF (depth) camera frame
std::vector<cv::Point3f> points3d;
// corresponding pixel coordinates in the color image
std::vector<cv::Point2f> points2d;
cv::projectPoints(points3d, rvec, tvec, intrinsics, distortion, points2d);

You will get a list of 2D image points in points2d. For each 3D point, you should be able to access the color image using the coordinates of the corresponding entry in points2d.
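For instance, the lookup might look like the sketch below (continuing the fragment above; imageRGB is your 240x320 color image, and colors[i] then holds the BGR color of the i-th point of the cloud):

std::vector<cv::Vec3b> colors(points3d.size(), cv::Vec3b(0, 0, 0));
for (size_t i = 0; i < points3d.size(); i++)
{
    int u = cvRound(points2d[i].x);
    int v = cvRound(points2d[i].y);
    // keep only projections that fall inside the RGB image
    if (u >= 0 && u < imageRGB.cols && v >= 0 && v < imageRGB.rows)
        colors[i] = imageRGB.at<cv::Vec3b>(v, u);
}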

Eduardo ( 2018-04-04 10:02:12 -0600 )

Okay, I got the points2d array. It's a 76800x2 matrix. Now how can I combine the RGB image with points2d? Is it possible to make it work the other way around, to get a colorful 3D point cloud as the result of combining the 3D points and the RGB information?

electronic_stud ( 2018-04-05 05:25:00 -0600 )

Hi Eduardo! I'm trying to reach my goal using only the maths you provided. I'm confused about which step assigns the color information to the point cloud. In the provided formulas I only see point coordinates getting transformed from one frame to another, but what about the color information? You mention "To get the corresponding RGB color for a given 3D point:", but I don't really understand where the color corresponding to the 3D point is calculated. Could you please explain a bit more? And what about Z_c? Should it be 0 or 1 for the RGB image, since it doesn't have any Z value, only color information? I believe it should be 0, am I right? Thank you for your help!

electronic_stud ( 2018-04-05 08:54:13 -0600 )

What the maths do is:

  • transform a 3D point expressed in the depth frame (a 3D point in the pointcloud) into the color camera frame
  • project the 3D point into the color image (this is where you get the RGB value)

I invite you to read this lecture: Lecture 2, Camera Models, by Professor Silvio Savarese, and to refer to the whole course, CS231A · Computer Vision: from 3D reconstruction to recognition, for a full overview of the topic.

Eduardo ( 2018-04-06 04:59:22 -0600 )

Okay, thank you. Gonna come back to this thread if something is still unclear after reading it!

electronic_stud ( 2018-04-06 05:45:02 -0600 )

For example, if I have done stereo calibration and Camera1 was the ToF camera, which camera intrinsic matrix should I use in projectPoints, the ToF or the RGB one? And the same question for the distortion coefficients?

electronic_stud ( 2018-04-11 05:15:21 -0600 )

Follow this path (see the sketch after this list):

  • from a 3D point expressed in the ToF camera frame
  • transform the coordinate to the RGB camera frame
  • project the 3D coordinate expressed in the RGB camera frame to the RGB image plane
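Concretely, use the RGB camera's intrinsic matrix and distortion coefficients in projectPoints, since the points are projected onto the RGB image plane; the ToF intrinsics/distortion would only come into play if you first had to back-project the depth map into 3D. A sketch (names are illustrative; R, t are the ToF-to-RGB extrinsics from stereoCalibrate):

cv::Mat rvec;
cv::Rodrigues(R, rvec);            // rotation part of the ToF -> RGB extrinsics
cv::projectPoints(pointsToF, rvec, t,
                  K_rgb, dist_rgb, // RGB intrinsics and RGB distortion coefficients
                  points2d);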
Eduardo ( 2018-04-12 08:16:21 -0600 )

I managed to do the transformation and to get the corresponding colors. Thank you very much Eduardo!

electronic_stud ( 2018-04-16 07:27:58 -0600 )
