Matching RGB image with point cloud.

I have two different sensors: one captures an RGB image (an Intel RealSense SR300) and one gives me a 3D point cloud (a DUO MC stereo camera). How can I integrate the two? Is it possible to match the pixels of the RGB image to the points in the point cloud? I need a separate depth sensor because the SR300 does not work in the presence of ambient light, and the DUO MC only gives a monochrome 2D image.

If I take a grayscale version of the RGB image (from the SR300) and use some sort of matching against the monochrome 2D image (from the DUO MC), could that work? Any alternatives would be super helpful as well.



Yes, it can be done if you know the transformation between the color frame and the depth frame.

A 3D point expressed in the depth frame can be transformed into the color frame using the homogeneous transformation between the two frames. This transformation can be estimated by calibration; the color and depth cameras must be rigidly mounted with respect to each other, otherwise the calibration must be redone.
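As a minimal sketch of this transform step with NumPy: the rotation R and translation t below are made-up placeholder values; in practice they come from your extrinsic calibration between the two cameras.

```python
import numpy as np

# Placeholder extrinsics (hypothetical values; obtain the real ones by
# calibrating the color camera against the depth camera).
# R: rotation from the depth frame to the color frame, t: translation (meters).
R = np.eye(3)                    # assume the frames are aligned in rotation
t = np.array([0.025, 0.0, 0.0])  # e.g. a 2.5 cm offset along x

# Build the 4x4 homogeneous transformation from the depth to the color frame.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# A 3D point expressed in the depth frame, in homogeneous coordinates.
P_depth = np.array([0.1, 0.2, 1.0, 1.0])

# The same point expressed in the color frame.
P_color = T @ P_depth
```

With the identity rotation this simply shifts the point by t, but the same 4x4 matrix form handles an arbitrary rotation + translation.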

Then, you can project the 3D point expressed in the color frame onto the image plane using the color intrinsic parameters to get the RGB values.

The relevant equations are below.

• To transform a 3D point expressed in the depth frame into the color frame using the known transformation (rotation R + translation t) between the color and the depth frames:

    P_c = R · P_d + t

• To project a 3D point P_c = (X, Y, Z) expressed in the color frame onto the image plane using the color intrinsic parameters (fx, fy, cx, cy):

    x = X / Z,  y = Y / Z

The color pixel (u, v) corresponding to the 3D point from the point cloud can then be obtained (here without taking into account the distortion):

    u = fx · x + cx,  v = fy · y + cy
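Putting the two steps together, a minimal NumPy sketch: the extrinsics R, t and the intrinsics fx, fy, cx, cy are placeholder values, so substitute the ones from your own calibration.

```python
import numpy as np

def point_to_pixel(P_depth, R, t, fx, fy, cx, cy):
    """Transform a 3D point from the depth frame to the color frame,
    then project it onto the color image plane (distortion ignored)."""
    # 1) Rigid transformation into the color frame.
    X, Y, Z = R @ P_depth + t
    # 2) Perspective projection to normalized image coordinates.
    x, y = X / Z, Y / Z
    # 3) Apply the color intrinsics to get pixel coordinates.
    u = fx * x + cx
    v = fy * y + cy
    return u, v

# Placeholder calibration values (replace with your own).
R = np.eye(3)
t = np.array([0.025, 0.0, 0.0])
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

u, v = point_to_pixel(np.array([0.1, 0.2, 1.0]), R, t, fx, fy, cx, cy)
# After a bounds check against the image size, round to the nearest pixel
# and read the color, e.g.: color = rgb_image[int(round(v)), int(round(u))]
```

Repeating this for every point in the cloud gives you an RGB value per 3D point; points that project outside the color image (or behind the camera, Z <= 0) simply get no color.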


Thanks for the answer. Do you have any links to help me with this? I'm pretty new to this, and while I partially understand the concept of what you're saying, I have little idea on how to go about implementing this.

(2018-01-08 07:33:13 -0500)

So, now I get a monochrome image and a corresponding point cloud from the DUO camera. I thought it would be easier to match the RGB image to the monochrome image, and thereby to the point cloud. Am I missing something?

(2018-01-08 07:35:07 -0500)

It is the same thing: you will still need the transformation between the RGB frame and the monochrome frame. A simple 2D matching / registration of the color and monochrome images will not work for a 3D (non-planar) scene.

With a single camera the depth cannot be reconstructed, which is why you need a stereo sensor.

(2018-01-08 17:53:59 -0500)