Revision history [back]

If I have understood the question correctly, what you want is to register the IR image with the color image. The depth image is aligned with the IR image.

In this case, you should be able to use the same principle as the one used by the popular depth sensor APIs (Kinect SDK, librealsense, etc.):

• assuming the depth/IR and the color sensors are rigidly attached with a known transformation between the two sensors (extrinsic calibration)
• assuming the depth/IR and the color sensors have known intrinsic parameters (intrinsic calibration)
• for a [u,v] coordinate in the depth image, you can compute the 3D coordinate [X, Y, Z] in the depth frame from the depth value and the depth intrinsics
• you can transform a 3D point from the depth frame to the color frame using the homogeneous transformation matrix ([R|t])
• you can then project the 3D point into the color image plane using the color intrinsic parameters to get the corresponding color pixel for the [u,v] depth/IR image coordinate

See for instance the librealsense samples for code that performs this registration.

Edit:

The following code should demonstrate how to align a color image to a depth/IR image:

    cv::Mat rvec_depth2color; //rvec_depth2color is a 3x1 rotation vector
    cv::Rodrigues(R_depth2color, rvec_depth2color); //R_depth2color is a 3x3 rotation matrix

    std::vector<cv::Point3d> src(1);
    std::vector<cv::Point2d> dst;
    //The aligned image lives in the depth/IR frame, so allocate it with the depth image size
    cv::Mat color_image_aligned = cv::Mat::zeros(depth_image.size(), color_image.type());
    for (int i = 0; i < depth_image.rows; i++) {
        for (int j = 0; j < depth_image.cols; j++) {
            //Check if the current point is valid (correct depth)
            if ((*pointcloud)(j,i).z > 0) { //pointcloud is a pcl::PointCloud<pcl::PointXYZ>::Ptr; organized clouds are accessed as (col,row)
                //XYZ 3D depth coordinate at the current [u,v] 2D depth image coordinate
                cv::Point3d xyz_depth( (*pointcloud)(j,i).x, (*pointcloud)(j,i).y, (*pointcloud)(j,i).z );
                src[0] = xyz_depth;

                //Transform the 3D depth coordinate to the color frame and project it onto the color image plane
                cv::projectPoints(src, rvec_depth2color, t_depth2color, cam_color, dist_color, dst);
                cv::Point2d uv_color = dst.front();

                //Clamp the pixel coordinate to the color image bounds
                int u_color = std::max( std::min(color_image.cols-1, cvRound(uv_color.x)), 0 );
                int v_color = std::max( std::min(color_image.rows-1, cvRound(uv_color.y)), 0 );

                //Copy pixel
                color_image_aligned.at<cv::Vec3b>(i,j) = color_image.at<cv::Vec3b>(v_color, u_color);
            }
        }
    }