
Kinect One stereo calibration and overlaying RGB with the depth map

asked 2015-12-07 04:29:03 -0600 by theodore, updated 2015-12-07 12:13:03 -0600

Lately I have been working with the Kinect One sensor, which comes with two cameras: an RGB camera with a resolution of 1920x1080 and an IR/depth sensor with a resolution of 512x424. So far I have managed to acquire the views from the Kinect sensors using the Kinect SDK v2.0 with OpenCV. Now I would like to combine the RGB view with the depth view in order to create an RGBD image. Since I had no prior experience with anything like this, I did some research and found out that I first need to calibrate the two sensors and afterwards map the two views onto each other. The suggested approach is to:

  1. calibrate the two cameras individually and extract the intrinsics (cameraMatrix) and distortion coefficients (distCoeffs) by calling cv::calibrateCamera() on each view (see the sketch just below this list);
  2. pass the above information to cv::stereoCalibrate() in order to extract the rotation (R) and translation (T) between the cameras, which are needed for the remapping;
  3. apply the remapping (not sure yet how to do that; still searching).
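For step 1, a minimal sketch of the per-camera calibration, assuming objectPoints/imagePoints come from your own findCirclesGrid() detections on the asymmetric circles pattern (calibrateOne is just a hypothetical helper name):

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Calibrate one camera; returns the RMS reprojection error
// (the ~0.3 / ~0.1 figures mentioned below).
double calibrateOne(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                    const std::vector<std::vector<cv::Point2f> >& imagePoints,
                    cv::Size imageSize,
                    cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs; // per-view board poses, unused here
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}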

My progress so far: I implemented step 1 without problems, using an asymmetric circles grid and cv::calibrateCamera(), with quite nice results. The RMS reprojection error for the RGB sensor is ~0.3 and for the IR sensor ~0.1 (it should be between 0.1 and 1, ideally <0.5). However, when I pass the extracted intrinsics and distCoeffs of the two sensors to cv::stereoCalibrate(), the reprojection error is never below 1. The best I managed was ~1.2 (I guess this should also be below 1, ideally <0.5), with an epipolar error of around ~6.0, no matter which flags I used. For predefined intrinsics and distCoeffs the documentation suggests enabling CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_INTRINSIC, but that did not help, so I am trying to figure out what is wrong. Another question regarding stereoCalibrate(): one of its parameters is the image size, but since I have two image sizes (1920x1080 and 512x424), which one should I use? For now I am using 512x424, since that is the size onto which I want to map the RGB pixel values, though I am not sure this is correct.
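For reference, a sketch of such a stereoCalibrate() call (my naming; the argument order follows OpenCV 3.x, where flags come after the output matrices, while in 2.4 criteria comes first). Note that the imageSize argument is only used to initialize the intrinsic matrices, so with the intrinsics fixed the 512x424 choice is harmless:

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Solve only for the IR->RGB rotation/translation, keeping both
// (already good) sets of intrinsics fixed.
double stereoExtrinsics(
    const std::vector<std::vector<cv::Point3f> >& objectPoints,
    const std::vector<std::vector<cv::Point2f> >& imagePointsIR,
    const std::vector<std::vector<cv::Point2f> >& imagePointsRGB,
    cv::Mat& Kir, cv::Mat& Dir, cv::Mat& Krgb, cv::Mat& Drgb,
    cv::Mat& R, cv::Mat& T)
{
    cv::Mat E, F; // essential and fundamental matrices, unused here
    return cv::stereoCalibrate(objectPoints, imagePointsIR, imagePointsRGB,
                               Kir, Dir, Krgb, Drgb,
                               cv::Size(512, 424), // init hint only, see above
                               R, T, E, F,
                               cv::CALIB_FIX_INTRINSIC);
}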

If I continue and use the extracted values from above despite the bad RMS, the rectification is not that good either. I would therefore appreciate any feedback from someone with experience in this. Thanks.
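For step 3, a minimal sketch of what the remapping could look like (my naming; both views are warped into the 512x424 depth resolution here):

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Rectify both views into a common depth-sized frame; after this the
// epipolar lines are horizontal and aligned across the two images.
void rectifyPair(const cv::Mat& irImg, const cv::Mat& rgbImg,
                 const cv::Mat& Kir, const cv::Mat& Dir,
                 const cv::Mat& Krgb, const cv::Mat& Drgb,
                 const cv::Mat& R, const cv::Mat& T,
                 cv::Mat& irRect, cv::Mat& rgbRect)
{
    cv::Size size(512, 424);
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(Kir, Dir, Krgb, Drgb, size, R, T,
                      R1, R2, P1, P2, Q, cv::CALIB_ZERO_DISPARITY, 0, size);

    cv::Mat mapIr1, mapIr2, mapRgb1, mapRgb2;
    cv::initUndistortRectifyMap(Kir, Dir, R1, P1, size, CV_32FC1, mapIr1, mapIr2);
    cv::initUndistortRectifyMap(Krgb, Drgb, R2, P2, size, CV_32FC1, mapRgb1, mapRgb2);

    // use INTER_NEAREST instead if irImg is a raw depth map, so that
    // neighbouring depth values are not blended together
    cv::remap(irImg, irRect, mapIr1, mapIr2, cv::INTER_LINEAR);
    cv::remap(rgbImg, rgbRect, mapRgb1, mapRgb2, cv::INTER_LINEAR);
}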

Update

OK, after playing with it a bit more I managed to obtain a good RMS from stereoCalibrate() as well (~0.4). It turned out that the pictures from the RGB camera that I was using before were quite blurry, so I shot new videos and that helped. You can also see the result in the images below, which is good; notice the green horizontal lines along ...


Comments

I am not sure, but the ring in the first three images looks like the one in this question.

Eduardo (2015-12-07 13:20:45 -0600)

Thanks @Eduardo, I will have a look at it.

theodore (2015-12-08 03:41:31 -0600)

So, are you sure that the very large stereoCalibrate() RMS was due to the blurry pictures? I have the same problem, and mine is even worse: my stereoCalibrate() RMS is never below 30!

german_iris (2016-01-29 03:38:10 -0600)

Blurry images will certainly cause a problem, so I would suggest avoiding them if you can.

theodore (2016-01-29 03:53:18 -0600)

2 answers


answered 2016-03-24 05:09:34 -0600 by kbarni, updated 2016-03-24 12:19:55 -0600

If I understand you correctly, you want to map the depth (or IR) frame onto the RGB frame, using the Kinect SDK v2.

I don't really understand why you want to calibrate the sensor yourself and match the frames in OpenCV. The Kinect is very well calibrated, and the SDK provides all the needed functions in the CoordinateMapper class (frame-to-frame, point-to-frame). It is also much faster, since it uses the depth data and precomputed correction tables directly, and it runs on the GPU.

These functions are present in OpenNI and libfreenect2 as well.
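For illustration, this is roughly what the SDK route looks like (a sketch under my own assumptions about the surrounding capture code; MapDepthFrameToColorSpace is the relevant CoordinateMapper method):

#include <Kinect.h> // Kinect SDK v2
#include <vector>

// For each of the 512x424 depth pixels, ask the sensor's factory
// calibration for the matching (sub-pixel) RGB coordinate.
// "mapper" comes from IKinectSensor::get_CoordinateMapper();
// "depthBuffer" holds one raw depth frame.
void mapDepthToColor(ICoordinateMapper* mapper, const UINT16* depthBuffer)
{
    const UINT depthPoints = 512 * 424;
    std::vector<ColorSpacePoint> colorCoords(depthPoints);
    HRESULT hr = mapper->MapDepthFrameToColorSpace(
        depthPoints, depthBuffer, depthPoints, colorCoords.data());
    if (SUCCEEDED(hr)) {
        // colorCoords[i] is now the 1920x1080 pixel position of depth
        // pixel i (negative infinity where no valid mapping exists)
    }
}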

[Answer updated based on the comments below]

Note that the precise correction parameters are stored in the firmware of each Kinect, so it is better to use those than your own correction data. Even if you are doing offline registration, you should still use these parameters.

You can extract these parameters using libfreenect2 (Freenect2Device::getIrCameraParams and getColorCameraParams).

Then you can use the Registration::distort and Registration::depth_to_color functions with these parameters (both are in registration.cpp).
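A rough sketch of the libfreenect2 route (my variable names; Registration::apply() wraps the distort/depth_to_color steps for whole frames):

#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/registration.h>

// "dev" is an opened libfreenect2::Freenect2Device*; "rgb" and "depth"
// are the raw frames delivered by the frame listener.
void registerFrames(libfreenect2::Freenect2Device* dev,
                    libfreenect2::Frame* rgb, libfreenect2::Frame* depth)
{
    // Built from the factory calibration stored in the device firmware
    libfreenect2::Registration reg(dev->getIrCameraParams(),
                                   dev->getColorCameraParams());

    libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);
    reg.apply(rgb, depth, &undistorted, &registered);

    // Recent libfreenect2 can also give a 3D point plus packed color
    // per depth pixel, e.g. here for the central pixel:
    float x, y, z, rgbPacked;
    reg.getPointXYZRGB(&undistorted, &registered, 212, 256, x, y, z, rgbPacked);
}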

If you don't have a Kinect, you might try to use the parameters I extracted from my sensor:

IR: fx=365.481 fy=365.481 cx=257.346 cy=210.347 k1=0.089026 k2=-0.271706 k3=0.0982151 p1=0 p2=365.481

Color: fx=1081.37 fy=1081.37 cx=959.5 cy=539.5 shift_d=863 shift_m=52 mx_x3y0=0.000449294 mx_x0y3=1.91656e-05 mx_x2y1=4.82909e-05 mx_x1y2=0.000353673 mx_x2y0=-2.44043e-05 mx_x0y2=-1.19426e-05 mx_x1y1=0.000988431 mx_x1y0=0.642474 mx_x0y1=0.00500649 mx_x0y0=0.142021 my_x3y0=4.42793e-06 my_x0y3=0.000724863 my_x2y1=0.000398557 my_x1y2=4.90383e-05 my_x2y0=0.000136024 my_x0y2=0.00107291 my_x1y1=-1.75465e-05 my_x1y0=-0.00554263 my_x0y1=0.641807 my_x0y0=0.0180811

You might try to copy the functions I mentioned earlier from registration.cpp and use them with the parameters above.


Comments

I have recorded some frames (color and depth, as PNG files) at 30 fps, and I didn't save the CoordinateMapper data. My goal is to post-process these frames and compute 3D points from the color map. I understand that the only way to do this is to stereo-calibrate the Kinect that I used; then I'll compute 3D points from the depth map and apply the rotation and translation matrices.

Did I miss something, or is this the correct way?

cocs78 (2016-03-24 07:27:51 -0600)

You say you have the color and depth frames from the Kinect, so I still don't understand why you want to compute the depth map. The ToF-reconstructed depth map is much better than anything you can obtain with stereoscopic reconstruction.

kbarni (2016-03-24 11:14:03 -0600)

First of all, thanks for helping me.

Secondly, I don't want to compute a depth map. I just want to get a 3D point for a pixel in the color map, just like the MapColorPointToCameraSpacePoint function in the CoordinateMapper class.

cocs78 (2016-03-24 11:36:38 -0600)

If you have the Kinect, you can extract the necessary parameters for the registration using libfreenect2 (Freenect2Device::getIrCameraParams and getColorCameraParams).

Then you can use the Registration::distort and Registration::depth_to_color functions with these parameters (from the registration.cpp).

kbarni (2016-03-24 11:53:11 -0600)

Thanks a lot, I'll take a look at libfreenect2 and let you know if it works.

cocs78 (2016-03-24 12:03:45 -0600)


@cocs78 sorry for my delayed response. In my case I abandoned the official SDK and am using the libfreenect2 library as well. With libfreenect2 I was able to save the RGB and depth images, already registered by the Registration::apply() function. This function can also give you the RGBXYZ information directly; however, when I did my recording I was not aware of that, so I did not save it. Now that I needed to create a point cloud of my data, I had to manually extract the camera parameters as @kbarni describes above and then build the point cloud with the PCL library. I can provide you this code if you want.

theodore (2016-03-24 17:26:50 -0600)

OK, thank you for your reply, I will try it today. Of course, if you could share your code with me, it would be really helpful. How should we proceed? GitHub?

cocs78 (2016-03-25 03:49:42 -0600)

@cocs78 with the following you should be fine. Get depth.h and depth.cpp and create your point cloud as follows:

#include <opencv2/highgui/highgui.hpp>
#include <pcl/visualization/pcl_visualizer.h>
#include "depth.h" // provides tools::Camera and tools::Depth

// Color-camera intrinsics (cx, cy, fx, fy), extracted as described above
tools::Camera cameraParams(959.5, 539.5, 1081.37, 1081.37);
cv::Mat mat = cv::imread("color_img.png");
// load the 16-bit depth map unchanged
cv::Mat depth = cv::imread("depth_img.png", CV_LOAD_IMAGE_ANYDEPTH);

// back-project every depth pixel and color it from the BGR image
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr pointCloud =
    tools::Depth::pointCloudFromBGRAndDepth(mat, depth, cameraParams);

pcl::visualization::PCLVisualizer viewer("PCL Viewer");
viewer.setBackgroundColor(0.0, 0.0, 0.5);
viewer.addPointCloud<pcl::PointXYZRGBA>(pointCloud, "point cloud");

while (!viewer.wasStopped()) { viewer.spinOnce(); }
theodore (2016-03-25 18:25:28 -0600)
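For anyone without access to the depth.h/depth.cpp files above: the core of such a pointCloudFromBGRAndDepth-style helper is presumably the standard pinhole back-projection, something like the following (backProject is a hypothetical name; the depth value must already be metric):

#include <opencv2/core/core.hpp>

// Back-project one pixel (u, v) of a metric depth map into 3D, using
// the color intrinsics above (fx = fy = 1081.37, cx = 959.5, cy = 539.5)
cv::Point3f backProject(int u, int v, float depthMeters,
                        float fx, float fy, float cx, float cy)
{
    return cv::Point3f((u - cx) * depthMeters / fx,
                       (v - cy) * depthMeters / fy,
                       depthMeters);
}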

answered 2016-03-24 04:10:42 -0600 by cocs78, updated 2016-03-24 07:10:23 -0600

Hi,

Did you succeed?

I'm currently doing the same as you with the Kinect v2, but unfortunately I'm having problems with the stereo calibration: my RMS is really bad. I don't know whether I should calibrate the color and infrared cameras separately and then give the intrinsic parameters to cv::stereoCalibrate, or let cv::stereoCalibrate calibrate each camera itself. Also, can you tell me how many frames you used for your calibration?

I think you should take a look at this topic: http://nicolas.burrus.name/index.php/... It shows how to project your points.
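That topic boils down to: back-project each depth pixel with the depth intrinsics, move the 3D point into the color camera frame with the stereo R/T, then project it with the color intrinsics. A sketch (my naming, using your own calibration results):

#include <opencv2/core/core.hpp>

// Map one depth pixel (u, v) with metric depth into color-image
// coordinates. Kir/Krgb are the 3x3 intrinsic matrices of the depth
// and color cameras; R/T come from stereoCalibrate().
cv::Point2f depthPixelToColorPixel(int u, int v, float depthMeters,
                                   const cv::Matx33f& Kir,
                                   const cv::Matx33f& Krgb,
                                   const cv::Matx33f& R, const cv::Vec3f& T)
{
    // 3D point in the depth camera frame
    cv::Vec3f p((u - Kir(0, 2)) * depthMeters / Kir(0, 0),
                (v - Kir(1, 2)) * depthMeters / Kir(1, 1),
                depthMeters);
    // transform into the color camera frame
    cv::Vec3f q = R * p + T;
    // perspective projection with the color intrinsics
    return cv::Point2f(q[0] / q[2] * Krgb(0, 0) + Krgb(0, 2),
                       q[1] / q[2] * Krgb(1, 1) + Krgb(1, 2));
}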

