
Image Registration Between 2 Cameras of Fixed Distance

asked 2019-07-11 10:16:25 -0500

So I am trying to calculate the NDVI (normalized difference vegetation index), which requires aligned images from both my infrared camera and my normal color camera. Both cameras are part of the Intel RealSense camera that I'm using, so they are a fixed distance apart. I am trying to automatically register every frame captured by the infrared camera to the corresponding frame captured by the color camera.

To do this, I used ORB features to find common points and RANSAC to exclude outliers and estimate a homography matrix, which I then applied to the infrared capture with warpPerspective so it would be aligned with the color capture. I found that this caused the image to shake and sway unacceptably between frames. Since the offset between the two cameras is a fixed physical constant, I simply averaged the homography matrices over many frames until they converged to a stable matrix. This yielded great results! But it's not perfect...

I notice that relative features of the images are not aligned. For instance, there is a pillar in the background. When I position my finger in the foreground so that it just barely "touches" the pillar in the color capture, my finger has a small but non-negligible separation (about 10 pixels) from the pillar in the infrared capture. Perfectly aligning the images would require a tiny (right-handed) rotation about the y-axis (and perhaps other adjustments). After making small increases and decreases to each entry of the homography matrix, it seems to me that no possible homography matrix can solve this issue!
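That ~10-pixel residual is consistent with stereo parallax: the pixel shift between two cameras separated by a baseline depends on depth, so no single homography can match both the near finger and the far pillar at once. A back-of-the-envelope check, using hypothetical D435-like numbers (the real values come from the camera's intrinsics/extrinsics):

```python
# Hypothetical numbers: ~615 px focal length, ~15 mm IR-to-color baseline.
f_px = 615.0        # focal length in pixels
baseline_m = 0.015  # baseline in metres

def disparity(depth_m):
    # Pixel shift between the two views for a point at this depth.
    return f_px * baseline_m / depth_m

near = disparity(0.5)  # fingertip at half a metre
far = disparity(3.0)   # pillar three metres away
print(near - far)      # ~15 px difference that no homography can remove
```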

So is there a more complete solution I can use to perfectly align infrared images coming from a camera a fixed distance from the color camera? I list some extra information for an idea that I have, but if you already know a solution that would work, don't be biased by the discussion below!

More information: the RealSense cameras have an API that allows me to align the original depth images with the color images (and the API has no way to align two non-depth images to each other, as I am trying to do). However, according to a post I read while looking for more information, the original depth image (unaligned to the color image) is by default aligned to the left infrared camera! It was therefore suggested that I find a map that captures the relationship between the unaligned depth image and the aligned depth image, so that I could apply that map to the infrared image.

In general, I am looking for a matrix that captures the relationship between the color capture and infrared capture, such that all I need to do is apply that matrix to the infrared capture to align it with the color capture. I am using Python 2.


1 answer


answered 2019-07-11 12:13:25 -0500

Eduardo

A homography only holds in the following cases:

  • a planar scene
  • a general scene under purely rotational camera motion (e.g. a panorama)

Your scene is neither, which is why no single homography can remove the depth-dependent offset you observed.

If you want to align the left infrared image to the color image, you can:

  • align the color image with the depth stream
  • the left IR and the color images are now aligned since the left IR is aligned with the depth stream by design
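A rough pyrealsense2 sketch of this first approach (untested, and the stream resolutions/formats are illustrative assumptions; the key point is aligning color into the depth/left-IR viewpoint with rs.align(rs.stream.depth)):

```python
def align_color_to_depth():
    # Deferred import so the sketch loads without the RealSense SDK installed.
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        # Align the color frame into the depth (== left IR) viewpoint,
        # rather than the usual depth-to-color direction.
        align = rs.align(rs.stream.depth)
        aligned = align.process(frames)
        color_in_ir_view = aligned.get_color_frame()
        left_ir = frames.get_infrared_frame(1)
        return color_in_ir_view, left_ir
    finally:
        pipeline.stop()
```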

Another possibility:

  • compute the [uv] map that aligns the depth image to the color image
  • apply the [uv] map to the left IR image

Comments

I've aligned my depth stream to the color stream (through the RealSense API), but I still need to align my infrared stream, since the RealSense API has no built-in way to do this. The second possibility sounds exactly like what I need, though!

I have access to the original depth image as well as the aligned depth image. How can I compute the uv map that would emulate that alignment?

NickavGnaro ( 2019-07-11 12:30:47 -0500 )

For the first case, you have to align the color stream to the depth map, not the depth map to the color stream (see here).

For the second case, something like:

  • [u,v]_depth --> [X, Y, Z]_depth (using depth map and depth intrinsics)
  • [X, Y, Z]_depth --> [X, Y, Z]_color (using extrinsics between depth and color)
  • [X, Y, Z]_color --> [u,v]_color (using color intrinsics)

You may want to use bilinear interpolation for better results.
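In plain pinhole terms (ignoring lens distortion), those three steps look like the sketch below. The intrinsics and extrinsics here are hypothetical D435-like numbers; with pyrealsense2 you would query the real ones and use rs2_deproject_pixel_to_point, rs2_transform_point_to_point and rs2_project_point_to_pixel instead:

```python
def deproject(u, v, z, fx, fy, cx, cy):
    # [u,v]_depth + depth value -> 3-D point in the depth camera frame.
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def transform(p, R, t):
    # Apply extrinsics (rotation R, translation t): depth frame -> color frame.
    x, y, z = p
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3))

def project(p, fx, fy, cx, cy):
    # 3-D point in the color camera frame -> [u,v]_color.
    x, y, z = p
    return (fx * x / z + cx, fy * y / z + cy)

# Hypothetical values: ~615 px focal length, VGA principal point,
# identity rotation plus a 15 mm baseline along x.
fx = fy = 615.0
cx, cy = 320.0, 240.0
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.015, 0.0, 0.0]

p = deproject(400, 300, 1.2, fx, fy, cx, cy)        # depth pixel at 1.2 m
u, v = project(transform(p, R, t), fx, fy, cx, cy)  # its location in the color image
```

Doing this for every depth pixel gives the [uv] map you can then apply to the left IR image.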

Eduardo ( 2019-07-12 03:12:40 -0500 )
