Image Registration by Manual marking of corresponding points using OpenCV

  1. I have a processed binary image of dimension 300x300. This processed image contains a few objects (persons or vehicles).

[image: processed binary image]

  2. I also have another RGB image of the same scene, of dimension 640x480, taken from a different position.


Note: the two cameras are not the same.

I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the second image. I went through the relevant OpenCV functions.

All these functions require corresponding points (coordinates) in the two images.

In the first binary image, I only have the information that an object is present; it does not have features closely matching the second (RGB) image.

I thought conventional feature matching to determine corresponding control points (which could be used to estimate the transformation parameters) is *not feasible*, because I think I cannot extract and match features between a binary image and an RGB image. Am I right?

If I am wrong: what features could I use, and how should I proceed with feature matching, finding corresponding points, and estimating the transformation parameters?

The solution I tried is more of a manual marking approach to estimate the transformation parameters (please correct me if I am wrong):

Note: neither camera moves.

  • Manually marked rectangles around objects in the processed (binary) image
  • Noted down the coordinates of the rectangles
  • Manually marked rectangles around the same objects in the second (RGB) image
  • Noted down the coordinates of those rectangles
  • Repeated the above steps for different sample pairs of binary and RGB images

Now that I have about 20 corresponding points, I used them in the function as:

findHomography(src_pts, dst_pts, 0);

So once I detect an object in the first image, I:

  • draw a bounding box around it,
  • transform the coordinates of its vertices using the transformation found above,
  • finally draw a box in the second RGB image with the transformed coordinates as vertices.
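The vertex-transformation step above can be sketched in plain NumPy (this mirrors what `cv2.perspectiveTransform` computes: lift each point to homogeneous coordinates, multiply by H, then divide by the last component; the H and box values here are made-up examples):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T                              # apply H
    return mapped[:, :2] / mapped[:, 2:3]             # divide by w

# Made-up homography and bounding-box corners, purely for illustration
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 1.5, 30.0],
              [0.0, 0.0, 1.0]])
box = np.array([[50.0, 60.0], [120.0, 60.0], [120.0, 200.0], [50.0, 200.0]])
print(warp_points(H, box))
```

Note that a homography maps the four corners to a general quadrilateral, not necessarily an axis-aligned rectangle, so the drawn "box" in the RGB image should be a polygon through the four mapped vertices.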

But this does not place the box in the second RGB image exactly over the person/object; instead it is drawn somewhere else. Even though I take several sample pairs of binary and RGB images and use many corresponding points to estimate the transformation parameters, they do not seem accurate enough.

What do the CV_RANSAC and CV_LMEDS options and the ransacReprojThreshold parameter mean, and how should I use them?

Is my approach reasonable? What should I modify or do differently to make the registration accurate?

Is there any alternative approach I should use?
