
Replace a part of an image with another

asked 2013-10-03 01:49:47 -0600 by newBee

updated 2013-10-09 06:18:53 -0600

Hi, I created an OpenCV feature matcher using this to match an image.

As you can see, I got the match using the corners from the obj_corners array, which holds Point2f values. I am now trying to extract the boxed part of the image (the target image, I mean) into a Mat and replace it with another image.

I tried using

Rect roi(10, 20, 100, 50);
cv::Mat destinationROI = img_matches( roi );
smallImage.copyTo( destinationROI );
cv::imwrite("images/matchs2.bmp",destinationROI);

but I am not getting any fruitful result. Please suggest what to do. How do I replace the found target with the new image?
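For what it's worth, a minimal sketch of an ROI copy that keeps the sizes consistent (same roi, smallImage and img_matches names as above; the resize step is an assumption about why the copy above produces nothing useful):

// copyTo into a sub-matrix only works if source and ROI have the same
// size and type, so resize the patch to the ROI first.
cv::Rect roi(10, 20, 100, 50);
cv::Mat patch;
cv::resize(smallImage, patch, roi.size());        // match the ROI dimensions
cv::Mat destinationROI = img_matches(roi);        // header pointing into img_matches
patch.copyTo(destinationROI);                     // pixels land inside img_matches
cv::imwrite("images/matchs2.bmp", img_matches);   // save the whole image, not just the ROI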

I saw addWeighted but don't know how to use it.
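And a hedged sketch of what addWeighted could look like here, blending the patch over that region instead of overwriting it (the 0.5/0.5 weights are just an example):

// addWeighted needs both inputs with identical size and type.
cv::Rect roi(10, 20, 100, 50);
cv::Mat patch;
cv::resize(smallImage, patch, roi.size());
cv::Mat destinationROI = img_matches(roi);
// destinationROI = 0.5*destinationROI + 0.5*patch + 0
cv::addWeighted(destinationROI, 0.5, patch, 0.5, 0.0, destinationROI);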

EDIT: Here is the code:

Mat H = findHomography(obj, scene, CV_RANSAC);
//Get corners from the image
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0);
obj_corners[1] = cvPoint( img1.cols, 0 );
obj_corners[2] = cvPoint( img1.cols, img1.rows );
obj_corners[3] = cvPoint( 0, img1.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img1.cols, 0), scene_corners[1] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img1.cols, 0), scene_corners[2] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img1.cols, 0), scene_corners[3] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img1.cols, 0), scene_corners[0] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );

So how do I turn scene_corners[0] + Point2f( img1.cols, 0) into the Rect's location?
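One hedged option (my own sketch, not taken from the answer below): take the axis-aligned bounding box of the four transformed corners with cv::boundingRect. It is only an approximation, since the projected region is generally a quadrilateral, not a rectangle:

// Shift the corners by img1.cols because img_matches has the object image
// on the left and the scene image on the right.
std::vector<cv::Point2f> shifted(4);
for (int i = 0; i < 4; i++)
    shifted[i] = scene_corners[i] + cv::Point2f((float)img1.cols, 0.f);

cv::Rect box = cv::boundingRect(shifted);                    // approximate location as a Rect
box &= cv::Rect(0, 0, img_matches.cols, img_matches.rows);   // clip to the image bounds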

And the obj_corners values are:

 obj_corners[0]  0.000000
 obj_corners[1]  0.000000
 obj_corners[2]  1116892853566439400.000000
 obj_corners[3]  1116892707587883000.000000

EDIT 2: For example, consider this. This is image 1 and the test image is this. I need to replace the car (1st image) with another image like this.

Note: if possible, the replacement should stretch the same way the matched image is stretched.

EDIT 3

These are the two lines I use to draw lines on the image:

line( image_scene, P1, P2, Scalar( 0, 255, 0), 4 );
line( image_scene, scene_corners[0], scene_corners[1], Scalar( 0, 255, 0), 4 );

P1 and P2 are:

CvPoint P1,P2;
P1.x=1073;
P1.y=1081;
P2.x=0;
P2.y=0;

So when I printf their values, I get this:

printf("image_scene => %d %d\n",image_scene.size().width, image_scene.size().height);
printf("P1 & P2 => %d & %d :: %d & %d \n",P1.x, P1.y, P2.x, P2.y);
printf("scene_corners[0] & scene_corners[1] => %d & %d :: %d & %d \n",scene_corners[0].x,scene_corners[0].y ,scene_corners[1].x ,scene_corners[1].y);

Output

image_scene => 2048 1536
P1 & P2 => 1073 & 1081 :: 0 & 0
scene_corners[0] & scene_corners[1] => -1073741824 & 1081308864 :: -2147483648 & 1081400579

So as you can see, the image is just 2048 by 1536, so the x and y values should be within those ranges, right? But what I get are numbers like 1073741824 and 1081308864 instead.

so basically the scene_corners[0 ...


1 answer


answered 2013-10-03 02:09:45 -0600 by Moster

updated 2013-10-03 09:55:33 -0600

I quickly tested it with two test images. The second, smaller one showed up inside the bigger one.

cv::Size t = smallImage.size();
// ROI header pointing into img_matches; the small image is pasted at (0,0)
Mat roi(img_matches, cv::Rect(0, 0, t.width, t.height));
smallImage.copyTo(roi);

Edit:

#include "highgui/highgui.hpp"
#include "nonfree/nonfree.hpp"
#include "features2d/features2d.hpp"
#include "calib3d/calib3d.hpp"
#include "imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

using namespace cv;
using namespace std;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 4 )
  { readme(); return -1; }

  cv::initModule_nonfree();

  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_replacement = imread(argv[3], CV_LOAD_IMAGE_GRAYSCALE);

  if( !img_object.data || !img_scene.data || !img_replacement.data )
  { std::cout << " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 300;

  SurfFeatureDetector detector( minHessian, 4, 2, true, false );

  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );

  //-- Step 3: Matching descriptor vectors using FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
  std::vector< DMatch > good_matches;

  for( int i = 0; i < descriptors_object.rows; i++ )
  { if( matches[i].distance < 3*min_dist )
     { good_matches.push_back( matches[i]); }
  }

  Mat img_matches;


  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for( unsigned int i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }

  Mat H = findHomography( obj, scene, CV_RANSAC );

  //-- Get the corners from the image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
  obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform( obj_corners, scene_corners, H);

  // Resize the replacement to the object's size, then warp it with the same
  // homography so that it lands exactly on the detected object in the scene.
  Mat help;
  cv::resize(img_replacement, help, img_object.size());
  warpPerspective(help, img_replacement, H, img_scene.size());

  // Warp a full-ones mask of the object the same way; the warped mask marks
  // exactly the projected quad in the scene image.
  Mat mask = cv::Mat::ones(img_object.size(), CV_8U);
  Mat mask2;
  warpPerspective(mask, mask2, H, img_scene.size());

  // Copy only the masked (projected) region of the warped replacement into the scene.
  img_replacement.copyTo(img_scene, mask2);

  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
/*  line( img_scene, scene_corners[0] , scene_corners[1], Scalar(0, 255, 0), 4 );
  line( img_scene, scene_corners[1] , scene_corners[2], Scalar( 0, 255, 0), 4 );
  line( img_scene, scene_corners[2] , scene_corners[3], Scalar( 0, 255, 0), 4 );
  line( img_scene, scene_corners[3] , scene_corners[0], Scalar( 0, 255, 0), 4 );

  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );*/

  //-- Show detected matches
  imshow( "Good Matches & Object detection", img_scene );
  //imshow(" Derp", img_replacement);
  waitKey(0);
  return 0;
  }

  /** @function readme -- minimal usage message (assumed; the original definition was cut off) */
  void readme()
  { std::cout << " Usage: <executable> <img_object> <img_scene> <img_replacement>" << std::endl; }

Comments

This works fine but draws the image at the top left, whereas I need to draw it at the matched image's position. Check the edit.

newBee ( 2013-10-03 03:42:43 -0600 )

Yes, my code was more of an example. The task you are asking for is not that easy, since the "Rect" in your output_img is not necessarily a rect anymore due to the perspective transformation. Could you provide your test images, since I have nothing to test here :)

Moster ( 2013-10-03 04:26:37 -0600 )

Please check the edit

newBee ( 2013-10-03 05:13:01 -0600 )

Ok, I edited my post and added some code. You need the three input images (object, scene and the one you want to fit in). It is not perfect and also has a small bug with the output size. You will see it.

edit: fixed the size

Moster ( 2013-10-03 09:33:36 -0600 )

Hey, can I get the x/y position where the image is displayed, i.e. the corner points? Like (0,0,150,150) for a 150 x 150 image?

newBee ( 2013-10-04 07:21:30 -0600 )

Just look at the original source code. The points of the projected image are the scene_corners. You cannot describe them as a Rect, since the projected image is not necessarily a rectangle in the image anymore.

Moster ( 2013-10-04 07:41:13 -0600 )

Yes, I did check those points; the values are obj_corners[0] 0.000000, obj_corners[1] 0.000000, obj_corners[2] 1116892853566439400.000000, obj_corners[3] 1116892707587883000.000000

but can't I get an approximate location on the image where this might be drawn?

newBee ( 2013-10-06 23:52:02 -0600 )

I don't necessarily need a rect, but at least an approximate location on screen?

newBee ( 2013-10-06 23:54:09 -0600 )

Please check my EDIT 3, you will get what I mean.

newBee ( 2013-10-09 06:19:42 -0600 )

The scene corners are floating-point values, so try using %f in your printf.

Moster ( 2013-10-09 06:34:51 -0600 )
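For example, a minimal sketch of that fix (using the scene_corners from the question; float arguments are promoted to double in printf's varargs, so %f matches):

// Point2f members are float, so %d prints garbage; use %f instead.
printf("scene_corners[0] & scene_corners[1] => %f & %f :: %f & %f \n",
       scene_corners[0].x, scene_corners[0].y,
       scene_corners[1].x, scene_corners[1].y);
// Or avoid format specifiers entirely:
std::cout << "scene_corners[0] => " << scene_corners[0].x << " & "
          << scene_corners[0].y << std::endl;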


Stats

Asked: 2013-10-03 01:49:47 -0600

Seen: 7,683 times

Last updated: Oct 09 '13