Hi,
I created an OpenCV feature matcher using this to match an image.
As you can see, I got the match using the corners from the obj_corners
array, which is a vector of Point2f.
I am now trying to extract that boxed region from the image (the target image, I mean) into a Mat and replace it with another image.
I tried using
Rect roi(10, 20, 100, 50);                         // hard-coded rectangle just to test
cv::Mat destinationROI = img_matches( roi );       // view into the matched image
smallImage.copyTo( destinationROI );               // paste the replacement image
cv::imwrite("images/matchs2.bmp", destinationROI);
but I am not getting any fruitful result. Please suggest what to do. How do I replace the found target with the new image?
I saw addWeighted, but I don't know how to implement it.
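This is roughly what I imagine the addWeighted route would look like (just a guess from the docs, not working code; I am assuming smallImage has to be resized to the ROI size first, since addWeighted needs both inputs to be the same size and type):

Rect roi(10, 20, 100, 50);                       // same test rectangle as above
cv::Mat patch = img_matches( roi );              // region I want to replace
cv::Mat resized;
cv::resize(smallImage, resized, roi.size());     // make the replacement match the ROI size
// weight 0 for the original patch and 1 for the replacement should just overwrite it?
cv::addWeighted(patch, 0.0, resized, 1.0, 0.0, patch);

Is that the right way to use it, or is addWeighted only meant for blending?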
EDIT
Here is the relevant code:
Mat H = findHomography(obj, scene, CV_RANSAC);
//-- Get the corners from the object image (img1)
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0);
obj_corners[1] = cvPoint( img1.cols, 0 );
obj_corners[2] = cvPoint( img1.cols, img1.rows );
obj_corners[3] = cvPoint( 0, img1.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners,scene_corners,H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img1.cols, 0), scene_corners[1] + Point2f( img1.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img1.cols, 0), scene_corners[2] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img1.cols, 0), scene_corners[3] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img1.cols, 0), scene_corners[0] + Point2f( img1.cols, 0), Scalar( 0, 255, 0), 4 );
So how do I use scene_corners[0] + Point2f( img1.cols, 0) as the Rect's location?
And the obj_corners values (as printed) are:
obj_corners[0] 0.000000
obj_corners[1] 0.000000
obj_corners[2] 1116892853566439400.000000
obj_corners[3] 1116892707587883000.000000
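Assuming I can eventually get sane corner values, this is the rough idea I have for turning those shifted corners into a Rect (untested sketch, the variable names are just mine):

std::vector<cv::Point2f> shifted(4);
for (int i = 0; i < 4; i++)
    shifted[i] = scene_corners[i] + cv::Point2f((float)img1.cols, 0.f);  // same shift used when drawing the lines

cv::Rect roi = cv::boundingRect(shifted);                    // axis-aligned box around the 4 corners
roi &= cv::Rect(0, 0, img_matches.cols, img_matches.rows);   // clip it to the image
cv::Mat target = img_matches( roi );                         // the matched part as a Mat

Is that the correct way to get the Rect's location from the corners?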
EDIT 2
For example, consider this.
This is image 1, and the test image is this. I need to replace the car (from the 1st image) with another image like this.
Note: if possible, the replacement should stretch the same way the matched image is stretched in the scene.
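What I imagine is needed for the stretching part (again only a sketch of my idea, not tested; replacementImage is just a placeholder name for the new image, and image_scene is the scene photo the object was found in): warp the replacement with the same homography H and paste it over the scene through a mask.

cv::Mat replacementResized;
cv::resize(replacementImage, replacementResized, img1.size());           // same size as the reference image

cv::Mat warped;
cv::warpPerspective(replacementResized, warped, H, image_scene.size());  // stretch it the same way the match is stretched

cv::Mat mask(img1.size(), CV_8UC1, cv::Scalar(255));                     // white where the object is
cv::Mat warpedMask;
cv::warpPerspective(mask, warpedMask, H, image_scene.size());

warped.copyTo(image_scene, warpedMask);                                  // overwrite only inside the matched quad

Would that be the right direction, or is there a simpler way?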
EDIT 3
These are the two lines I use to draw lines on the image:
line(image_scene,P1,P2, Scalar( 0, 255, 0), 4);
line( image_scene, scene_corners[0], scene_corners[1] , Scalar(0, 255, 0), 4 );
P1 and P2 are:
CvPoint P1,P2;
P1.x=1073;
P1.y=1081;
P2.x=0;
P2.y=0;
So when I do a printf to get their values, I get this:
printf("image_scene => %d %d\n",image_scene.size().width, image_scene.size().height);
printf("P1 & P2 => %d & %d :: %d & %d \n",P1.x, P1.y, P2.x, P2.y);
printf("scene_corners[0] & scene_corners[1] => %d & %d :: %d & %d \n",scene_corners[0].x,scene_corners[0].y ,scene_corners[1].x ,scene_corners[1].y);
Output
image_scene => 2048 1536
P1 & P2 => 1073 & 1081 :: 0 & 0
scene_corners[0] & scene_corners[1] => -1073741824 & 1081308864 :: -2147483648 & 1081400579
So as you can see, the image is only 2048 by 1536, so the x and y values should be within those ranges, right? But what I get is 1073741824, 1081308864... numbers like that.
So basically, scene_corners[0].x is not giving proper values.
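In case it matters, this is the kind of sanity check I was planning to add, just to see whether the corners fall inside the image at all (a small sketch):

cv::Rect_<float> imageArea(0.f, 0.f, (float)image_scene.cols, (float)image_scene.rows);
for (int i = 0; i < 4; i++)
{
    printf("scene_corners[%d] inside image: %s\n",
           i, imageArea.contains(scene_corners[i]) ? "yes" : "no");
}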