
How can I fit and then overlay 2 images which have different resolution ?

asked 2016-05-27 10:12:25 -0600

marcoE

updated 2016-05-28 07:21:36 -0600

One image has a mesh, which is supposed to overlay Layer1. I couldn't find how to do this with OpenCV. I know it is possible to change the image resolution; however, I don't know how to fit both images together.

This is the main image: image1 - (2.6 MB)

I have this one, which has the correct mesh to the image above:

image2 - (26.4 MB)

The code to change the resolution is more or less this:

#!/usr/bin/python

import cv2
from matplotlib import pyplot as plt
import numpy as np

img1 = cv2.imread('transparency.jpg')
img2 = cv2.imread('La1.png')

rows1, cols1, ch1 = img1.shape
rows2, cols2, ch2 = img2.shape

# Note: fx scales the width (columns) and fy the height (rows),
# so the ratios must not be swapped:
res = cv2.resize(img2, None, fx=1. * cols1 / cols2, fy=1. * rows1 / rows2, interpolation=cv2.INTER_CUBIC)

Comments

check this answer here: addWeighted should do the trick ;-)

theodore ( 2016-05-27 11:27:04 -0600 )

Thanks for the tip @theodore. However, it only helps me with the overlay part. How can I fit the shapes properly? If you look at both images, they must fit. The mesh should be the "boundary".

marcoE ( 2016-05-27 17:46:35 -0600 )

@marcoE please don't upload such big files.

sturkmen ( 2016-05-28 07:23:17 -0600 )

1 answer


answered 2016-05-28 05:53:10 -0600

theodore

updated 2016-05-28 05:56:47 -0600

@marcoE sorry, I thought the two images were already aligned, my bad. Well, what you need is to find the transformation matrix between the two images with the findHomography() function. To do that you need at least 4 points that correspond to each other in the two images, and then you apply the transformation by passing the extracted matrix to the warpPerspective() function. Usually, people use a feature keypoint extraction algorithm such as SURF, SIFT, etc. to find matched points and then use them to extract the transformation matrix as described above. You need at least these 4 points, so if you already have them from a previous process (contours, Canny, whatever...) applied to the two images you can use them; if not, you need to extract them somehow. Looking at your two images, I do not think that extracting features with a keypoint algorithm will work. What I think you can do is extract the horizontal and vertical lines and use their endpoints as the points needed for the homography, then apply the transformation as described above.

To see what I am talking about, have a look at some examples here and here. If you search the web for "warpPerspective align two images opencv" you will find some other examples as well.


Comments

@theodore thank you for your time, patience and kindness. It is difficult to do what I need to do :\ I need to do something like this: link text. This was made with ImageMagick. However, those images had lower resolution and the mesh is different. I think with this image you are able to figure out what I need to do. I will look at the algorithm.

marcoE ( 2016-05-28 05:58:08 -0600 )

I do not think that what you want to do is that hard. More or less you have the material you need; you just need to combine it in the proper way. Unfortunately, I do not have much free time at the moment, otherwise I could help you with some code as well.

theodore ( 2016-05-28 06:27:12 -0600 )

@LBerger, yes, as I mentioned they are different images with similarities in shape. I want to do something like this: http://i.stack.imgur.com/fyixS.jpg

marcoE ( 2016-05-28 15:00:29 -0600 )

@LBerger, @theodore, @sturkmen, any brilliant idea?

marcoE ( 2016-05-30 05:09:12 -0600 )

@LBerger the image with the mesh is a render created using matplotlib. From this image, using potrace, I got this vector, which allows me to get the mesh (an STL) using the trimesh Python module; rendering it with matplotlib, I got the mesh that you've seen.

marcoE ( 2016-05-30 07:46:43 -0600 )

This is not easy or trivial :\

marcoE ( 2016-05-31 06:20:22 -0600 )

@LBerger it is not really alignment. If you look here, you'd notice the mesh (in red) is the internal contour of the overlaid picture. This example uses other images and was done with ImageMagick.

marcoE ( 2016-05-31 07:34:37 -0600 )

In your example you have two images R and B (red and black). I think you want to minimize [R(x,y) - B(ax + x0, by + y0)]^2 with respect to the unknowns a, b, x0, y0.

LBerger ( 2016-05-31 08:03:57 -0600 )

@LBerger I think I'm getting what you're telling me. However, I don't know if it solves the alignment problem. I was thinking about it: if points (for instance, hole centers) are added to the primary image (the one with the boundaries, which is later used to generate the mesh), could they be matched later in the mesh image? If the points are the same in all images (all the images have the hole centers), could that solve the problem? Like a plot layer to guide the others. I don't know how to do it; I'm just thinking from a conceptual point of view.

marcoE ( 2016-05-31 15:26:14 -0600 )

Something like this : holes with center

marcoE ( 2016-06-01 09:21:32 -0600 )

Both in your mesh image and in your original image there are some parts that are visible in both, e.g. some squares/circles. Why not use these to extract the (minimum) four points needed for the homography and align your images, as I said in my answer?

theodore ( 2016-06-01 12:16:03 -0600 )

@theodore I think that's what I'm trying to do. Do you think this https://dl.dropboxusercontent.com/u/7... could work? I'm trying to isolate the same shapes in this image without success. After matching the shapes, what should I do? Find the same shapes on the layer images?

marcoE ( 2016-06-02 04:08:49 -0600 )

Yes, you need to extract corresponding matching points from both images, then feed them to the findHomography() function and you will get the transformation matrix to use with the warpPerspective() function. See the links in my answer.

theodore ( 2016-06-03 02:27:57 -0600 )

@theodore thanks for answering. I have a doubt: do I need to match the points in the mesh image and then here? On the mesh layer I've got another problem: there is a column of white pixels. How can I get rid of it? mesh image

marcoE ( 2016-06-03 03:47:42 -0600 )

@theodore In other words: I've got the image which is the source of the mesh, the mesh, and the layer that should get the mesh. I want to get something like this. I've found the centers of the 1st image. As I've said, the mesh image has a problem: a column of white pixels from the upper left corner. Do I need homography for everything? How many points? The mesh image and the original image from the mesh are equal, but the layer image is not; the mesh overlays some holes. Sorry, but there is a lot of information.

marcoE ( 2016-06-03 04:11:42 -0600 )

OK, you have the points from the 1st image (you need at least 4; the more the better). Put them in a vector<Point2f> pts_src. Now find the corresponding points in the layer image, put them in a vector<Point2f> pts_dst, and feed both vectors to the homography like this: Mat h = findHomography(pts_src, pts_dst). Now you have the transformation that you need.

theodore ( 2016-06-03 08:08:35 -0600 )

But they aren't the same; they are only similar. Sorry, I'm probably making a mess of this.

marcoE ( 2016-06-03 08:10:12 -0600 )

what do you mean by they aren't the same?

theodore ( 2016-06-03 08:49:53 -0600 )

If you look here, the holes of the mesh have different sizes from those of the layer. The layer has larger contours, which is true.

marcoE ( 2016-06-03 09:14:12 -0600 )

@theodore did you understand what my problem is? I think in a simple chat I could explain it easily.

marcoE ( 2016-06-06 05:03:17 -0600 )

Try to use points that are more robust to changes, like the center of a circle. You can also try the centers of mass of your boxes (look at Hu moments and contours). You will always have some error if your points are not an exact match; the point is to find the case with the least error. I would suggest first trying what I told you in practice and seeing whether it works. Then you can move on to the problem of how to find the best matched points.

theodore ( 2016-06-06 05:39:41 -0600 )

@theodore I agree with you. I was thinking it might be better to create 3 or 4 points as guides in the source image (the one I use to generate the mesh), then plot them on the mesh with the transformation, and then draw them on the layer.

marcoE ( 2016-06-06 05:42:38 -0600 )
