# Image Stitching (Java API)

Hi, I'm trying to stitch two images together using the OpenCV Java API. I have been following this C++ tutorial: http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/ However, I get the wrong output and I cannot work out the problem.

FAULTY CODE

import java.util.LinkedList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.*;
import org.opencv.imgproc.Imgproc;

public class ImageStitching {

    static Mat image1;
    static Mat image2;

    static FeatureDetector fd;
    static DescriptorExtractor fe;
    static DescriptorMatcher fm;

    public static void initialise() {
        fd = FeatureDetector.create(FeatureDetector.BRISK);
        fe = DescriptorExtractor.create(DescriptorExtractor.SURF);
        fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

        //structures for the keypoints from the 2 images
        MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
        MatOfKeyPoint keypoints2 = new MatOfKeyPoint();

        //structures for the computed descriptors
        Mat descriptors1 = new Mat();
        Mat descriptors2 = new Mat();

        //structure for the matches
        MatOfDMatch matches = new MatOfDMatch();

        //getting the keypoints (the second call previously detected on image1 twice - bug)
        fd.detect(image1, keypoints1);
        fd.detect(image2, keypoints2);

        //getting the descriptors from the keypoints
        fe.compute(image1, keypoints1, descriptors1);
        fe.compute(image2, keypoints2, descriptors2);

        //matching the 2 sets of descriptors (query = image2, train = image1)
        fm.match(descriptors2, descriptors1, matches);

        //turn the matches into a list
        List<DMatch> matchesList = matches.toList();

        double maxDist = 0.0;   //keep track of max distance among the matches
        double minDist = 100.0; //keep track of min distance among the matches

        //calculate max & min distances over the matches
        //(iterate over matchesList, not keypoints1.rows())
        for (int i = 0; i < matchesList.size(); i++) {
            double dist = matchesList.get(i).distance;
            if (dist < minDist) minDist = dist;
            if (dist > maxDist) maxDist = dist;
        }

        System.out.println("max dist: " + maxDist);
        System.out.println("min dist: " + minDist);

        //use only the good matches (i.e. whose distance is less than 3*min_dist)
        List<DMatch> goodMatches = new LinkedList<DMatch>();
        for (int i = 0; i < matchesList.size(); i++) {
            if (matchesList.get(i).distance < 3 * minDist) {
                goodMatches.add(matchesList.get(i));
            }
        }

        //keypoint lists; since the query side of the matcher is image2,
        //image2 is the "object" (to be warped) and image1 the "scene"
        List<KeyPoint> keypoints_objectList = keypoints2.toList();
        List<KeyPoint> keypoints_sceneList = keypoints1.toList();

        //putting the coordinates of the good matches into point lists
        List<Point> objList = new LinkedList<Point>();
        List<Point> sceneList = new LinkedList<Point>();
        for (int i = 0; i < goodMatches.size(); i++) {
            objList.add(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt);
            sceneList.add(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt);
        }

        System.out.println("\nNum. of good matches: " + goodMatches.size());

        MatOfDMatch gm = new MatOfDMatch();
        gm.fromList(goodMatches);

        //converting the points into the appropriate data structure
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);

        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);

        //finding the homography matrix (maps image2 into image1's frame)
        Mat H = Calib3d.findHomography(obj, scene);

        Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
        Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);

        //the four corners of the object image, one per row
        //(all four puts previously wrote to row 0 - bug)
        obj_corners.put(0, 0, new double[]{0, 0});
        obj_corners.put(1, 0, new double[]{image2.cols(), 0});
        obj_corners.put(2, 0, new double[]{image2.cols(), image2.rows()});
        obj_corners.put(3, 0, new double[]{0, image2.rows()});

        Core.perspectiveTransform(obj_corners, scene_corners, H);

        //structure to hold the stitched result
        Mat result = new Mat();

        //size of the new image - i.e. image 1 + image 2
        Size s = new Size(image1.cols() + image2.cols(), image1.rows());

        //warp image2 into image1's frame, then copy image1 into the
        //left part of the canvas via a submatrix view
        Imgproc.warpPerspective(image2, result, H, s);
        Mat half = new Mat(result, new Rect(0, 0, image1.cols(), image1.rows()));
        image1.copyTo(half);
    }
}
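The 3*minDist "good match" filter used in the code is plain arithmetic and can be checked in isolation. This sketch uses a bare float array of match distances instead of OpenCV's DMatch objects (an assumption made so it runs without the native library; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class GoodMatchFilter {

    // Keep only the distances below 3 * the minimum distance,
    // mirroring the "good match" filter in the stitching code.
    static List<Float> filterGood(float[] distances) {
        float minDist = Float.MAX_VALUE;
        for (float d : distances) {
            if (d < minDist) minDist = d;
        }
        List<Float> good = new ArrayList<>();
        for (float d : distances) {
            if (d < 3 * minDist) good.add(d);
        }
        return good;
    }

    public static void main(String[] args) {
        float[] distances = {0.1f, 0.25f, 0.5f, 0.9f, 0.05f};
        // minDist = 0.05, threshold = 0.15 -> only 0.1 and 0.05 survive
        System.out.println(filterGood(distances));
    }
}
```

Note that if the minimum distance is very small (near-identical descriptors), 3*minDist becomes a very strict threshold, which is one reason the number of "good" matches can collapse.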

Actually, half is not a function but a new matrix based on the result matrix that takes half of its dimensions. Half is the name of the variable, not a function; it is a basic Mat constructor.

(2014-02-05 03:12:12 -0500)

The constructor would be: Mat half = new Mat(result, new Rect(0,0,image2.cols(),image2.rows())); Also have a look at Features2d.drawMatches to see if your matches are okay.

(2014-02-06 02:28:23 -0500)
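The `Mat half` trick above relies on OpenCV submatrix headers: `new Mat(result, rect)` creates a view into `result`, so `copyTo` writes straight into the big canvas rather than into a copy. The same idea can be illustrated without the native library using plain 2D arrays (a simplified stand-in, not the OpenCV API):

```java
public class RoiCopyDemo {

    // Copy src into dst starting at (row0, col0) - what
    // image1.copyTo(half) effectively does when half is a
    // submatrix view of the stitching canvas.
    static void copyInto(int[][] dst, int[][] src, int row0, int col0) {
        for (int r = 0; r < src.length; r++) {
            for (int c = 0; c < src[0].length; c++) {
                dst[row0 + r][col0 + c] = src[r][c];
            }
        }
    }

    public static void main(String[] args) {
        int[][] canvas = new int[2][4];   // result canvas: 2 rows, 4 cols, zeros
        int[][] left = {{1, 1}, {1, 1}};  // stand-in for the image pasted at origin
        copyInto(canvas, left, 0, 0);     // paste into the left part
        System.out.println(java.util.Arrays.deepToString(canvas));
        // left half is ones, right half still zeros
    }
}
```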

I figured it out after posting the question, and I used the constructor you specified. However, my image stitching doesn't actually work :/ I updated the code. Maybe you can spot something wrong? Is it the way I'm combining the 2 images?

(2014-02-06 19:35:02 -0500)

For feature detection I (like the author of the tutorial) also use SURF, with SURF as the extractor and a FLANN-based matcher. If you look at your detected features, it should be clear what is happening: there are diagonal matches in a horizontal panorama. They are wrong! You must sort them out.

(2014-02-07 00:36:28 -0500)

Indeed, before applying the stitching, first apply, for example, RANSAC in order to remove all these false matches and keep only the horizontal ones.

(2014-02-07 01:54:53 -0500)

Thanks for the responses. I have used this version of the homography function: Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 1) ... unfortunately, it hasn't helped. In the documentation the re-projection error (4th parameter) is recommended to be 1-10; I have tried them all to no avail. I have experimented with a number of algorithm combinations for detection, extraction and matching, again without success. I tried the SURF, SURF and FLANN combination, but the number of 'good' matches I got was only 20. The diagonal lines tell me something is wrong, yes, but I don't know how to fix it.

(2014-02-08 17:00:47 -0500)

When you separate the matches into good and bad ones, also look at where they are located. In a horizontal panorama there can't be a match between pixel (y, x) = (5, 20) in one image and (70, 40) in the other. --> Point point1 = keypoints1List.get(matchesList.get(i).queryIdx).pt; Point point2 = keypoints2List.get(matchesList.get(i).trainIdx).pt; if (Math.abs(point1.y - point2.y) > 10) // can't be a good match. Also try using grayscale images for the detector.

(2014-02-10 02:07:57 -0500)
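The vertical-offset check suggested above can be written as a small standalone filter. This sketch uses minimal stand-in classes instead of OpenCV's KeyPoint/DMatch (an assumption made so it runs without native bindings); the 10-pixel threshold is the one from the comment and would need tuning per panorama:

```java
import java.util.ArrayList;
import java.util.List;

public class YOffsetFilter {

    // Minimal stand-ins for OpenCV's Point and DMatch.
    static class Pt { double x, y; Pt(double x, double y) { this.x = x; this.y = y; } }
    static class Match { int queryIdx, trainIdx; Match(int q, int t) { queryIdx = q; trainIdx = t; } }

    // Reject matches whose endpoints differ vertically by more than maxDy
    // pixels - the diagonal matches that ruin a horizontal panorama.
    static List<Match> filterByY(List<Match> matches, List<Pt> kp1, List<Pt> kp2, double maxDy) {
        List<Match> kept = new ArrayList<>();
        for (Match m : matches) {
            Pt p1 = kp1.get(m.queryIdx);
            Pt p2 = kp2.get(m.trainIdx);
            if (Math.abs(p1.y - p2.y) <= maxDy) kept.add(m);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Pt> kp1 = List.of(new Pt(5, 20), new Pt(30, 22));
        List<Pt> kp2 = List.of(new Pt(70, 40), new Pt(200, 25));
        List<Match> matches = List.of(new Match(0, 0), new Match(1, 1));
        // match 0 has |20 - 40| = 20 > 10 -> dropped; match 1 has |22 - 25| = 3 -> kept
        System.out.println(filterByY(matches, kp1, kp2, 10).size());  // prints 1
    }
}
```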

@mayday Hey, we have a similar problem and we can't figure it out. Did you find a solution? Can you help us?

(2014-03-15 07:32:41 -0500)


The problem here is not only those diagonal matches (obviously incorrect) but also most of the other matches, which are pretty bad. In order to see how bad the resulting homography is, check the answer here and here. Note also these things:

• your scene is pretty low on texture. You have many flat objects (planes!) with overall the same colour (LCD screen -> black, wall -> white-blueish, table -> brownish). Try adding some additional objects (a flower maybe? :)) and see how this affects the homography estimation
• you've added a bag in the second image in the overlapping area, which imho might lead to some confusion
• try ORB or some other feature detector
• try BruteForce matcher (I rarely use the FLANN since BF usually does the trick); try different settings for the cross-check matching (BFMatcher's parameter). Using cross-check means that feature A from image 1 is matched with feature A' from image 2 and vice versa, which often reduces the number of false positives greatly
• try another filter for your good matches, such as the ratio test (note that applying multiple filters, for example cross-check + min/max distance, might reduce your matches so much that RANSAC fails to estimate a homography); there are also other match filters online
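The ratio test mentioned in the last bullet (Lowe's test) keeps a match only if its best distance is clearly smaller than the second-best. A sketch on raw distance pairs, using plain float arrays rather than OpenCV's knnMatch output (an assumption to keep it dependency-free; 0.75 is a commonly used ratio, not a mandated value):

```java
import java.util.ArrayList;
import java.util.List;

public class RatioTest {

    // For each feature i, best[i] and secondBest[i] are the distances to its
    // two nearest neighbours (what knnMatch with k=2 would return).
    // Keep index i only if best < ratio * secondBest.
    static List<Integer> ratioTest(float[] best, float[] secondBest, float ratio) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < best.length; i++) {
            if (best[i] < ratio * secondBest[i]) kept.add(i);
        }
        return kept;
    }

    public static void main(String[] args) {
        float[] best = {0.2f, 0.5f, 0.1f};
        float[] secondBest = {0.8f, 0.55f, 0.9f};
        // with ratio 0.75: 0.2 < 0.6 keep, 0.5 < 0.4125 reject, 0.1 < 0.675 keep
        System.out.println(ratioTest(best, secondBest, 0.75f));  // prints [0, 2]
    }
}
```

The intuition: an ambiguous feature (flat wall, screen bezel) matches many places almost equally well, so its two best distances are close and it gets rejected, which directly addresses the low-texture issue described above.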

especially since your blog is in Russian, totally unreadable by 90% of the community :D

(2016-12-20 04:38:53 -0500)

Yes, but I can read Java. The code isn't hard to follow, and anyone interested enough can run the post through a page translator, so +1. You other guys have first-world issues.

(2016-12-25 21:00:01 -0500)
