
DaePa's profile - activity

2017-05-28 07:51:14 -0600 commented question Get the label of a bottle without moving the bottle

Your problem is exactly an image stitching one, imho.

Once you have all the images, my suggestion would be to stitch them onto a cylinder and then unroll the cylinder to straighten the label and extract it (via object recognition, for instance).
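To give a concrete idea, here is an untested sketch with the OpenCV 3.x high-level API, using a cylindrical warper so the resulting panorama is already an "unrolled" view of the label (images is assumed to be your std::vector<cv::Mat> of shots around the bottle):

#include <opencv2/stitching.hpp>
#include <iostream>
#include <vector>

// Stitch with a cylindrical projection instead of the default.
cv::Stitcher stitcher = cv::Stitcher::createDefault();
stitcher.setWarper(cv::makePtr<cv::CylindricalWarper>());

cv::Mat label;
cv::Stitcher::Status status = stitcher.stitch(images, label);
if (status != cv::Stitcher::OK)
    std::cerr << "Stitching failed, status " << status << std::endl;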

2017-05-27 02:11:56 -0600 received badge  Enthusiast
2017-05-25 15:51:33 -0600 commented question OpenCV Not stitching all Images

If you are interested in more details, I would point you to this question of mine: http://answers.opencv.org/question/15...

2017-05-25 15:50:36 -0600 commented question Aligning and stitching images based on defined feature using OpenCV

> however it doesn't find the plus symbol very well.

Have you tried to perform image registration? The code you provided is just detector output. You have a small, yet sufficient, number of feature points describing the plus sign. Were you able to detect keypoints in the second image and align the two sets? (Use the function cv2.drawMatches() to see which keypoints match in a pair of images.)
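As a minimal sketch of that check, in C++ (the Python cv2 calls mirror these almost one-to-one; img1 and img2 are assumed to be your two input images):

#include <opencv2/features2d.hpp>
#include <opencv2/highgui.hpp>
#include <vector>

// Detect keypoints and compute descriptors in both images.
cv::Ptr<cv::ORB> orb = cv::ORB::create();
std::vector<cv::KeyPoint> kp1, kp2;
cv::Mat desc1, desc2;
orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

// Brute-force matching with cross-check to prune one-way matches.
cv::BFMatcher matcher(cv::NORM_HAMMING, true);
std::vector<cv::DMatch> matches;
matcher.match(desc1, desc2, matches);

// Visualize which keypoints actually match across the pair.
cv::Mat vis;
cv::drawMatches(img1, kp1, img2, kp2, matches, vis);
cv::imshow("matches", vis);
cv::waitKey();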

2017-05-25 13:39:15 -0600 answered a question Stitching images with little detail

It seems you are asking quite a lot of things here. I'll try to provide some information, but please note: I'm not entirely sure what you are asking. Provide some more detail, and I'll do my best.

> I'm trying to make an image of the southern sky, horizon-to-zenith, south-centered, on the iPhone. As many of the images in the set will have little detail (it's just sky), image-matching stitching will not work well. I'm looking for solutions.

What do you mean? Have you actually tried it, or are you just assuming so?

> Following the basic outline here, is it possible to skip steps 1 and 2, and build the homography matrix directly? I have the original alignments. It would seem that would solve the problem, as well as dramatically reducing CPU, which is an issue on my iPhone.

Your question here is ill-posed. Step 1, detecting keypoints, serves to convert the image into a "feature space", that is, to translate it into a coordinate map of points. You will later use these points to relate pairs of images (in step 2).

You are doing this because you are trying to create a homography based on the matches you retrieved. The homography lets you reposition the pixels of the images onto a common surface. More importantly, you may say that the homography contains the information about the camera movements. If you already have that information, you don't have to guess it from the images: just feed the homography you already possess to the stitching pipeline.
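For instance, a minimal sketch of the idea (K, R, img2, and canvasSize are hypothetical names: your intrinsics, the relative rotation from the iPhone's attitude sensors, the second image, and the output canvas size):

#include <opencv2/imgproc.hpp>

// For a purely rotating camera, the homography between two shots is
// H = K * R * K^-1 (K: 3x3 intrinsics, R: 3x3 relative rotation, both CV_64F).
cv::Mat H = K * R * K.inv();

// Reproject the second image onto the reference image's plane.
cv::Mat warped;
cv::warpPerspective(img2, warped, H, canvasSize);
// ...then blend "warped" with the reference image on the common canvas.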

> If that is not possible, my next idea is to make large images containing multiple originals - if they always include the horizon at the bottom they should be able to match OK.

I don't understand what you are trying to say here.

> So my main question (finally!)... does OpenCV work OK with wide-angle images like this? If I set my camera so I get 90 degrees vertical I would get 60 degrees horizontal, and with 20 degrees of overlap (less, more?) that means about five images to cover the 180 degree horizon.

The more overlap area you have between images, the better, up to a certain point. 20% to 60% overlap seems reasonable to me, but in the end it really depends on the material you have. I'm working with 90%+ overlap right now, just to give you an idea.

Now, at one point, you decide on a common surface on which to project your images, using the homographies you computed pairwise. In your case, I'd suggest you choose a cylinder or a sphere as the surface. If you have trouble picturing this, imagine you are standing in the center of a room with all of your pictures projected on the walls. What shape should the walls have for you to see the panorama without major distortions?
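With the detail API, that projection step looks roughly like this (a hedged sketch: img, K, and R are assumed to be one input image with its CV_32F intrinsics and rotation, and the scale value is a placeholder for your focal length in pixels):

#include <opencv2/stitching/detail/warpers.hpp>

// Project one image onto a common sphere.
float scale = 1000.0f;  // roughly the focal length in pixels
cv::detail::SphericalWarper warper(scale);
cv::Mat warped;
cv::Point corner = warper.warp(img, K, R, cv::INTER_LINEAR, cv::BORDER_REFLECT, warped);
// "corner" is where this warped image sits on the common surface.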

At this ... (more)

2017-05-25 09:54:22 -0600 asked a question Understanding how cv::detail::leaveBiggestComponent() function works in opencv 3.2

I am having trouble understanding how the function cv::detail::leaveBiggestComponent works, as little to no documentation is available.

I know what the function is supposed to do: given a set of keypoint matches between images, return the largest subset of images with consistent matches. In addition, according to what its implementation states, it should also remove duplicate images.

The function is part of the OpenCV stitching pipeline and, if one refers to the Brown and Lowe paper, should act as the panorama recognition module.

However, when it comes to breaking down the code, I can't really understand how this is done.

TL;DR: I'm looking for a pseudocode explanation of the flow of cv::detail::leaveBiggestComponent(). Please help.

The code implementation is here. It calls relevant code (with no documentation either) from here (implementation) and here (headers).

The working principle of cv::detail::DisjointSets() is of particular interest, too.
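For what it's worth, here is my current rough mental model as a sketch: a hypothetical union-find reconstruction of the grouping step, not the actual OpenCV implementation.

#include <numeric>
#include <utility>
#include <vector>

// Union-find (disjoint sets) over image indices.
struct DSU {
    std::vector<int> parent, size;
    explicit DSU(int n) : parent(n), size(n, 1) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        if (size[a] < size[b]) std::swap(a, b);
        parent[b] = a;
        size[a] += size[b];
    }
};

// For every image pair (i, j) whose match confidence exceeds the threshold,
// call dsu.unite(i, j); afterwards, keep only the images whose root is the
// root of the largest component.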

2016-10-26 16:03:53 -0600 received badge  Supporter (source)
2016-10-26 15:59:40 -0600 commented answer Example Stitching_detailed and refine_mask

I see your point now. I was assuming another type of BundleAdjuster would have used the refine_mask(0,1) parameter, but, as far as I can understand (and research with the provided tools), refinement_mask is a parameter only BundleAdjusterReproj is aware of. I am editing my answer to reflect this.

Also, one should note that refine_mask is a 3x3 matrix, so one row of values is always set to 0 and never accessed. (What have I just read?)

2016-10-26 15:00:37 -0600 commented question [Stitching_detailed.cpp] Matcher as cv::Ptr discussion thread.

@berak something really strange is happening. This evening, while I was working on the code, I got a bunch of errors. I commented out the non-working section of the code and wrote the working one. Now, after switching the comments back, the error is not showing up anymore. I don't know what to say. Please note that I first had the error two days ago.

I have edited my original question to reflect this.

Compiler: g++, OpenCV 3.1, Ubuntu 16.04 LTS.

2016-10-26 13:41:56 -0600 answered a question Example Stitching_detailed and refine_mask

No, that is not right. If you need confirmation, you can look at the very beginning of the tutorial, where the printUsage() function is. Right after it, you can see the default setting is xxxxx.

The refinement mask contains five parameters (in this order):

<fx>
<skew>
<ppx>
<aspect>
<ppy>

If a position is set to 'x', the corresponding parameter is refined by the adjuster (provided the adjuster can do so; otherwise the parameter is skipped).
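For reference, this is (from memory, so double-check against the tutorial source) how stitching_detailed.cpp maps the five-character string onto the mask; note that the bottom row of the 3x3 matrix is never touched:

Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
if (ba_refine_mask[0] == 'x') refine_mask(0,0) = 1;  // fx
if (ba_refine_mask[1] == 'x') refine_mask(0,1) = 1;  // skew
if (ba_refine_mask[2] == 'x') refine_mask(0,2) = 1;  // ppx
if (ba_refine_mask[3] == 'x') refine_mask(1,1) = 1;  // aspect
if (ba_refine_mask[4] == 'x') refine_mask(1,2) = 1;  // ppy
adjuster->setRefinementMask(refine_mask);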

As you are stating (and thanks for having the patience to discuss this), even if the <skew> parameter is set in the stitching_detailed tutorial, only BundleAdjusterReproj, out of all the BundleAdjuster types, actually cares about the values of the mask, and it does not take the <skew> entry into account.

Thus yes, the parameter is set, but actually not used by any class.


Some more thoughts: the BundleAdjusterRay description leaves open a possible future implementation of the refine_mask. That was true in the OpenCV 2.4 documentation, and it is still the same in OpenCV 3.1.

At line 172 of motion_estimators.hpp, the file the documentation is derived from, the refinement_mask is described as a 3x3 matrix. That is not right, right? Shouldn't it be a 2x3 instead? Am I missing something?

2016-10-26 13:25:22 -0600 asked a question camera_params convert to CV_32F: what is it for in stitching_detailed.cpp

Hello!

I am practicing with the stitching_detailed.cpp tutorial from the latest Itseez OpenCV repository.

At one point, this conversion is called:

for (size_t i = 0; i < cameras.size(); ++i)
{
    Mat R;
    cameras[i].R.convertTo(R, CV_32F);
    cameras[i].R = R;
    LOGLN("Initial camera intrinsics #" << indices[i]+1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
}

Question: what is this conversion for, exactly?

Question: is there any reason why the cv::detail::Estimator class should output a type that the program then needs to convert explicitly?

Thank you.

2016-10-26 10:22:45 -0600 commented question [Stitching_detailed.cpp] Matcher as cv::Ptr discussion thread.

Yes, I've got that figured out. However, my question is not about the cv::Ptr. Instead, it is about why the Ptr call fails after a small (and seemingly innocent) edit to the tutorial (stitching_detailed). Is the pointer needed because a loop follows and one must call the pointed-to object inside the loop?

2016-10-26 10:16:54 -0600 received badge  Editor (source)
2016-10-26 03:59:19 -0600 asked a question [Stitching_detailed.cpp] Matcher as cv::Ptr discussion thread.

Hello everyone,

I encountered something while fiddling around with the stitching_detailed.cpp tutorial provided in the source repository. Before proceeding, please note that I am an improvised C++ developer, so I hope my considerations are not too trivial.

In the tutorial, go straight to the point where the cv::detail::FeaturesMatcher is defined.

One can define the matcher in two different ways. The first one, which did not compile for me (g++, on the Eclipse C++ IDE with a fully set up project), is:

Ptr<FeaturesMatcher> matcher;
matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf);
(*matcher)(features, pairwise_matches);

The issue is on line 3, where we call the matcher through the pointer. When that happens, a conflicting declaration error appears to occur between matcher and pairwise_matches.

The other way, more direct, compiles just fine:

BestOf2NearestMatcher matcher(try_cuda, match_conf);
matcher(features, pairwise_matches);
matcher.collectGarbage();

Question: why would I need to instantiate a matcher as a pointer when it works perfectly fine otherwise?

I wouldn't bother that much about it myself, but here is another (less trivial?) question. The following code works just fine:

cv::Ptr<cv::detail::FeaturesFinder> finder;
finder = cv::makePtr<cv::detail::SurfFeaturesFinder>();
for (int currentImage = 0; currentImage < imagesCount; currentImage++) {
    (*finder)(images[currentImage], features[currentImage]);
}

For the sake of readability, one would expect a similar construction to work with the matcher as well, not only with the finder. But this is not the case. A side question: would it be a waste of time to annotate the tutorial to explain why things have to work this way?

Thank you for your time.