2017-05-28 07:51:14 -0600 | commented question | Get the label of a bottle without moving the bottle Your problem is exactly an image stitching one, imho. Once you have all the images, my suggestion would be to stitch them onto a cylinder and then unwrap the cylinder to straighten the label and extract it (via object recognition, for instance). |
2017-05-27 02:11:56 -0600 | received badge | ● Enthusiast |
2017-05-25 15:51:33 -0600 | commented question | OpenCV Not stitching all Images If you are interested in more details, I would point you out to this question of mine: http://answers.opencv.org/question/15... |
2017-05-25 15:50:36 -0600 | commented question | Aligning and stitching images based on defined feature using OpenCV
Have you tried to perform image registration? The code you provided is just a detector output. You have a small, yet sufficient number of feature points describing the plus sign. Were you able to detect keypoints on the second image and align them together? (use the function |
2017-05-25 13:39:15 -0600 | answered a question | Stitching images with little detail It seems you are asking quite a lot of things here. I'll try to provide some information, but please note: I'm not entirely sure what you are asking. Please give me some more detail, and I'll do my best.
What do you mean? Have you actually tried or are you just assuming so?
Your question here is ill-posed. Step 1, detecting keypoints, serves to convert the image into a "feature space", that is, to translate it into a coordinate map of points. You will later use these points to relate pairs of images (in step 2). You are doing this because you are trying to create a homography based on the matches you retrieved. The homography lets you reposition the pixels of the images onto a common surface. But, more importantly, the homography encodes the camera motion between the two views. If you already have information on the camera movements, then you don't have to guess them from the images. Just feed the homography you already possess to the stitching pipeline.
I don't understand what you are trying to say here.
The more overlap area you have between images, the better, up to a certain point. 20% to 60% overlap seems reasonable to me, but it really depends on your material in the end. I'm working on 90%+ overlap right now, just to give you an idea. Now, at some point, you have to decide on a common surface onto which to project your images, using the homographies you computed pairwise. In your case, I'd suggest choosing a cylinder or a sphere as the surface. If you have trouble picturing this, imagine you are standing in the center of a room with all of your pictures projected on the walls. What shape should the walls be so that you can see the panorama without major distortions? At this ... (more) |
2017-05-25 09:54:22 -0600 | asked a question | Understanding how cv::detail::leaveBiggestComponent() function works in opencv 3.2 I am having trouble understanding how the function cv::detail::leaveBiggestComponent works, as little to no documentation is available. I know what the function is supposed to do: given a set of keypoint matches between images, return the largest subset of images with consistent matches. In addition, according to what its implementation states, it should also remove duplicate images. The function is part of the Stitching Pipeline of OpenCV and, if one refers to the Brown and Lowe paper, should act as the panorama recognition module. However, when it comes to breaking down the code, I can't really understand how this is done. TL;DR: I'm looking for a pseudocode explanation of the flowchart of cv::detail::leaveBiggestComponent(), please help. The code implementation is here. It calls relevant code (with no documentation either) from here (implementation) and here (headers). Of particular interest is the working principle of cv::detail::DisjointSets(), too. |
2016-10-26 16:03:53 -0600 | received badge | ● Supporter (source) |
2016-10-26 15:59:40 -0600 | commented answer | Example Stitching_detailed and refine_mask I see your point now. I was assuming another type of bundleAdjuster would have used the refine_mask(0,1) parameter, but, as far as I can understand (and research with the provided tools), refinement_mask is a parameter only Also, one should notice that refine_mask is a 3x3 matrix, thus one row of values is always set to 0 and never accessed. (what have I just read?) |
2016-10-26 15:00:37 -0600 | commented question | [Stitching_detailed.cpp] Matcher as cv::Ptr discussion thread. @berak something really strange is happening. This evening, when I was working on the code, I got a bunch of errors. I commented out the non-working section of the code and wrote the working one. Now, switching the comments, the error is not showing up anymore. I don't know what to say. Please note that I first had the error two days ago. I have edited my original question to reflect this. Compiler: g++, OpenCV 3.1, Ubuntu 16.04 LTS |
2016-10-26 13:41:56 -0600 | answered a question | Example Stitching_detailed and refine_mask
The refinement mask contains five parameters (in this order): if set to As you are stating, thanks for having the patience to discuss this, even if the Thus yes, the parameter is set, but actually not used by any class. Some more thoughts: the BundleAdjusterRay description leaves open a possible future implementation of the refine_mask. That was true in OpenCV 2.4 and it is still the same in the OpenCV 3.1 documentation. On line 172 of motion_estimators.hpp, the file from which the documentation is derived, the |
2016-10-26 13:25:22 -0600 | asked a question | camera_params convert to CV_32F: what is it for in stitching_detailed.cpp Hello! I am working through the stitching_detailed.cpp tutorial from the latest itseez/opencv repository. At one point, this conversion is called: Question: what is it for, exactly? Question: Is there any reason why Thank you. |
2016-10-26 10:22:45 -0600 | commented question | [Stitching_detailed.cpp] Matcher as cv::Ptr discussion thread. Yes, I've got that figured out. However, my question is not about cv::Ptr itself. Instead, it is about why the call through the Ptr fails after a small (and seemingly innocent) edit to the tutorial (stitching_detailed). Is the pointer needed because a loop cycle follows && one must call the pointed-to object inside the loop? |
2016-10-26 10:16:54 -0600 | received badge | ● Editor (source) |
2016-10-26 03:59:19 -0600 | asked a question | [Stitching_detailed.cpp] Matcher as cv::Ptr discussion thread. Hello everyone, I came across something while fiddling around with the stitching_detailed.cpp tutorial provided in the source repository. Before proceeding, please note that I had to improvise as a C++ expert; I hope my considerations are not too trivial. In the tutorial, go straight to the point where the
One can define the matcher in two different ways, one being
And the other, more direct, being Question: why would I need to instantiate a matcher as a pointer when it works perfectly fine otherwise?
Thank you for your time. |