zenberg's profile - activity

2018-12-31 10:33:23 -0500 received badge  Nice Question (source)
2017-10-09 02:10:50 -0500 received badge  Popular Question (source)
2017-01-05 05:58:48 -0500 received badge  Notable Question (source)
2016-10-02 10:49:52 -0500 received badge  Popular Question (source)
2015-05-21 13:44:47 -0500 received badge  Notable Question (source)
2015-02-11 06:27:56 -0500 received badge  Popular Question (source)
2014-12-09 13:44:31 -0500 marked best answer ROI Errors and the Stitcher module

I have two pictures, each with dimensions 2592x1936.

When I try to stitch them using the Stitcher module, it takes about 3-4 minutes on the iPhone 4, so I decided to specify ROIs to reduce the computation time.

Here's my code:

cv::Rect rect(img1_1.cols / 2, 0, img1_1.cols / 2, img1_1.rows); // ROI is the right half of the image
std::vector<cv::Rect> roi;
roi.push_back(rect);

std::vector<std::vector<cv::Rect> > rois;
rois.push_back(roi);

cv::Mat imgOut;
cv::Stitcher::Status status = stitcher.stitch(images, rois, imgOut);


At the end I got this error:

Finding features...
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /Users/user/slave/ios_framework/src/opencv/modules/core/src/matrix.cpp, line 322
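For reference, the failed assertion is just a bounds check: each ROI must lie entirely inside its image. A plain-integer restatement of it (no OpenCV needed) can be used to sanity-check candidate rectangles before handing them to the stitcher; note also that, if I read the 2.4-era sources correctly, the outer `rois` vector must contain exactly one inner vector per input image.

```cpp
// Plain-integer restatement of the assertion from matrix.cpp: a ROI
// (x, y, w, h) must lie entirely inside an image of cols x rows pixels.
static bool roiFitsInside(int x, int y, int w, int h, int cols, int rows)
{
    return 0 <= x && 0 <= w && x + w <= cols
        && 0 <= y && 0 <= h && y + h <= rows;
}
```

For a 2592x1936 image, the right half (x = 1296, w = 1296) passes this check, so a failure here usually points at a size mismatch elsewhere, for example a ROI expressed in the wrong image's coordinates.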



UPDATE
I tried to put two ROIs into the vector. The code looks like this:

cv::Rect rect(img1_1.cols / 2, 0, img1_1.cols / 2, img1_1.rows); // second half of the first image
cv::Rect rect2(0, 0, img2_1.cols / 2, img2_1.rows);              // first half of the second image

std::vector<cv::Rect> roi;
roi.push_back(rect);
roi.push_back(rect2);

std::vector<std::vector<cv::Rect> > rois;
rois.push_back(roi);

cv::Mat imgOut;
cv::Stitcher::Status status = stitcher.stitch(images, rois, imgOut);

And I still get the same error. Both images have the same size (2592x1936).


I also tried to find appropriate ROI parameters and found that the maximum value that works is:

cv::Rect rect(0, 0, 550, 550);


When I'm trying to do this:

cv::Rect rect(0, 0, 600, 600);

it sometimes doesn't show the error, but in most cases it does.


UPDATE #2
Now my code looks like this:

cv::Rect rect(img1_1.cols / 2, 0, img1_1.cols / 2, img1_1.rows);
std::vector<cv::Rect> roi;
roi.push_back(rect);

cv::Rect rect2(0, 0, img2_1.cols / 2, img2_1.rows);
std::vector<cv::Rect> roi2;
roi2.push_back(rect2);

std::vector<std::vector<cv::Rect> > rois;
rois.resize(2);
rois[0] = roi;
rois[1] = roi2;

cv::Mat imgOut;
cv::Stitcher::Status status = stitcher.stitch(images, rois, imgOut);

And I still get the same error.




Can you please tell me what the problem is?

2014-12-09 13:44:10 -0500 marked best answer OpenCV Stitching module for iOS

Hello,
I'm interested in using the stitching module in my iOS project and wanted to ask whether there are any code examples or tutorials for this module.

2014-12-09 13:41:19 -0500 marked best answer Stitcher module errors

I'm trying to use the OpenCV Stitcher Module on iOS to stitch two photos into one.

Here's my code:

- (UIImage *)stitchImages
{
    cv::Stitcher stitcher = cv::Stitcher::createDefault(TRUE);

    cv::Mat img1 = [openCVUtil cvMatWithImage:[UIImage imageNamed:@"1.jpg"]];
    cv::Mat img2 = [openCVUtil cvMatWithImage:[UIImage imageNamed:@"2.jpg"]];

    std::vector<cv::Mat> images;
    images.push_back(img1);
    images.push_back(img2);

    cv::Mat imgOut;
    cv::Stitcher::Status status = stitcher.composePanorama(images, imgOut);
    if (status == cv::Stitcher::OK) {
        UIImage *outImg = [openCVUtil imageWithCVMat:imgOut];
        return outImg;
    }
    return nil;
}


At the end I'm getting this error:

OpenCV Error: Assertion failed (imgs.size() == imgs_.size()) in composePanorama



Can you please explain what I'm doing wrong?

2014-12-09 13:15:58 -0500 marked best answer Errors when trying to import the OpenCV framework into an iOS project

I downloaded the official version of iOS framework from opencv.org and then created a simple view-based Xcode project and added the opencv2.framework in the build phases menu.

After that I imported the opencv2.framework as:

#ifdef __cplusplus
#import <OpenCV/opencv2/opencv.hpp>
#endif

into the *.pch file.


I have this function:

static void testOpenCV()
{
    cv::Mat m, gray;
}


When I ran the build, it showed me this error:

~/projects/xcode/openCV_ex/openCV_ex/ViewController.m:45:8: Expected expression


When I try to include the OpenCV framework before any definitions in my .pch file like this:

#import <opencv2/opencv.hpp>
#import <Availability.h>
….


I get this error:

~/Downloads/opencv2.framework/Versions/A/Headers/video/background_segm.hpp:48:1: Unknown type name 'namespace'



Can you please help me to find a solution to this problem?
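Both errors are, as far as I can tell, symptoms of C++ headers being compiled as plain C/Objective-C: every file that sees cv:: must be Objective-C++ (renamed to .mm, or with its file type set to Objective-C++ in Xcode). A prefix-header sketch under that assumption, with OpenCV ahead of the Apple headers to avoid macro clashes (the Apple imports shown are the typical defaults, not required lines):

```cpp
// Prefix header (.pch) sketch. The __cplusplus guard hides the OpenCV
// headers from any remaining plain-C/Objective-C translation units.
#ifdef __cplusplus
    #import <opencv2/opencv.hpp>
#endif

#import <Availability.h>
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
```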

2014-06-18 08:47:02 -0500 received badge  Popular Question (source)
2013-05-08 03:45:04 -0500 marked best answer Stitching module, finding features and match_conf

I am getting this error while trying to stitch two images:

Removed some images, because can't match them or there are too similar images: (2). Try to decrease --match_conf value and/or check if you're stitching duplicates. Need more images


The problem is that I can't find any public interface in the Stitching module to change this variable: '--match_conf'.

Can you please help me with this?

2012-11-01 01:10:41 -0500 asked a question Finding features in several threads

Hello,
From my experience it seems that OpenCV uses only one CPU core for all its calculations.

Can you please tell me if there is a way to make the "detect features" operation (for example) run in several threads on different cores?

2012-10-16 11:38:37 -0500 commented answer Decreasing the Stitching time

I have one more issue with "setCompositingResol" though.

I tried changing its value, and with "0.1" the stitching process is blazingly fast. But if I then try to stitch the resulting panorama with a regular photo, "finding features" fails no matter what the value of "registrationResol" is. If I increase "compositingResol", features can be found, but it takes much more time than with the default "compositingResol" value, so unfortunately I failed to decrease the stitching time here.

Maybe you have some ideas why this happens?

2012-10-16 11:29:27 -0500 commented answer Decreasing the Stitching time

Hello Sammy. Thank you very much for your answer, it's very useful.

setRegistrationResol helped me significantly (×2.5) speed up feature detection. I changed the default value to 0.1, and if the search for features is unsuccessful, I automatically fall back to the highest value, 1.0.

2012-10-16 11:25:04 -0500 commented question Decreasing the Stitching time

Yes, I tried ORB; it works fast, but sometimes it struggles to detect enough features. I came to the conclusion that by changing registrationResol I get much better results with the default feature detector.

2012-10-15 04:23:53 -0500 commented answer Stitching module's strange behavior (Part II)

Hello Alexey. Can you please help me with this related question? http://answers.opencv.org/question/3165/decreasing-the-stitching-time/

2012-10-15 04:21:39 -0500 asked a question Decreasing the Stitching time

Hello, I am using the Stitching module and ROIs to stitch two photos (968x1296 each).
On the iPhone 4 it takes more than 30 seconds (about 14 seconds to find features and 18 seconds to compose images + other calculations).

Can you please tell me if there is any way to speed up this process?

2012-10-04 01:54:09 -0500 commented answer Stitching module's strange behavior (Part I)

I posted a couple of related questions in this new thread: http://answers.opencv.org/question/2827/stitching-modules-strange-behavior-part-ii/

Can you please help me with those?

2012-10-04 01:53:01 -0500 edited question Stitching module's strange behavior (Part I)

I came across a very strange behavior of the Stitching module.
Please read the whole question to understand the details.


I tried to stitch two images named "_pano1.jpg" and "4.jpg" with "match_conf = 0.3".

Here's the debugging output information:

Features in image #1: 853
Features in image #2: 370
Finding features, time: 0.696868 sec
Pairwise matching
1->2 matches: 44
1->2 & 2->1 matches: 67
.Pairwise matching, time: 0.0538355 sec
Removed some images, because can't match them or there are too similar images: (2).
Try to decrease --match_conf value and/or check if you're stitching duplicates.
Need more images


Then I tried to change the value to "match_conf = 0.1". Here's the output after that:

Finding features...
Features in image #1: 853
Features in image #2: 370
Finding features, time: 0.716048 sec
Pairwise matching
1->2 matches: 284
1->2 & 2->1 matches: 384
.Pairwise matching, time: 0.104792 sec
Removed some images, because can't match them or there are too similar images: (2).
Try to decrease --match_conf value and/or check if you're stitching duplicates.
Need more images


At the same time, when I try to stitch 3.jpg and 4.jpg with "match_conf = 0.3", the output looks like this:

Finding features...
Features in image #1: 441
Features in image #2: 474
Finding features, time: 0.697173 sec
Pairwise matching
1->2 matches: 62
1->2 & 2->1 matches: 90


So the module stitches them successfully, but if I try to change the value to "match_conf = 0.1", then it tells me that it can't stitch 3.jpg and 4.jpg:

Finding features...
Features in image #1: 441
Features in image #2: 474
Finding features, time: 0.780463 sec
Pairwise matching
1->2 matches: 198
1->2 & 2->1 matches: 335
.Pairwise matching, time: 0.103776 sec
Removed some images, because can't match them or there are too similar images: (2).
Try to decrease --match_conf value and/or check if you're stitching duplicates.
Need more images



~ Why don't _pano1.jpg and 4.jpg stitch, no matter what the value of "match_conf" is, even though 3.jpg and _pano1.jpg basically contain the same information for stitching with 4.jpg?

~ Why does it tell me that it can't stitch 3.jpg and 4.jpg after I lower the "match_conf" value?

2012-10-03 04:28:35 -0500 asked a question Stitching module's strange behavior (Part II)

This question is directly connected to the original one here.

Internally, the stitching module assumes that an input image is a photo from a camera (this is used for camera parameter estimation). That assumption is false for _pano1.jpg, as it's not a single photo but several transformed images stitched together.

My app's algorithm has to look more or less like this: as the user takes photos while rotating the iPhone, the photos should be stitched in the background one after another. This means there will inevitably be situations where I have to pass already-stitched photos back to the Stitching module.

Can you please tell me how this can be implemented if the Stitching module only stitches original camera photos?
I can't afford to wait until the user has taken all the photos and then stitch them all at once, because that can take more than 5 minutes on the iPhone 4, and no one will wait that long; they'll just quit the app and delete it.

And there's another thing. The _pano2.jpg image is the result of stitching the first two images (so it's not an original camera photo). Despite this, I can stitch it with 3.jpg, and the result of this operation can be seen in _pano1.jpg. I guess this proves that the Stitching module is capable of stitching modified photos as well, not only original ones.
Why, then, does it refuse to stitch _pano1.jpg and 4.jpg?


Too low a match confidence means that all (or almost all) matches will be classified as good ones. But when all matches are good, the method decides that the images are too similar and doesn't stitch them. This is reasonable, for instance, when the camera is fixed and there is a small moving object: in such a case almost all matches are good, but it's better not to stitch the two images and to take only one as output.

This is one more thing that I simply can't understand.
Why does it tell me that the images are too similar when I try to stitch _pano1.jpg and 4.jpg, even though the camera moved about 20 degrees between 3.jpg (which is part of _pano1.jpg) and 4.jpg, and 4.jpg certainly contains information that 3.jpg lacks (for example, the door and the other part of the wall)?



Thanks again for your time Alexey. I really appreciate it.

2012-10-03 01:52:52 -0500 edited answer Stitching module's strange behavior (Part I)

Internally, the stitching module assumes that an input image is a photo from a camera (this is used for camera parameter estimation). That assumption is false for _pano1.jpg, as it's not a single photo but several transformed images stitched together.

My program's algorithm has to look more or less like this: as the user takes photos while rotating the iPhone, the photos should be stitched in the background one after another. This means there will inevitably be situations where I have to pass already-stitched photos back to the Stitching module.

Can you please tell me how this can be implemented if the Stitching module only stitches original camera photos?
I can't afford to wait until the user has taken all the photos and then stitch them all at once, because that can take more than 5 minutes on the iPhone 4, and no one will wait that long; they'll just quit the app and delete it.

And there's another thing. The _pano2.jpg image is the result of stitching the first two images (so it's not an original camera photo). Despite this, I can stitch it with 3.jpg, and the result of this operation can be seen in _pano1.jpg. I guess this proves that the Stitching module is capable of stitching modified photos as well, not only original ones.
Why, then, does it refuse to stitch _pano1.jpg and 4.jpg?


Too low a match confidence means that all (or almost all) matches will be classified as good ones. But when all matches are good, the method decides that the images are too similar and doesn't stitch them. This is reasonable, for instance, when the camera is fixed and there is a small moving object: in such a case almost all matches are good, but it's better not to stitch the two images and to take only one as output.

This is one more thing that I simply can't understand.
Why does it tell me that the images are too similar when I try to stitch _pano1.jpg and 4.jpg, even though the camera moved about 20 degrees between 3.jpg (which is part of _pano1.jpg) and 4.jpg, and 4.jpg certainly contains information that 3.jpg lacks (for example, the door and the other part of the wall)?



Thanks again for your time Alexey. I really appreciate it.




P.S. To moderators: Sorry for not posting this as a comment. I wanted to, but unfortunately the comment field has a character limit, and since this question relates directly to the original question in the thread, I couldn't post it as a completely new one either. That is why it has been posted as an answer.