gary's profile - activity

2021-03-28 16:17:10 -0600 received badge  Popular Question (source)
2013-01-14 00:06:13 -0600 asked a question Using GaussianMotionFilter

Hi, trying to use a very simple test with GaussianMotionFilter in Xcode/iOS:

vector<Mat> motions;
for (int i = 0; i < 12; i++) {
    Mat mat = [self grayscaleMatWithPath:path withSize:CGSizeMake(320, 480)];
    motions.push_back(mat);
}
GaussianMotionFilter *filter = new GaussianMotionFilter();
filter->setParams(1, 0.0f);
pair<int, int> range(0, (int)motions.size() - 1);
filter->stabilize(0, motions, range);

but getting the following error:

OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file opencv/modules/core/src/arithm.cpp, line 1273

Has anyone used this or MotionStabilizationPipeline with any success?

Thanks!

2013-01-13 04:07:17 -0600 commented answer Removing outliers from goodFeaturesToTrack using the x84 method

Alright, thanks.

2013-01-11 15:54:28 -0600 asked a question Using the resulting int vector from selectPairs in FREAK

Hi, I'm trying to use FREAK as an image stabilizer. I have an array of greyscale Mats, and I'm calling selectPairs on them in an attempt to get the best possible matching keypoints.

What do I do now that I have these int vector pairs? I'm not sure I understand what the indexes in this pairs vector correspond to. The values run from 0 up past 500, so they don't seem to match the indexes of any vector I have, e.g. my points vector. Here's a code snippet:

vector<Mat> mats;
vector<vector<KeyPoint> > points;

GFTTDetector *detector = new GFTTDetector();
for (int i = 0; i < imageCount; i++) {

    // I get curMat here ...

    mats.push_back(curMat);

    vector<KeyPoint> curKeyPoints;
    detector->detect(curMat, curKeyPoints);
    points.push_back(curKeyPoints);
}

FREAK *freak = new FREAK(true, true, 22.0f, 4);
vector<int> pairs = freak->selectPairs(mats, points, 0.8, false);

// ... now what?

Thank you!

2013-01-10 13:07:07 -0600 commented answer Removing outliers from goodFeaturesToTrack using the x84 method

Thanks so much for the info, and nice job on the OpenCV logo! That sounds like it makes sense, but unfortunately I have no idea where to even start on the OpenCV side of things. I was hoping for some kind of X_84 flag/parameter in the TranslationBasedLocalOutlierRejector class. Say I have an array of images, along with corresponding prevPoints and nextPoints from calcOpticalFlowPyrLK. How would I go about computing the MADs and the gaussian distribution, etc. and in turn removing these outliers? Thanks again.

2013-01-10 10:15:15 -0600 asked a question Removing outliers from goodFeaturesToTrack using the x84 method

Hi, I found this paper that describes what looks to be a pretty great outlier removal method. Unfortunately the formulas are a bit over my head.

Does anyone know if this x84 method is implemented in OpenCV? I have some occluded keypoints due to "foreground" motion (similar to how the road sign is occluded by the teddy bear in that PDF) and I need to discard them.

Any help would be much appreciated!

2013-01-05 10:13:23 -0600 asked a question Any way to speed up OnePassStabilizer?

I'm playing around with OnePassStabilizer in the latest build of OpenCV in Xcode/iOS, but it's extremely slow: a little over 1 second per frame, even in the iOS Simulator on my Mac Pro.

Has anyone used this? Have any suggestions for configuring it properly/speeding it up?

A simple example:

OnePassStabilizer *stabilizer = new OnePassStabilizer();
stabilizer->setLog(new LogToStdout()); // doesn't seem to actually log btw
stabilizer->setRadius(12); // apparently this needs to be the total count of frames??
stabilizer->setFrameSource(source);

Mat stabilizedMat = stabilizer->nextFrame(); // makes it run/process all frames

2012-12-28 11:34:24 -0600 commented answer Multiply a vector<Point2f>

thank you!

2012-12-27 16:25:40 -0600 asked a question Multiply a vector<Point2f>

All of the points in my vector<Point2f> are halved, so I'd like to double them. Is there a better way to do this other than looping through all the points and multiplying each x, y value by 2?

Thanks!

2012-12-27 16:20:20 -0600 received badge  Scholar (source)
2012-12-25 05:01:13 -0600 received badge  Student (source)
2012-12-22 15:32:20 -0600 asked a question are these links supposed to link to anything?

http://docs.opencv.org/trunk/modules/videostab/doc/introduction.html#t04

I'd really like to see an example of how to use this stabilization class. I'm using individual frames as opposed to video files, however. Thanks!!

2012-12-08 16:10:17 -0600 asked a question Operating on a Mat pointer from within an Objective-C++ method

Hi, I have some CGContext drawing that I need to do before I work with my cv::Mat, so I set up an Objective-C++ NSObject to contain this work. I want to process my Mat via a pointer from an external containing class (a view controller). When I pass in my Mat pointer, my Objective-C++ class somehow holds on to this pointer and prevents the class from being dealloc'ed!

Has anyone done this before, and could you tell me how you went about it? The ViewController should be the one to actually "own" the Mat, while MatProcessorClass should do some kind of minimal image processing (resize an image or whatever; not doing anything at all for now) and output it to the Mat stored in the ViewController. Here's how I'm doing it:

// in ViewController.mm:
- (void)viewDidLoad {
    ...
    myMat = cv::Mat(640, 480, CV_8UC4);
    MatProcessorClass *c = [[MatProcessorClass alloc] initWithUIImage:someImage];
    [c processImageIntoMat:&myMat];
    NSLog(@"mat ref count: %i", *myMat.refcount); // it's 1 (refcount is an int*, so dereference it)
    [c release]; // dealloc is NOT called on the MatProcessorClass!
}

// in MatProcessorClass.mm:
- (void)processImageIntoMat:(cv::Mat*)vcMat {
    NSLog(@"mat ref count: %i", *vcMat->refcount); // it's 1 (dereference the int*)
    // not even doing anything here for now
}

I even tried doing a release() in both the processor mat class and my view controller, and it's still not dealloc'ed. Any help would be much appreciated! Thanks!

2012-12-05 02:46:50 -0600 commented question iOS: goodFeaturesToTrack, calcOpticalFlowPyrLK, findHomography 6 times slower in global queue than on main thread

Thanks for the questions. Yes, that's right: I'm not doing anything UI-related, and I'm aware UI work must be done on the main thread. I took a look at the gsoc2012 "findHomography" demo code and it's just as slow. Surely someone must have tried speeding this up with parallel_for or something similar? Edit: Actually, the only thing I can think of is the cv::Mat to UIImage conversion... but I think that uses thread-safe context drawing.

2012-12-03 16:34:03 -0600 commented question iOS: goodFeaturesToTrack, calcOpticalFlowPyrLK, findHomography 6 times slower in global queue than on main thread

@uvalh sure just updated with the call

2012-12-03 05:03:18 -0600 answered a question Build OpenCV for iOS 5

Do you absolutely need to build it? Have you tried just downloading the pre-built framework instead?

I too messed around with building it from source before I realized they already have a pre-built version of the framework available for iOS:

http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/2.4.3/opencv2.framework.zip/download

I'm using it with the iOS 6 SDK on my iPhone 4 with iOS 5.1.1 on it.

2012-12-03 04:55:02 -0600 received badge  Editor (source)
2012-12-03 04:54:03 -0600 asked a question iOS: goodFeaturesToTrack, calcOpticalFlowPyrLK, findHomography 6 times slower in global queue than on main thread

I'm doing a pretty straightforward flow here: goodFeaturesToTrack, calcOpticalFlowPyrLK, findHomography on two 500 x 500px JPGs using the iOS version of the OpenCV 2.4.3 SDK.

On my iPhone 4 with iOS 5.1.1 on it, this entire process takes a little over 1 second to finish while running on the main thread. When I move it to the global queue using GCD, it takes over 6 seconds!! On my iPhone 5 with iOS 6.0 on it, the process takes about 1.5-2 seconds in the background/global queue and about 0.2-0.3 sec on the main thread.

So, it looks like these functions aren't thread safe. Is it possible to use them with parallel_for, or are there updated methods I can use instead that already use parallel_for behind the scenes?

Any input would be much appreciated. I will definitely accept an answer! I'm new here but not new to the stackoverflow system. :) Thanks!

Edit: just using a standard dispatch async call:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
  [self findHomography];
});