ske's profile - activity

2013-04-15 08:13:47 -0600 received badge  Necromancer (source)
2013-04-15 06:26:50 -0600 answered a question CvVideoCamera or not

You can also use the GSoC 2012 examples as a starting point. They use AVFoundation directly instead of the CvVideoCamera wrapper:

http://www.code.opencv.org/projects/ios/repository/show/trunk
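
For reference, a minimal hedged sketch of that AVFoundation setup (error handling omitted; the method name, preset, and queue name are just illustrative, and self.session is assumed to be a strong AVCaptureSession property so the session outlives the method):

#import <AVFoundation/AVFoundation.h>

// e.g. in a controller that adopts AVCaptureVideoDataOutputSampleBufferDelegate
- (void)startCaptureSession
{
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPreset640x480;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    [self.session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("camera_queue", NULL)];
    [self.session addOutput:output];

    [self.session startRunning];
}

// frames then arrive in the delegate callback
// -captureOutput:didOutputSampleBuffer:fromConnection: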

2013-04-12 03:22:33 -0600 received badge  Teacher (source)
2013-04-11 13:07:22 -0600 received badge  Necromancer (source)
2013-04-11 12:29:32 -0600 answered a question why is SurfFeatureDetector class (in Features2d) missing from the opencv-ios library?

Try adding nonfree.hpp as an include; SurfFeatureDetector was moved into the nonfree module in OpenCV 2.4, so it is no longer part of features2d.
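
A minimal hedged sketch of what that looks like in a .mm or .cpp file, assuming the OpenCV 2.4.x header layout; grayMat stands in for your input cv::Mat:

#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/nonfree/features2d.hpp>

// SurfFeatureDetector is defined in the nonfree module in 2.4.x
cv::initModule_nonfree();
cv::SurfFeatureDetector detector(400);   // Hessian threshold, for example
std::vector<cv::KeyPoint> keypoints;
detector.detect(grayMat, keypoints);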

2013-04-11 12:24:58 -0600 answered a question ios framework not found opencv2

You need to add the opencv2 framework to the project's Frameworks folder, and it also needs to be listed under "Link Binary With Libraries" in the Build Phases tab.
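
Once the framework is linked, a common companion step is to import OpenCV ahead of the Apple headers in the project's prefix header; here is a minimal sketch of a typical .pch, assuming the standard setup from the OpenCV iOS tutorials:

#ifdef __cplusplus
    #import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import <Foundation/Foundation.h>
#endif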

2013-04-11 12:18:52 -0600 answered a question iOS - access back camera error

For the second question, you need to uncheck the Auto Layout option in the storyboard.

2013-04-11 12:18:29 -0600 answered a question "_OBJC_CLASS_$_CvVideoCamera" not found

Did you change your view controller's file extension to ".mm"?
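
For reference, a minimal hedged sketch of how CvVideoCamera is typically set up once the view controller is Objective-C++ (.mm); the cap_ios.h path reflects the OpenCV 2.4.x layout, and imageView stands in for a UIImageView outlet:

#import <opencv2/highgui/cap_ios.h>

@interface ViewController () <CvVideoCameraDelegate>
@property (nonatomic, strong) CvVideoCamera *videoCamera;
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // imageView: a UIImageView outlet (assumed)
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
    self.videoCamera.delegate = self;
}

// called for every frame once [self.videoCamera start] is running
- (void)processImage:(cv::Mat &)image
{
    // process the frame in place here
}

@end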

2013-04-11 05:58:09 -0600 commented question iOS6 + Opencv (Latest Compile) Linker Only for classes in Feature2D module

Did you change your C++ file's extension to .mm?

2013-04-11 05:29:02 -0600 answered a question How to convert CMSampleBufferRef to IplImage (iOS)

You can wrap it in a cv::Mat first and then convert that to an IplImage:

// convert from Core Media to Core Video
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

// plane 0 is the intensity (luma) channel when the camera delivers
// a bi-planar YpCbCr pixel format
size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
uint8_t *lumaBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

// wrap the luma plane in a cv::Mat (no pixel data is copied)
cv::Mat grayImage((int)height, (int)width, CV_8U, lumaBuffer, bytesPerRow);

// build an IplImage header over the same pixels
IplImage *img = new IplImage(grayImage);

// ... use img (or clone the data) before unlocking the buffer ...
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
2013-04-11 05:17:20 -0600 received badge  Editor (source)
2013-04-11 05:15:02 -0600 answered a question How to show UIImage in processImage:(Mat&)image method

Hi,

try dispatching the UI update back to the main thread:

dispatch_sync(dispatch_get_main_queue(), ^{
    UIImage *imgSegnale = [UIImage imageNamed:@"testImage.png"];
    [self.imgViewSegnale setImage:imgSegnale];
    [self.view setNeedsDisplay];
});
2012-11-12 10:23:43 -0600 answered a question OrbFeatureDetector issue

The first parameter of the OrbFeatureDetector constructor is the maximum number of features to retain. I think that is your mistake; try increasing the value to 200-400.
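
A minimal hedged sketch against the OpenCV 2.4.x API, where grayMat stands in for your input image:

#include <opencv2/features2d/features2d.hpp>

// keep up to ~400 keypoints instead of a too-small cap
cv::OrbFeatureDetector detector(400);
std::vector<cv::KeyPoint> keypoints;
detector.detect(grayMat, keypoints);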

2012-08-15 07:06:07 -0600 received badge  Organizer (source)
2012-08-15 07:05:12 -0600 asked a question gsoc2012 ios examples

I am trying to run the GSoC 2012 example projects with the official OpenCV 2.4.2 release for iOS. The video streaming samples aren't working because of the include file named "CvVideoCamera.h". I searched the official OpenCV 2.4.2 source, but there is no such class under the highgui module. Where can I find this file? Any suggestions would be much appreciated. Thanks.