How to convert CMSampleBufferRef to IplImage (iOS)

asked 2013-04-02 09:52:00 -0600

paipeng

Hello all,

I have found an article addressing my question:

but in my tests I only get an unusable image after converting the IplImage back to UIImage: the image is shifted from top to bottom (and from left to right). The conversion from IplImage to UIImage itself is not the problem (tested).

Could anyone tell me, how to make it work?

Thanks a lot





I am using the "CMSampleBufferRef" from the video preview buffer, not a captured image. Does that make a difference?

paipeng ( 2013-04-04 11:33:18 -0600 )

3 answers


answered 2013-04-04 05:50:25 -0600

thomket


You can convert an IplImage to Mat with the Mat(IplImage*) constructor.

AlexanderShishkov ( 2013-04-04 06:07:05 -0600 )

Thanks, I will give it a try.

paipeng ( 2013-04-04 11:34:06 -0600 )

answered 2013-04-04 08:37:17 -0600

AlexanderShishkov

The code for converting between UIImage and cv::Mat is available in this OpenCV tutorial:

It will be similar for IplImage, if you don't want to convert to cv::Mat.



Hello Alexander, thanks for the answer. I have no problem converting UIImage to IplImage and back; going from UIImage to IplImage via cv::Mat also works. What I want is to convert directly from CMSampleBufferRef to IplImage, but the converted image comes out incorrect.

Any idea what could be wrong with the CMSampleBufferRef path?



paipeng ( 2013-04-04 11:31:16 -0600 )

answered 2013-04-11 05:29:02 -0600

ske

updated 2013-04-11 05:29:23 -0600

You can convert it to cv::Mat then convert it to IplImage.

// Convert from Core Media to Core Video
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);

// Extract the intensity (luma) channel directly from plane 0
// (assumes a biplanar YCbCr pixel format on the video output)
Pixel_8 *lumaBuffer = (Pixel_8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);

// Optionally render the luma buffer on a layer with Core Graphics
// (create a gray color space, then a bitmap context over the same memory)
CGColorSpaceRef grayColorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(lumaBuffer, width, height, 8, bytesPerRow, grayColorSpace, kCGImageAlphaNone);

// Wrap the luma plane in a cv::Mat. Pass bytesPerRow as the step argument
// so any row padding in the pixel buffer is accounted for.
cv::Mat grayImage((int)height, (int)width, CV_8UC1, lumaBuffer, bytesPerRow);
IplImage img = grayImage; // shallow header conversion, no pixel copy

// ... process the image here, then release and unlock:
CGContextRelease(context);
CGColorSpaceRelease(grayColorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);


Asked: 2013-04-02 09:52:00 -0600

Seen: 2,712 times

Last updated: Apr 11 '13