cv::Mat to UIImage making image grayscale
Whenever I convert a cv::Mat to a UIImage, I get a grayscale image back, which is not what is in the original cv::Mat.

Here is the original cv::Mat:

After converting to UIImage:
Code for converting to UIImage
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data
                                  length:cvMat.elemSize() * cvMat.total()];

    // A 1-byte element size means a single-channel (grayscale) Mat;
    // anything larger is treated as RGB.
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // width
                                        cvMat.rows,                 // height
                                        8,                          // bits per component
                                        8 * cvMat.elemSize(),       // bits per pixel
                                        cvMat.step[0],              // bytes per row
                                        colorSpace,                 // color space
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // decode
                                        false,                      // should interpolate
                                        kCGRenderingIntentDefault   // intent
                                        );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
Can anyone point out why I get a grayscale image back when I convert it to a UIImage? I need it as a UIImage because I am sending it to Tesseract, and the grayscale image does not give optimal OCR results.
Why are you using "true color" conversions that then depend on your monitor settings?
This is just the generic code I got from OpenCV's documentation for converting from cv::Mat to UIImage. I honestly don't know much about it; I was hoping someone else had come across this problem and had found a solution.

Your second image contains data that does not exist in the first one. This means that either you are displaying the first one as a B&W image, or your second one is the first one.