# transform 16bit grayscale to RGB with constants

Hi,

I'm trying to figure out the most appropriate way to do this in OpenCV but I don't have much experience with it and feel like there should be a better way than just brute force.

I'm starting with a 16-bit grayscale image (CV_16UC1) and I want to produce an RGB image (CV_8UC3).

But I don't want to use a simple conversion like o1.convertTo(o2, CV_8UC3, 1/255.0);

Rather, I have 3 constant scaling factors (each between 0 and 1, corresponding to R, G, and B). Each resulting RGB pixel value should be the original 16-bit grayscale pixel value multiplied by one of the three constants and then scaled down to a value between 0 and 255.

Thoughts? Thanks!


For that you will have to loop over the image pixels, apply your manual conversion rule, and push the result into the correct matrix. Also, how is your 16-bit single-channel image containing 3-channel color information? o_o

so "brute force" (i.e. looping through each pixel) is the way? (there's no "matrix"-style multiplication I can do directly?)

To your question: the 16-bit single-channel image does not contain color information, only grayscale pixel intensities. I am working on a "false coloring," so to speak. I have 3 constant values (all between 0 and 1), one for each of R, G, and B, which, when multiplied by the original grayscale value and scaled to 0-255, become the R, G, and B values.

Well, could you describe how your R, G, and B pixels are allocated inside the 16-bit data? Then we might be able to figure out a smarter way.

The 16-bit data is a purely grayscale image. I'm really just doing a "false coloring" by generating RGB values.

I've written this (simplified for space) which I think should get me there.

    Mat mat_raw_16;  // pre-populated source grayscale (CV_16UC1)
    Mat mat_color_8 = Mat(rows, cols, CV_8UC3);  // container for false-color version

    for (int i = 0; i < mat_raw_16.rows; i++) {
        for (int j = 0; j < mat_raw_16.cols; j++) {
            ushort val = mat_raw_16.at<ushort>(i, j);
            // scale 0-65535 down to 0-255 (divide by 257), apply each constant;
            // note that OpenCV stores channels in BGR order
            mat_color_8.at<cv::Vec3b>(i, j) = cv::Vec3b(
                cv::saturate_cast<uchar>(val * CONST_B / 257.0),
                cv::saturate_cast<uchar>(val * CONST_G / 257.0),
                cv::saturate_cast<uchar>(val * CONST_R / 257.0));
        }
    }


Each of those CONST values are scaling factors (between 0 and 1)

OK, it is false coloring, but at the end I think you will have an image with only 256 different colors?

In OpenCV there is no built-in function to transform a basic single-channel image into color this way.


If I understand correctly, you have a 16-bit image, I, and want to "tint" it so that R = a*I/257, G = b*I/257, B = c*I/257 (dividing by 257 maps the full 16-bit range 0-65535 onto 0-255). What you could do is:

    I.convertTo(R, CV_8UC1, a / 257.0, 0);
    I.convertTo(G, CV_8UC1, b / 257.0, 0);
    I.convertTo(B, CV_8UC1, c / 257.0, 0);


and then merge B, G, R (in OpenCV's channel order) into a color image:

    std::vector<cv::Mat> array_to_merge;
    array_to_merge.push_back(B);
    array_to_merge.push_back(G);
    array_to_merge.push_back(R);

    cv::Mat Dest;
    cv::merge(array_to_merge, Dest);


Do not use pixel operations if you can avoid them.

guy


Thanks! Yes, you understand correctly. See my comments to the above responder, where I showed my "brute force" way of looping through pixels.
