
Transform 16-bit grayscale to RGB with constants

asked 2017-03-03 07:53:55 -0600 by pmh4514, updated 2017-03-03 07:55:32 -0600


I'm trying to figure out the most appropriate way to do this in OpenCV but I don't have much experience with it and feel like there should be a better way than just brute force.

I'm starting with a 16-bit grayscale image (CV_16UC1) and I want to produce an RGB image (CV_8UC3).

But I don't want to use a simple conversion like o1.convertTo(o2, CV_8UC3, 1/255.0);

Rather, I have 3 constant scaling factors (each between 0 and 1, one for each of R, G and B). Each resulting RGB pixel value should be the original 16-bit grayscale pixel value multiplied by one of the three constants and then scaled down to a value between 0 and 255.

Thoughts? Thanks!



For that you will have to loop over the image pixels, apply your manual conversion rule and push the result back into the correct matrix. Also, how does your 16-bit single-channel image contain 3-channel color information? o_o

StevenPuttemans ( 2017-03-03 08:27:51 -0600 )

So "brute force" (i.e. looping through each pixel) is the way? (There's no "matrix"-style multiplication I can do directly?)

To your question: the 16-bit single-channel image does not contain color information, only grayscale pixel intensities. I am working on a "false coloring", so to speak. I have 3 constant values (all between 0 and 1), one for each of R, G and B, which, when multiplied by the original grayscale value and scaled to 0-255, become the R, G and B values.

pmh4514 ( 2017-03-03 08:40:56 -0600 )

Well, could you describe how your R, G and B pixels are allocated inside the 16-bit data? Then we might be able to figure out a smarter way.

StevenPuttemans ( 2017-03-03 08:47:24 -0600 )

The 16-bit data is a purely grayscale image. I'm really just doing a "false coloring" by generating RGB values.

I've written this (simplified for space), which I think should get me there.

Mat mat_raw_16;  // pre-populated source grayscale
Mat mat_color_8 = Mat(rows, cols, CV_8UC3);   // container for false-color version

for (int i = 0; i < mat_raw_16.rows; i++) {
    for (int j = 0; j < mat_raw_16.cols; j++) {
        ushort val = mat_raw_16.at<ushort>(i, j);
        // OpenCV stores color as B, G, R; dividing by 256 scales 16-bit down to 0-255
        mat_color_8.at<cv::Vec3b>(i, j)[0] = saturate_cast<uchar>((val * CONST_B) / 256.0);
        mat_color_8.at<cv::Vec3b>(i, j)[1] = saturate_cast<uchar>((val * CONST_G) / 256.0);
        mat_color_8.at<cv::Vec3b>(i, j)[2] = saturate_cast<uchar>((val * CONST_R) / 256.0);
    }
}

Each of those CONST values is a scaling factor (between 0 and 1).

pmh4514 ( 2017-03-03 09:02:12 -0600 )

OK, it is false coloring, but in the end I think you will have an image with only 256 different colors?

In OpenCV there is no function to transform a basic 8-bit grayscale image into color.

LBerger ( 2017-03-03 09:06:48 -0600 )

Thank you both for responses!

pmh4514 ( 2017-03-03 13:16:01 -0600 )

1 answer


answered 2017-03-03 08:58:10 -0600 by Guyygarty

If I understand correctly, you have a 16-bit image, I, and want to "tint" it so that R = a*I/255, G = b*I/255, B = c*I/255. What you could do is use:

   I.convertTo(R, CV_8UC1, a/255, 0);
   I.convertTo(G, CV_8UC1, b/255, 0);
   I.convertTo(B, CV_8UC1, c/255, 0);

and then merge R, G, B into a color image:

    std::vector<cv::Mat> array_to_merge;
    array_to_merge.push_back(B);   // OpenCV expects channels in B, G, R order
    array_to_merge.push_back(G);
    array_to_merge.push_back(R);
    cv::merge(array_to_merge, Dest);

Do not use pixel operations if you can avoid them.





Thanks! Yes, you understand correctly. See my comments to the responder above, where I showed my "brute force" way of looping through pixels.

pmh4514 ( 2017-03-03 09:05:16 -0600 )
