# Is there a better way to recover CV_16UC1 values?

Hi guys, let's say I have a CV_16UC1 matrix with 16-bit unsigned short values that needs to be transformed to CV_8UC1 (for now with no further processing in between) and then back to CV_16UC1. The way I am doing it at the moment is:

    // img is the 16-bit image
    double minval, maxval;
    minMaxLoc(img, &minval, &maxval, NULL, NULL);

    // scale [0, maxval] down to [0, 255]
    img.convertTo(img, CV_8UC1, 255.0 / maxval);

    // and back up to 16 bits
    img.convertTo(img, CV_16UC1, maxval / 255.0);


However, the values in the new 16-bit matrix differ slightly from the original ones. My question: is there a better way to recover the original values exactly (I guess not, but I need to ask :-p), or at least a way to make the difference between the original and the new values as small as possible?



I'm not sure I understand, but maybe something like this:

    double minval, maxval;
    minMaxLoc(img, &minval, &maxval, NULL, NULL);

    // map [minval, maxval] onto [0, 255]
    img.convertTo(img, CV_8UC1, 255.0 / (maxval - minval), -255.0 * minval / (maxval - minval));

    // and back
    img.convertTo(img, CV_16UC1, (maxval - minval) / 255.0, minval);


There may still be a residual error at the end, on the order of (maxval - minval) / 255.

( 2016-02-03 09:18:56 -0500 )

You mean unsigned short, not float, right? CV_16UC1 means 16-bit unsigned, which is a 16-bit integer.

( 2016-02-03 09:33:07 -0500 )

@LBerger thanks for the response. Your solution did not change much; I got more or less the same result. What do you mean by the possible error at the end?

( 2016-02-03 09:35:53 -0500 )

@Pedro Batista yes you are right, my bad. (fixed)

( 2016-02-03 09:38:24 -0500 )

I don't get something. Do you need to do some operation on the Mat before changing it back to 16 bits? Why not just convert the 16-bit image into a new Mat and keep a copy of both alive? (Normally, converting an image to 8 bits is done to display it.)

( 2016-02-03 11:09:24 -0500 )

@Pedro Batista to explain a bit better: I have this image with 16-bit pixel values (it is actually a depth image) with some blank spots (pixel value = 0) which I would like to smooth, something like this one. However, this requires inpaint(), which only accepts 8-bit images as input. Therefore, I have to convert my 16-bit image to 8 bits, but afterwards I would like to recover my original 16-bit depth values. I think what you and @Guyygarty propose, keeping a copy and then just replacing the blank values, will do the trick. Thanks ;-).

( 2016-02-03 12:41:11 -0500 )

btw, there is an inpaint version in xphoto, which works with float or ushort input images

( 2016-02-03 23:20:06 -0500 )

@berak many thanks, I did not know about it. I will try it as soon as possible. Have a look also at this thread that I opened regarding inpaint().

( 2016-02-04 04:41:09 -0500 )

Offtopic question: is that inpaint() function fast? Can you use it at real-time framerate to fix your raw data?

( 2016-02-04 06:11:27 -0500 )

@Pedro Batista supposedly not. For that reason, if you check the link with the initial code, they first scale down the image, apply inpaint(), and then resize it back to the initial size; this is how they achieve a real-time framerate. In my case I am not concerned about speed at the moment, since I already have the data and just need to process it. Later on I will need to think about it as well.

( 2016-02-04 06:46:38 -0500 )


If I understand correctly, you have an image with 16-bit pixel values and you want to convert to 8 bits and back without losing precision. You cannot do this: in converting to 8 bits you are throwing away 8 bits' worth of data, and depending on the value of maxval some of those discarded bits would be expected to hold information.

Let me demonstrate:

Say the first 3 pixels in an image have values 256, 257 and 258, and maxval is 1020. The corresponding three pixels in the 8-bit image would all be 64... Converting back to 16 bits, you would get 256, 256 and 256.

You could overcome this problem by keeping a second image with the "remainders" and then adding it to the new 16 bit image.

guy


@Guyygarty thanks for the explanation and the suggestion. Indeed, I think what you propose will do the trick ;-).

( 2016-02-03 12:42:29 -0500 )

@Guyygarty is right! With integer division, if C = A / B then A = C * B + A % B.

As @LBerger suggested, you can reduce the effect of the error. Take a look at the histogram: scaling from 16 bits to 8 bits you collapse N bins into 1, where N = maxval / 256. Going back to 16 bits your histogram will again have 256 bins, but separated by gaps of N bins... you lose maxval - 256 bins!

To reduce the error you could try to reduce N. What if just one pixel equals maxval but all the others are much lower? Why scale from 0 to maxval if you don't actually have pixels at 0?

For example, you could use the cumulative distribution to select a good range for scaling: N = (maxgood - mingood) / 256, where maxgood is the value greater than 99% of your pixels and mingood is the value lower than 1% of them. Yes, pixels below mingood will become 0 and pixels above maxgood will become 255, but the remaining ~98% of your pixels will be scaled with a more accurate step.

This is the same as the histogram clipping in my BrightnessAndContrastAuto. You could adapt it for 16-bit images, then take the resulting max value and proceed as in your code.


@pklab thanks for the response. I did not quite get the part about using a cumulative distribution to select a good range for scaling. How does this differ from @LBerger's code? Can you show a code snippet? Thanks again ;-).

( 2016-02-03 13:10:33 -0500 )

@StevenPuttemans there should be an option to accept two answers as correct. For example, now I would like to accept @pklab's answer as well as @Guyygarty's, but there is no such opportunity :-(.

( 2016-02-03 13:15:28 -0500 )

I'll try to explain better, even if my English is poor... minval is min(img), maxval is max(img); mingood is the percentile at x% and maxgood is the percentile at 100-x%. Because (maxgood - mingood) <= (maxval - minval), you will lose fewer bins. You should use the cumulative histogram distribution to calculate the percentiles; the code is at the BrightnessAndContrastAuto link.

( 2016-02-03 13:25:28 -0500 )

OK, now I get it (I think). You mean to use the BrightnessAndContrastAuto() function from your other answer, but modify it to accept and process 16-bit images, get the max value from the output image, and then use it as I do now, right?

( 2016-02-03 14:32:20 -0500 )

Yeah, up till now there is no option to accept 2 answers :D we will have to live with it, I am afraid. You can report this at the issues page on GitHub for this forum!

( 2016-02-04 04:04:44 -0500 )

@StevenPuttemans it is quite a pity, because sometimes the answers from two different users are indeed equally correct and informative, and both should get the thumbs up. I will make a request at the link you posted, and let's see if we get any response. Thanks ;-)

( 2016-02-04 04:47:51 -0500 )

@theodore the output range from BrightnessAndContrastAuto is always 0..2^bit_depth. You should use the same code BrightnessAndContrastAuto uses to calculate minGray and maxGray, clipping the histogram at a small percentage, and use those to calculate your scale factor. Be careful: besides fixing the asserts related to CV_8U, you should take care of the histogram range too. BrightnessAndContrastAuto uses range = 0..256 to calculate the histogram, which is right for 8-bit images; with 16 bits you should use range = 0..2^16.

( 2016-02-04 11:17:46 -0500 )

@pklab I see thanks. I will let you know if I have any other question.

( 2016-02-04 11:24:29 -0500 )

Sorry for the delay. Another way to reduce the error is to use a non-linear conversion, for example the way an RGB image is converted to an 8-bit color image with a LUT built by error minimization. With this technique you can improve quality, but you have to change all the image operators (smoothing, gradient, ...).

( 2016-02-05 01:41:52 -0500 )
