I have a project that requires me to get the most accurate pixel data possible so that I can perform simple image thresholding to segment only the bone area. The DICOM library I use is DCMTK, and here is how I read the pixel data into an OpenCV Mat:
Mat image = Mat(int(mono->getHeight()), int(mono->getWidth()), CV_16UC1, (Uint16 *)(mono->getOutputData(16 /* bits per sample */)));
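
For context, the surrounding loading step looks roughly like this (a simplified sketch; the file name, status check, and clone() copy are only there to make the snippet complete):

#include <dcmtk/dcmimgle/dcmimage.h>   // DicomImage, EIS_Normal, Uint16
#include <opencv2/opencv.hpp>

int main()
{
    // "ct_slice.dcm" is only a placeholder path.
    DicomImage *mono = new DicomImage("ct_slice.dcm");
    cv::Mat image;

    if (mono != NULL && mono->getStatus() == EIS_Normal && mono->isMonochrome())
    {
        const Uint16 *pixels =
            (const Uint16 *)mono->getOutputData(16 /* bits per sample */);
        // clone() so the Mat keeps its own copy instead of pointing into DCMTK's buffer
        image = cv::Mat((int)mono->getHeight(), (int)mono->getWidth(),
                        CV_16UC1, (void *)pixels).clone();
    }
    delete mono;
    return 0;
}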
Strangely, when the bit depth is 13 the image I get looks good, but when the bit depth is 17 the contrast of the image is very low.
Here is the image I get from the 13-bit data:
and here is the image from the 17-bit data:
How can I get the same result from the 17-bit image as from the 13-bit one? In my current method, after the matrix has been saved, I first rescale the pixel values into the range 0-1, and then I stretch them to maximize the pixel range (sometimes the values only occupy roughly 0.4-0.6, so I stretch that span to 0-1 with linear interpolation). Sometimes this method works, but because I use a static threshold value to recognize the bone, in many cases it fails to give the proper bone segmentation.
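
In OpenCV terms, what I do is roughly the following, continuing from the image Mat above (a simplified sketch; the 0.55 threshold is just an example value, not the exact constant I use):

// Rescale the 16-bit image to 0-1, stretch the occupied range to the full
// 0-1 interval (min-max normalization), then apply a fixed threshold.
cv::Mat asFloat, stretched, boneMask;
image.convertTo(asFloat, CV_32F, 1.0 / 65535.0);                   // 16-bit -> 0..1
cv::normalize(asFloat, stretched, 0.0, 1.0, cv::NORM_MINMAX);      // linear stretch
cv::threshold(stretched, boneMask, 0.55, 1.0, cv::THRESH_BINARY);  // static threshold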