Loading a raw array of 16-bit integers as a grayscale image
Hello. I have an array of 16-bit values that represents a grayscale image (16 bits per pixel). But when I try to load it as:
cv::Mat(height, width, CV_16UC1, pixelArray)
I get a mess in the output. The only way to get something that looks like a valid image is to swap the high and low bytes of every pixel value (to big-endian) and load them as:
cv::Mat(height, width, CV_16SC1, pixelArray)
But that still produces a lot of garbage. Also, when I manually convert these 16-bit grayscale values to 32-bit RGB-like values, the image loads correctly, but with huge data loss.
How can I load them correctly?
It sounds like you have a data representation problem: specifically, a signed/unsigned and endianness mismatch.
Something like this:
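Here is a minimal sketch of what that conversion could look like, assuming your buffer arrives big-endian while the host is little-endian (the usual x86/ARM case). The helper name loadBigEndianGray16, the dimensions, and the placeholder buffer are all made up for illustration:

#include <opencv2/opencv.hpp>
#include <cstdint>
#include <vector>

// Hypothetical helper: copies a raw big-endian 16-bit buffer into a
// grayscale cv::Mat, swapping each value's bytes into host (little-endian)
// order. Keep CV_16UC1 here: the pixel values are unsigned, so CV_16SC1
// would misinterpret anything above 32767 as negative.
cv::Mat loadBigEndianGray16(const uint8_t* raw, int width, int height)
{
    cv::Mat img(height, width, CV_16UC1);
    uint16_t* dst = img.ptr<uint16_t>();
    const size_t count = static_cast<size_t>(width) * height;
    for (size_t i = 0; i < count; ++i)
    {
        // Swap high and low bytes: big-endian -> little-endian.
        dst[i] = static_cast<uint16_t>((raw[2 * i] << 8) | raw[2 * i + 1]);
    }
    return img;
}

int main()
{
    const int width = 640, height = 480;                // example dimensions
    std::vector<uint8_t> buffer(width * height * 2, 0); // fill with your raw data

    cv::Mat gray16 = loadBigEndianGray16(buffer.data(), width, height);

    // Note: a correctly loaded 16-bit image can still *display* as nearly
    // black, because imshow maps the full 0..65535 range to the screen.
    // Scaling to 8-bit for viewing avoids that without touching the data.
    cv::Mat gray8;
    cv::normalize(gray16, gray8, 0, 255, cv::NORM_MINMAX, CV_8UC1);

    cv::imshow("gray", gray8);
    cv::waitKey(0);
    return 0;
}

If the pixels come out right after this, your raw data really is big-endian unsigned 16-bit, and there is no need for the lossy conversion to 32-bit RGB-like values.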