Loading a raw array of 16-bit integers as a grayscale image

asked 2018-05-16 04:44:10 -0600

Hello. I have an array of 16-bit values that represents a grayscale image (16 bits per pixel). But when I try to load it as:

cv::Mat(height, width, CV_16UC1, pixelArray)

I get a mess in the output. The only way I have found to get something that looks like a valid image is to swap the high and low bytes of every pixel value (to big-endian) and load them as:

cv::Mat(height, width, CV_16SC1, pixelArray)

But that still produces a lot of garbage. Also, when I manually convert the 16-bit grayscale values to 32-bit RGB-like values, the image loads correctly, but with huge data loss.
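
For reference, the byte swap I mentioned looks roughly like this (a minimal sketch, not my exact code; pixelArray, width, and height are the same as above, it assumes the raw data is in the opposite byte order from my machine, and it needs <vector> and <cstdint>):

// make a swapped copy, exchanging the two bytes of every 16-bit value
std::vector<uint16_t> swapped(width * height);
for (int i = 0; i < width * height; ++i)
{
    uint16_t v = pixelArray[i];
    swapped[i] = static_cast<uint16_t>((v << 8) | (v >> 8));
}

// wrap the swapped copy; cv::Mat does not copy the buffer, so
// 'swapped' must stay alive as long as the Mat is used
cv::Mat img(height, width, CV_16UC1, swapped.data());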

How can I load them correctly?


Comments


Sounds like you may have a data representation misunderstanding. Said another way, you may have a signed/unsigned and endianness representation issue:

  • How is pixelArray defined?
  • Is your architecture big- or little-endian?
  • How do you fill the array?
  • A longer bit of sample code above, and a small array's worth of data (say 2 columns by 2 rows), could improve your question or maybe help you find an answer.
  • What do the contents of the array look like in memory? Do a one-byte-at-a-time memory dump of a small region.
  • Does the data layout match the expected endianness and sign?
opalmirror ( 2018-05-16 14:48:05 -0600 )

Something like this:

#include <cstdio>
#include <opencv2/core.hpp>

int main()
{
    // four 16-bit samples; the last two have the high bit set, so sign and
    // byte order are easy to spot in the output
    unsigned short d[4] = { 0x1, 0x2, 0xfff3, 0xfff4 };
    const unsigned char *c = (const unsigned char *) d;

    // dump the raw bytes one at a time to see the in-memory layout
    printf("c:");
    for (size_t i = 0; i < sizeof(d); i++)
    {
        printf(" %02x", c[i]);
    }
    printf("\n");

    // wrap the same buffer as a 2x2 unsigned 16-bit Mat (no copy is made)
    cv::Mat a(2, 2, CV_16UC1, d);

    // print the values as OpenCV interprets them
    printf("a:");
    for (int r = 0; r < a.rows; r++)
    {
        for (int col = 0; col < a.cols; col++)
        {
            printf("  %04x", a.at<unsigned short>(r, col));
        }
    }
    printf("\n");

    return 0;
}
opalmirror ( 2018-05-16 14:50:21 -0600 )