unpack 12-bit mono images

asked 2019-09-23 12:32:24 -0600 by jjsword

I'm trying to process images from a Hamamatsu Orca Flash 4 camera using OpenCV and am having some trouble with 12-bit mono images. They are bit-packed, two pixels per three bytes, as described below:

MONO12 Packed Format

This is the output data format of DCAM_PIXELTYPE_MONO12. P[i] is a pixel transmitted by the camera; B[i] is a byte in the received buffer.

Buffer / pixel data:

    B[0]   = P[0] bits 11..4
    B[1]   = P[1] bits 3..0 | P[0] bits 3..0
    B[2]   = P[1] bits 11..4
    ...
    B[m-2] = P[n-1] bits 11..4
    B[m-1] = P[n] bits 3..0 | P[n-1] bits 3..0
    B[m]   = P[n] bits 11..4
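In other words, every 3 packed bytes hold 2 pixels. Inverting that layout directly with shifts and masks looks like this (a minimal sketch; the function name is made up for illustration):

#include <cstdint>

// Unpack one 3-byte group (b[0..2], laid out as above) into two
// 12-bit pixels stored in 16-bit words.
static inline void unpack_pair( const uint8_t* b, uint16_t* p0, uint16_t* p1 )
{
    *p0 = (uint16_t)(( b[0] << 4 ) | ( b[1] & 0x0F ));          // bits 11..4 from B[0], bits 3..0 from low nibble of B[1]
    *p1 = (uint16_t)(( b[2] << 4 ) | (( b[1] & 0xF0 ) >> 4 ));  // bits 11..4 from B[2], bits 3..0 from high nibble of B[1]
}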

The following code is supposed to unpack the image, but I'm having trouble following it and hope someone can help me understand a few things: first, why mix the Windows types BYTE and WORD with char and ushort; second, why use a LUT to unpack the bits instead of transferring the buffer directly into two arrays of char and ushort; and third, why is this not working as written? That is, isn't pImage a pointer to the unpacked 12-bit image? Maybe that pImage array of chars needs to be copied into yet another array of ushort two bytes at a time first? Is there a more efficient way to do this?

void unpack_mono12_image( void* pSrcTop, int32 srcRowbytes, void* pDstTop, int32 dstRowbytes, int32 width, int32 height )
{
    WORD lut1[65536];
    WORD lut2[65536];

    // Build LUTs that map any pair of adjacent packed bytes, read as one
    // little-endian WORD, straight to an unpacked 12-bit pixel value.
    int i, j;
    for( i=0; i<65536; i++ )
    {
        WORD w = (WORD)i;
        BYTE* p = (BYTE*)&w;    // p[0] = low byte, p[1] = high byte (little-endian)

        lut1[i] = (p[0] << 4) + (p[1] & 0x0F);          // even pixel: bits 11..4 from B[0], bits 3..0 from low nibble of B[1]
        lut2[i] = (p[1] << 4) + ((p[0] & 0xF0) >> 4);   // odd pixel:  bits 11..4 from B[2], bits 3..0 from high nibble of B[1]
    }

    // unpack MONO12 row by row and copy
    char* src = (char*)pSrcTop;
    char* dst = (char*)pDstTop;

    for( i=0; i<height; i++ )
    {
        WORD* pDst = (WORD*)(dst + dstRowbytes * i);
        BYTE* pSrc = (BYTE*)(src + srcRowbytes * i);
        for( j=0; j<width/2; j++ )
        {
            // Each lookup reads a 2-byte window, and pSrc advances one byte
            // at a time, so 3 source bytes are consumed per 2 output pixels.
            *pDst++ = lut1[*(WORD*)pSrc++];   // window B[0],B[1] -> even pixel
            *pDst++ = lut2[*(WORD*)pSrc++];   // window B[1],B[2] -> odd pixel
            pSrc++;                           // skip past B[2] to the next group
        }
    }
}
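To see why the mixed types and the one-byte pointer stride work: each lut1/lut2 lookup reinterprets two adjacent packed bytes as a single little-endian WORD index, and the table entry is the fully assembled pixel. A small self-contained check of that arithmetic, assuming a little-endian host as the original code does (the byte values 0xAB and 0x3C are made up for illustration):

#include <cassert>
#include <cstdint>

int main()
{
    // B[0] = 0xAB carries P[0] bits 11..4; B[1] = 0x3C carries
    // P[1] bits 3..0 in its high nibble and P[0] bits 3..0 in its low nibble.
    uint8_t b0 = 0xAB, b1 = 0x3C;

    // On a little-endian host, *(WORD*)pSrc with pSrc at B[0] produces:
    uint16_t idx = (uint16_t)( b0 | (b1 << 8) );    // 0x3CAB

    // What lut1[idx] computes from the low/high bytes of the index:
    uint16_t p0 = (uint16_t)((( idx & 0xFF ) << 4) + (( idx >> 8 ) & 0x0F ));

    assert( p0 == 0x0ABC );   // P[0] reassembled: 0xAB0 | 0xC
    return 0;
}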

...

int32 rowbytes   = bufframe.width * 2;
int32 framebytes = rowbytes * bufframe.height;
char* pImage = new char[ framebytes ];
memset( pImage, 0, framebytes );

unpack_mono12_image( bufframe.buf, bufframe.rowbytes, pImage, rowbytes, bufframe.width, bufframe.height );

// cv::Mat takes rows (height) first, then cols (width)
cv::Mat img( bufframe.height, bufframe.width, CV_16U, pImage );

// note: img already wraps pImage, so this pointer/memcpy pair
// copies the buffer onto itself and can be dropped
ushort* pointer_to_data_start = img.ptr<ushort>();
memcpy( pointer_to_data_start, pImage, bufframe.height * rowbytes );

cv::imshow( "image", img );
cv::waitKey( 30 );

delete[] pImage;   // allocated with new[], so delete[]
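One further display note: cv::imshow divides 16-bit pixels by 256 before rendering, so 12-bit data (max 4095) appears nearly black. Scaling by 16 first stretches it into the full 16-bit range; a minimal sketch using the img above:

cv::Mat disp;
img.convertTo( disp, CV_16U, 16.0 );   // map 0..4095 up toward 0..65520 for visible contrast
cv::imshow( "image", disp );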

Comments

Well, it seems this code actually does work. I didn't need to create a pointer or do the memcpy, so that part is unnecessary. I'm assuming the LUT makes the per-pixel unpacking cheaper, and that is the logic behind doing it this way. I'm still not sure why WORD and BYTE are mixed with char and ushort; I'll try changing them all to char and ushort to see if it still functions the same.

jjsword ( 2019-09-30 17:49:11 -0600 )
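On the type question: WORD and BYTE are Windows typedefs for unsigned short and unsigned char, so fixed-width standard types are a drop-in replacement; note that plain char may be signed, which is why the code above only uses char* for row arithmetic and casts to BYTE*/WORD* before touching pixel data. A sketch of portable equivalents:

#include <cstdint>

typedef uint16_t WORD;   // Windows WORD: 16-bit unsigned (matches ushort)
typedef uint8_t  BYTE;   // Windows BYTE: 8-bit unsigned (unlike plain char, never signed)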