
LookUp Table for 16-Bit Images

asked Nov 24 '14

updated Nov 25 '14

Hi all!

I have a 16-bit grayscale image and I want to reduce the gray values of its pixels. I tried to use LUT, but it seems it only works for 8-bit images. What is an efficient way to reduce a matrix through a LUT? Any help is appreciated!


Comments

I guess your options are either to convert to 8-bit first or to implement the function for 16-bit. AFAIK it doesn't exist yet, but it should be quite similar to the 8-bit case. As for the answers, they will appear; the accept page is currently bugged. The devs are fixing it and moving the hosting somewhere else.

StevenPuttemans (Nov 26 '14)
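
A minimal sketch of the first option (downscale to 8-bit, then use the built-in cv::LUT, which only accepts 8-bit input), assuming the 16-bit data spans the full 0..65535 range; the inverting table is just an illustration:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img16(240, 320, CV_16UC1);
    cv::randu(img16, cv::Scalar(0), cv::Scalar(65536));   // dummy 16-bit data

    cv::Mat img8;
    img16.convertTo(img8, CV_8U, 255.0 / 65535.0);        // scale 16-bit down to 8-bit

    cv::Mat lut(1, 256, CV_8U);                           // 256-entry table
    for (int i = 0; i < 256; i++)
        lut.at<uchar>(i) = (uchar)(255 - i);              // example: invert

    cv::Mat out;
    cv::LUT(img8, lut, out);                              // built-in 8-bit LUT
    return 0;
}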

2 answers


answered Dec 1 '14

Thank you for helping me solve this problem! Here is my code for a 16-bit lookup-table-based reduction. I hope it might be useful for someone!

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

Mat& ScanImageAndReduceC_16UC1(Mat& I, const unsigned short* const table);

int main()
{
    Size Img_Size(320,240);
    Mat Img_Source_16(Img_Size, CV_16UC1, Scalar::all(0));
    Mat Img_Destination_16;

    // 12-bit lookup table that inverts the input range 0..4095
    unsigned short LookupTable[4096];
    for (int i = 0; i < 4096; i++)
    {
        LookupTable[i] = 4095 - i;   // 4096-i would fall outside the 0..4095 range at i==0
    }

    // fill the source with a repeating 0..4094 ramp
    int i = 0;
    for (int Row = 0; Row < Img_Size.height; Row++)
    {
        for (int Col = 0; Col < Img_Size.width; Col++)
        {
            Img_Source_16.at<unsigned short>(Row,Col) = i;   // CV_16UC1 stores unsigned short, not short
            i++;
            if (i >= 4095)
                i = 0;
        }
    }

    // note: imshow divides 16-bit pixels by 256 for display, so the 12-bit ramp appears dark
    imshow("Img_Source", Img_Source_16);

    // time the reduction with OpenCV's tick counter
    Img_Destination_16 = Img_Source_16.clone();
    double t0 = (double)getTickCount();
    ScanImageAndReduceC_16UC1(Img_Destination_16, LookupTable);
    double seconds = ((double)getTickCount() - t0) / getTickFrequency();
    std::cout << "reduction took " << seconds << " s" << std::endl;

    imshow("Img_Destination", Img_Destination_16);
    waitKey(0);
    return 0;
}

Mat& ScanImageAndReduceC_16UC1(Mat& I, const unsigned short* const table)
{
    // accept only 16-bit unsigned matrices
    CV_Assert(I.depth() == CV_16U);

    int nRows = I.rows;
    int nCols = I.cols * I.channels();

    // a continuous matrix can be scanned as one long row
    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    for (int row = 0; row < nRows; ++row)
    {
        unsigned short* p = I.ptr<unsigned short>(row);
        for (int col = 0; col < nCols; ++col)
            p[col] = table[p[col]];   // *p++ = table[*p] was undefined behavior
    }

    return I;
}
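
As a follow-up, a minimal sketch of the same 16-bit reduction using Mat::forEach, assuming OpenCV 3.x or later (forEach did not exist when this thread was written); it parallelizes the pixel loop automatically:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img(240, 320, CV_16UC1);
    cv::randu(img, cv::Scalar(0), cv::Scalar(4096));   // dummy 12-bit data in 16-bit pixels

    std::vector<unsigned short> lut(4096);
    for (int i = 0; i < 4096; i++)
        lut[i] = 4095 - i;                             // same inverting table as above

    img.forEach<unsigned short>([&lut](unsigned short &px, const int*) {
        px = lut[px];                                  // table lookup per pixel
    });
    return 0;
}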

Comments

Nice one :) Are you up for making a PR and integrating this into OpenCV?

StevenPuttemans (Dec 1 '14)

Sure! It'll be my pleasure!

Balaji R (Dec 1 '14)

answered Nov 25 '14

kbarni

It's very easy to implement a custom function for LUT coloring.

See my answer in this topic: http://answers.opencv.org/question/50781/false-coloring-of-grayscale-image/

In short: you create an RGB lookup table of the desired length (65536 in this case); then for each gray pixel P, the false-colored pixel C is obtained as:

C[0]=LUT[P][0];
C[1]=LUT[P][1];
C[2]=LUT[P][2];
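
A minimal end-to-end sketch of this approach, assuming a 16-bit single-channel input; the blue-to-red ramp in the table is illustrative, not part of the original answer:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray(240, 320, CV_16UC1);
    cv::randu(gray, cv::Scalar(0), cv::Scalar(65536)); // dummy 16-bit data

    // 65536-entry BGR lookup table (blue -> red ramp, for illustration)
    std::vector<cv::Vec3b> lut(65536);
    for (int i = 0; i < 65536; i++)
    {
        uchar v = (uchar)(i >> 8);                     // map 16-bit value to 0..255
        lut[i] = cv::Vec3b((uchar)(255 - v), 0, v);
    }

    // C = LUT[P] for every pixel, as described above
    cv::Mat color(gray.size(), CV_8UC3);
    for (int r = 0; r < gray.rows; r++)
    {
        const unsigned short* pg = gray.ptr<unsigned short>(r);
        cv::Vec3b* pc = color.ptr<cv::Vec3b>(r);
        for (int c = 0; c < gray.cols; c++)
            pc[c] = lut[pg[c]];
    }

    cv::imshow("false color", color);
    cv::waitKey(0);
    return 0;
}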


Stats

Asked: Nov 24 '14

Seen: 5,855 times

Last updated: Dec 01 '14