# Barrel distortion with LUT from PNG

Hi.

I would like to apply barrel distortion to a camera stream in order to display the video on a Zeiss VR One. In the VR One Unity3D SDK I found lookup tables for the distortion stored as PNG files, and was very happy since I did not expect to find something like that. I would like to use them for the barrel distortion, since a LUT lookup is fast and I think this would be the best way to get this working in real time.

My problem is that I have no idea how to use the tables. There are six of them: one per color channel (R, G, B) for each of the X and Y axes. Does anybody have an idea how I can read the files and apply the distortion? This is the .png file for the blue channel on the X axis:

I tried the following code, as suggested by Tetragramm:

```cpp
Mat prepareLUT(char* filename){
    Mat first = imread(filename); // note: the imread call was missing in the original snippet
    Mat floatmat;
    first.convertTo(floatmat, CV_32F);
    std::vector<Mat> channels(3); // was a malloc'd Mat array, which leaves the headers uninitialized
    split(floatmat, channels);
    Mat res(Size(960, 1080), CV_32FC1);
    res = channels[0] + channels[1]/255.0 + channels[2]/65025.0;
    //scaleAdd(channels[1], 1.0/255, channels[0], res);
    //scaleAdd(channels[2], 1.0/65025, res, res);
    return res;
}
```


I do this for all 6 LUTs, and then I split my image and apply the remap like this:

```cpp
std::vector<Mat> channels(3);
split(big, channels);

std::vector<Mat> remapped;

Mat m1;
remap(channels[0], m1, data->lut_xb, data->lut_yb, INTER_LINEAR);
remapped.push_back(m1);
Mat m2;
remap(channels[1], m2, data->lut_xg, data->lut_yg, INTER_LINEAR);
remapped.push_back(m2);
Mat m3;
remap(channels[2], m3, data->lut_xr, data->lut_yr, INTER_LINEAR);
remapped.push_back(m3);

Mat merged;
merge(remapped, merged);
```


But the image does not turn out right. This is supposed to be an image of my hand:

Any idea what might be wrong?



Thanks to Tetragramm, I was able to parse the png files correctly and apply the LUT to the images.

This is how I read the png files:

```cpp
Mat prepareLUT(char* filename){
    Mat first = imread(filename); // note: loading was omitted in the original snippet
    Mat floatmat;
    first.convertTo(floatmat, CV_32F);
    std::vector<Mat> channels(3);
    split(floatmat, channels);
    Mat res(Size(960, 1080), CV_32FC1);
    // OpenCV loads BGR, so channels[2] is red, channels[1] green, channels[0] blue
    res = channels[2]/255.0 + channels[1]/(255.0*255.0) + channels[0]/(255.0*255.0*255.0);
    return res;
}
```


This leaves me with Mats containing values between 0 and 1.00393694733 (that is, 1 + 1/255 + 1/65025). The remap function expects values between 0 and width for the x-LUT and between 0 and height for the y-LUT. Therefore the x-LUTs have to be multiplied by the width and the y-LUTs by the height, like this:

```cpp
sprintf(lutFile, "%s%s", getenv("EXTERNAL_STORAGE"), "/HeadsetAssets/LUT_XB.png");
data->lut_xb.create(Size(960, 1080), CV_32FC1);
data->lut_xb = prepareLUT(lutFile)*960;
sprintf(lutFile, "%s%s", getenv("EXTERNAL_STORAGE"), "/HeadsetAssets/LUT_XG.png");
data->lut_xg.create(Size(960, 1080), CV_32FC1);
data->lut_xg = prepareLUT(lutFile)*960;
sprintf(lutFile, "%s%s", getenv("EXTERNAL_STORAGE"), "/HeadsetAssets/LUT_XR.png");
data->lut_xr.create(Size(960, 1080), CV_32FC1);
data->lut_xr = prepareLUT(lutFile)*960;
sprintf(lutFile, "%s%s", getenv("EXTERNAL_STORAGE"), "/HeadsetAssets/LUT_YB.png");
data->lut_yb.create(Size(960, 1080), CV_32FC1);
data->lut_yb = prepareLUT(lutFile)*1080;
sprintf(lutFile, "%s%s", getenv("EXTERNAL_STORAGE"), "/HeadsetAssets/LUT_YG.png");
data->lut_yg.create(Size(960, 1080), CV_32FC1);
data->lut_yg = prepareLUT(lutFile)*1080;
sprintf(lutFile, "%s%s", getenv("EXTERNAL_STORAGE"), "/HeadsetAssets/LUT_YR.png");
data->lut_yr.create(Size(960, 1080), CV_32FC1);
data->lut_yr = prepareLUT(lutFile)*1080;
```


This gives me a data struct with 6 LUT-Mats (implementation of the data structure is trivial).

I then apply the remap function to every channel of my input:

```cpp
std::vector<Mat> channels(3);
split(inframe, channels);

std::vector<Mat> remapped;

Mat m1;
remap(channels[0], m1, data->lut_xr, data->lut_yr, INTER_LINEAR);
Mat m2;
remap(channels[1], m2, data->lut_xg, data->lut_yg, INTER_LINEAR);
Mat m3;
remap(channels[2], m3, data->lut_xb, data->lut_yb, INTER_LINEAR);
remapped.push_back(m1);
remapped.push_back(m2);
remapped.push_back(m3);

Mat merged;
merge(remapped, merged);
```


The Mat in merged is then the original image with the barrel distortion applied. As a sample I used a grid like this:

And the output looked like this:

If you look at the output through the Zeiss VR One, all lines are straight and the colors at the borders stay black and white, as in the source image. In the Google Cardboard apps you find in the Play Store, the colors fringe around the borders of the screen; I think the Google Cardboard SDK does not use a separate distortion per color channel like the one you get from the Zeiss VR One SDK.


You need to be using the remap function. It takes two tables, one for the x and one for the y, and you would apply it to each color separately.

Take a look at the basic tutorials to see how to do the simple things like reading in images and the basics of OpenCV. Here's the one for re-mapping images.


This looks very promising, thanks for your answer. I will check it out as soon as I am home.

( 2016-04-22 07:26:50 -0500 )

Ok, the tutorial is great. It shouldn't be too hard to implement. The only thing left for me to do is to construct the mapping matrices from the .png files, but I found the following in the source code of the VR One Unity3D library:

```
// Decoding a color value from the texture into a float.
// Similar to Unity's DecodeFloatRGBA and DecodeFloatRG.
float DecodeFloatRGB(float3 rgb) {
    return dot(rgb, float3(1.0, 1.0/255.0, 1.0/65025.0));
}
```


so I guess a simple convertTo(pngMat, CV_32FC1); should do, right?

( 2016-04-22 11:52:56 -0500 )

Maybe? I'm not sure what that snippet is supposed to be doing.

( 2016-04-22 15:16:30 -0500 )

Ok, I figured out what this is doing, and no, that is not how convertTo works. convertTo just takes the contents directly: if the RGB image is (100, 50, 25), the float conversion will be (100.0, 50.0, 25.0).

Use the convertTo function to make it CV_32F, then the split, then the math functions. scaleAdd is the one you need.

( 2016-04-23 00:35:05 -0500 )

But CV_32FC1 only has one channel, so how could the result be (100.0, 50.0, 25.0)? scaleAdd seems perfect for this! I just have to read the .png, convert it to CV_32F, split it, and then call scaleAdd twice (once with 1/255 and once with 1/65025), leaving me with a single CV_32F Mat I can use for the remap function. Thanks a lot for your help! The only thing I still have to figure out is the order of the colors in the snippet above, so whether to multiply the red channel by 1/65025 or the blue one.

( 2016-04-23 04:38:07 -0500 )

convertTo just changes the type, not the number of channels, as it says in the documentation.

( 2016-04-23 10:48:48 -0500 )

Thanks for clarifying that!

I almost got this to work now, but something is still off. The image is not quite right, so I guess I have to play around a little with the channel order and I hope that will fix it.

However, I was shocked to see the framerate drop from 28fps to 1fps. I think this comes from all the rescaling and converting I have to do, since my input buffer is in RGBA (the LUTs are in BGR) and at a different size than the LUTs. Would it be the same if I scaled the LUTs once when I initialize them? And can I merge them into an RGB Mat, convert that to RGBA, split the channels, and be left with 8 LUTs (instead of 6), one for each channel of RGBA and for x/y?

If that would result in the same, I would save ...

( 2016-04-24 07:50:42 -0500 )

I can't figure out how to apply the remap, or I still read the LUTs wrong, so I updated my question. Do you see any error in my code?

( 2016-04-24 10:52:37 -0500 )

Hmm. I don't know why, but I know what the error is, if that makes sense. The remap function uses pixel locations, so for a 1000x1000 image (for example) the maps could contain values up to 1000. The way you are calculating it, your highest possible value is 255.

So there's probably some sort of normalization going on in the LUT.

Also, don't the LUTs remain constant across frames? That would be my expectation: you initialize the maps once and use them over and over.

( 2016-04-24 20:20:37 -0500 )

That's what I have been doing. prepareLUT runs only once per LUT; only the last block of code runs every frame. But I still have to convert the frame to RGB to apply the LUT. It would be great if I could somehow transform the LUT tables to RGBA, but I thought it through and I do not think such a transformation is possible. Am I right?

I will try to divide the LUTs by 255 and multiply by the height/width; that should leave me with a valid LUT.

( 2016-04-25 01:30:12 -0500 )


## Stats

Asked: 2016-04-21 07:12:36 -0500

Seen: 791 times

Last updated: Apr 25 '16