Why is CV_32FC1 not normalized to 0-1?

asked 2017-12-06 19:25:13 -0600 by svenevs

I needed to call an external CUDA library function that assumes the input data is CV_32FC1. I'm working with data that comes in on a frame-to-frame basis, in RGBA, and at this point the data is already on the GPU. Instead of downloading the image just to have OpenCV convert it, I figured I'd do it myself. The most related question I could find was this SO post asking about the difference between CV_32F and CV_32FC1. The original function I wrote was:

/// will normalize to [0, 1] then use sRGB conversion to return single float gray
inline __host__ __device__
float rgbaToGray(const uchar4 &src) {
    static constexpr float Wr = 0.2126f;
    static constexpr float Wg = 0.7152f;
    static constexpr float Wb = 0.0722f;
    static constexpr float inv255 = 1.0f / 255.0f;

    float r = Wr * ((float) src.x) * inv255;
    float g = Wg * ((float) src.y) * inv255;
    float b = Wb * ((float) src.z) * inv255;

    return r + g + b;
}

As it turns out, dividing by 255.0f was the fatal flaw. What assumptions are made about the CV_32FC1 data type? Typically, when I think about floating point color values, I assume they are in the range [0, 1] (this is the expectation in OpenGL, at least).
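For reference, the variant without the normalization (so the gray value stays in [0, 255], which appears to be what the library expects; the name rgbaToGray255 is just for illustration) is simply:

inline __host__ __device__
float rgbaToGray255(const uchar4 &src) {
    // same luma weights as above, but no division by 255: the result
    // stays in the same [0, 255] range as the 8-bit input channels
    static constexpr float Wr = 0.2126f;
    static constexpr float Wg = 0.7152f;
    static constexpr float Wb = 0.0722f;
    return Wr * ((float) src.x) + Wg * ((float) src.y) + Wb * ((float) src.z);
}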

I couldn't really find any explicit documentation on what the value ranges are. Do such expectations exist in OpenCV? For example, if I wanted to use CV_32FC3 for RGB values, would those values need to be in [0, 1]?

Thank you for any guidance on what OpenCV's assumptions about image values are. The main reasons I was avoiding using OpenCV directly were:

  1. The data was already on the GPU, and I'm not assuming users of the code have the CUDA backend for OpenCV installed.
  2. My understanding is OpenCV demands column-major BGRA storage. My data comes in from the device as row-major RGBA.

2 answers

answered 2017-12-06 21:33:26 -0600 by Tetragramm

OpenCV is row-major, not column-major. The data is stored as Row 1: BGRABGRA..., Row 2: BGRABGRA..., and so on.
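For example, with an 8-bit BGRA Mat (a minimal sketch; the size and indices are arbitrary), element (row, col), channel c lives at data + row*step + col*4 + c:

cv::Mat img(480, 640, CV_8UC4);                // BGRA, row-major
uchar blue  = img.ptr<uchar>(10)[20 * 4 + 0];  // B of the pixel at row 10, col 20
uchar alpha = img.at<cv::Vec4b>(10, 20)[3];    // A of the same pixel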

I don't see anything obviously wrong with your code. That will give you a CV_32F between 0 and 1.

If you mean that you go back to BGR later using an OpenCV function, then yes. OpenCV doesn't apply any scaling automatically, so if you put a float gray image through cvtColor, the result would still be a float between 0 and 1. Then, when you convert it back to CV_8U, you have to scale it by 255 again yourself.
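A minimal sketch of that round trip (the file name and variable names are just placeholders):

cv::Mat bgr8 = cv::imread("frame.png");        // CV_8UC3, values in [0, 255]
cv::Mat bgr32, gray32, gray8;
bgr8.convertTo(bgr32, CV_32F, 1.0 / 255.0);    // CV_32FC3 in [0, 1]
cv::cvtColor(bgr32, gray32, CV_BGR2GRAY);      // still float, still [0, 1]
gray32.convertTo(gray8, CV_8U, 255.0);         // scale back up for 8-bit yourself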

Comments

I see. I guess my exposure to high dynamic range image formats misled me. It seems reasonable not to auto-scale floats, especially since cv::Mat is much more generic than images. I'm a little surprised by the row-major storage; I'll have to double-check the code that led me to believe it was column-major!

svenevs ( 2017-12-07 20:36:25 -0600 )

answered 2017-12-06 20:16:31 -0600 by sjhalayka, updated 2017-12-07 16:12:12 -0600

The CV_32FC1 Mat type stores single-precision floats, so values can range from -3.40282e+38 to 3.40282e+38, with 1.17549e-38 as the smallest positive normalized value (I got these numbers from std::numeric_limits<float>). When displayed, a value less than 0 is shown as black and a value greater than 1 is shown as white. This all comes in handy when you want to flood fill all of the regions in your image with a unique colour and you have more than 65536 regions to contend with.
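A quick sketch of that display behaviour (imshow multiplies a float image by 255 for display and saturates, so anything outside [0, 1] clips):

cv::Mat f(100, 300, CV_32FC1);
f.colRange(0, 100).setTo(cv::Scalar(-0.5f));   // shown as black
f.colRange(100, 200).setTo(cv::Scalar(0.5f));  // shown as mid gray
f.colRange(200, 300).setTo(cv::Scalar(2.0f));  // shown as white
cv::imshow("CV_32FC1 display", f);
cv::waitKey(0);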

I just tried passing a Mat raw pointer (using the .data member variable) to glTexImage2D(), and it works quite well -- no rotation or swizzling of the image occurs:

Mat image = imread("dot.png");       // loads as 8-bit BGR
cvtColor(image, image, CV_BGR2RGB);  // OpenGL expects RGB order
GLuint tex_id = 0;
glGenTextures(1, &tex_id);
glBindTexture(GL_TEXTURE_2D, tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.cols, image.rows, 0, GL_RGB, GL_UNSIGNED_BYTE, image.data);

And for what it's worth, if you use a vertex + fragment shader to do render-to-texture, you're not limited to output values between 0 and 1. The same goes for compute shaders. It's only when you render to the screen that you want to play nice and make sure your output values are normalized.
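For example, a rough sketch of an off-screen floating-point colour attachment (fbo, float_tex, width and height are placeholder names) that stores whatever the shader writes, without clamping:

GLuint fbo = 0, float_tex = 0;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &float_tex);
glBindTexture(GL_TEXTURE_2D, float_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, float_tex, 0);
// fragment shader outputs written to this attachment are stored as-is, including values outside [0, 1]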

Comments

Thanks for the response! @Tetragramm explains in their answer that the storage order is row-major, which is why OpenGL is happy with it. For reference, cv::cvtColor will perform the scaling operations I was talking about, namely that floating point color is expected to be in [0, 1]. As you mention, there is nothing in OpenGL that prevents going outside that range; however, values greater than 1 will just be maxed out at 1 ("super-white").

svenevs ( 2017-12-07 20:40:05 -0600 )

I'll take your word for it, and will keep it in mind the next time I use cvtColor. It's unfortunate that it would truncate the data like that. What if those values encode properties such as a four-position? The full float range is necessary there.

... and, is it just my misunderstanding, or are the camera matrices column-major?

sjhalayka ( 2017-12-07 22:07:58 -0600 )

I haven't had a moment to check the column-major thing; I'll probably inspect it when I get home. However, I misread the documentation on cv::cvtColor. What they're saying is that if you use it, they assume you already have your matrix in a valid color range (e.g., their example shows that you need to divide everything by 255.0f first if you have a floating point matrix).

svenevs ( 2017-12-07 23:31:35 -0600 )

OK, sounds good. Here is my perspective projection matrix code (see get_perspective_matrix()):

http://answers.opencv.org/question/17...

sjhalayka ( 2017-12-08 10:49:06 -0600 )

cvtColor doesn't change the min/max values, other than the normal color space conversion. If you put in a float image of range [0,1], you get out a float image of range [0,1]. If you put in a float image of [0,255], you get out a float image of [0,255]. If you put in a CV_8U of [0,255], you get out a CV_8U of [0,255].

It's the conversions you use from CV_32F to CV_8U and back where you can insert the scaling operations.

Tetragramm ( 2017-12-10 16:07:31 -0600 )

@Tetragramm -- Thanks for the clarification. Do you know if there is a single statement that can be used to convert from CV_8UC3 to CV_32FC1? Right now I create a new Mat and use a double for loop to iterate over its pixels.

sjhalayka ( 2017-12-10 20:44:07 -0600 )

The .convertTo function is what you want. For example:

cv::Mat intMat = whatever;                       // e.g. a CV_8U input
cv::Mat floatMat;
intMat.convertTo(floatMat, CV_32F, scale, add);  // dst = src * scale + add, stored as CV_32F
floatMat.convertTo(intMat, CV_8U, scale, add);   // and back to CV_8U
Tetragramm ( 2017-12-10 21:04:54 -0600 )
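For the CV_8UC3 to CV_32FC1 case asked about above, convertTo only changes the depth, not the channel count, so a minimal sketch (file and variable names are placeholders) pairs cvtColor with convertTo:

cv::Mat bgr8 = cv::imread("input.png");        // CV_8UC3
cv::Mat gray8, gray32;
cv::cvtColor(bgr8, gray8, CV_BGR2GRAY);        // collapse 3 channels -> CV_8UC1
gray8.convertTo(gray32, CV_32F, 1.0 / 255.0);  // CV_32FC1, scaled into [0, 1]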
