Why is CV_32FC1 not normalized to 0-1?

I needed to call an external CUDA library function that assumes the input data is CV_32FC1. I'm working with data that comes in on a frame-by-frame basis, in RGBA, and at this point the data is already on the GPU. Instead of downloading an image and using OpenCV to convert it, I figured I'd just do the conversion myself. The most closely related question I could find was this SO post asking about the difference between CV_32F and CV_32FC1. The original function I wrote was:

#include <cuda_runtime.h>  // for uchar4

/// will normalize to [0, 1] then use sRGB conversion to return single float gray
inline __host__ __device__
float rgbaToGray(const uchar4 &src) {
    // Rec. 709 / sRGB luma coefficients
    static constexpr float Wr = 0.2126f;
    static constexpr float Wg = 0.7152f;
    static constexpr float Wb = 0.0722f;
    static constexpr float inv255 = 1.0f / 255.0f;  // the [0, 1] normalization in question

    float r = Wr * ((float) src.x) * inv255;
    float g = Wg * ((float) src.y) * inv255;
    float b = Wb * ((float) src.z) * inv255;

    return r + g + b;
}

As it turns out, dividing by 255.0f was the fatal flaw. What assumptions are made about the CV_32FC1 data type? Typically, when I think about floating-point color values, I assume they are in the range [0, 1] (that is the expectation in OpenGL, at least).
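Since removing the 1/255 scale is what made things work, my assumption is that the library simply wanted the gray values left in the 0-255 range. A sketch of that variant (same BT.709 weights, just no normalization):

#include <cuda_runtime.h>  // for uchar4

/// Same weighting as above, but leaving the result in [0, 255] instead of
/// [0, 1] -- my guess at what the external library actually expects.
inline __host__ __device__
float rgbaToGray255(const uchar4 &src) {
    constexpr float Wr = 0.2126f;
    constexpr float Wg = 0.7152f;
    constexpr float Wb = 0.0722f;

    return Wr * ((float) src.x)
         + Wg * ((float) src.y)
         + Wb * ((float) src.z);
}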

I couldn't really find any explicit documentation on what the expected value ranges are. Do such expectations exist in OpenCV? For example, if I wanted to use CV_32FC3 for RGB values, would those values need to be in [0, 1]?
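For what it's worth, the closest thing to a convention I could find is in OpenCV's own conversions: cv::Mat::convertTo leaves the range entirely up to the scale factor you pass, so CV_32F by itself doesn't seem to imply [0, 1] (I believe display functions like cv::imshow do treat float images as if they were in [0, 1], though). A small CPU-side sketch of what I mean, where "frame.png" is just a placeholder image:

#include <opencv2/opencv.hpp>

int main() {
    // The float range depends entirely on the scale factor passed to
    // convertTo, not on the CV_32F type itself.
    cv::Mat bgr8 = cv::imread("frame.png");       // CV_8UC3, values in [0, 255]

    cv::Mat f255, f01;
    bgr8.convertTo(f255, CV_32FC3);               // CV_32FC3, still in [0, 255]
    bgr8.convertTo(f01,  CV_32FC3, 1.0 / 255.0);  // CV_32FC3, scaled into [0, 1]
    return 0;
}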

Thank you for any guidance on what assumptions OpenCV makes about image values. The main reasons I was avoiding using OpenCV directly are:

  1. The data is already on the GPU, and I'm not assuming users of the code have OpenCV built with the CUDA backend.
  2. My understanding is that OpenCV demands column-major BGRA storage, whereas my data comes off the device as row-major RGBA (a CPU-side sketch of the round trip I was trying to avoid is below).
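For completeness, here is roughly the CPU-side round trip I was trying to avoid. This is only a sketch under my assumptions: hostRgba, width, and height stand in for the downloaded frame, and I'm assuming the tightly packed RGBA buffer can be wrapped in a Mat header directly. As far as I can tell, cv::cvtColor also uses different gray weights (0.299/0.587/0.114) than the 709 ones in my kernel, and it keeps the 0-255 range:

#include <opencv2/opencv.hpp>

// Hypothetical CPU-side equivalent of the kernel above. hostRgba is assumed
// to be the downloaded frame as a tightly packed RGBA buffer.
cv::Mat rgbaToGray32F(const unsigned char *hostRgba, int width, int height) {
    // Wrap the existing buffer in a Mat header (no copy is made here).
    cv::Mat rgba(height, width, CV_8UC4,
                 const_cast<unsigned char *>(hostRgba));

    cv::Mat gray8, gray32;
    cv::cvtColor(rgba, gray8, cv::COLOR_RGBA2GRAY);  // 8-bit gray, [0, 255]
    gray8.convertTo(gray32, CV_32FC1);               // CV_32FC1, still [0, 255]
    return gray32;
}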