protonmesh's profile - activity

2019-06-17 16:55:10 -0600 received badge  Popular Question (source)
2015-11-12 17:28:15 -0600 received badge  Good Question (source)
2015-07-11 09:38:48 -0600 received badge  Nice Question (source)
2014-08-08 08:52:37 -0600 received badge  Critic (source)
2014-08-06 15:27:48 -0600 received badge  Student (source)
2014-08-06 15:18:52 -0600 asked a question OpenCV Displaying UMat Efficiently

I'm excited about OpenCV's transparent API design and the ability to hardware-accelerate image processing on a UMat on platforms that support it.

But how do we go about efficiently displaying a UMat?

// (Naive) Approach 1: Displaying UMat with imshow + non-OpenGL namedWindow
#include <opencv2/opencv.hpp>
#include <tchar.h>

using namespace cv;

int _tmain(int argc, _TCHAR* argv[])
{
    std::string window_name = "Displaying UMat";
    Mat img_host = imread("Resources/win7.jpg");
    UMat img_device;

    img_host.copyTo(img_device);        // upload: host Mat -> OpenCL-backed UMat

    imshow(window_name, img_device);
    waitKey();
    return 0;
}

In the naive approach where imshow uses Win32 GDI display, the UMat must be copied from the OpenCL device (GPU) to the host (CPU), correct?
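Roughly, I assume it boils down to the explicit equivalent below (a sketch of my assumption, reusing window_name and img_device from the snippet above, not what highgui literally does):

Mat host_copy;
img_device.copyTo(host_copy);       // OpenCL device -> host transfer
imshow(window_name, host_copy);     // the GDI backend then draws from host memory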

// Approach 2: Displaying UMat with imshow + namedWindow(OPENGL)
#include <opencv2/opencv.hpp>
#include <tchar.h>

using namespace cv;

int _tmain(int argc, _TCHAR* argv[])
{
    std::string window_name = "Displaying UMat";
    Mat img_host = imread("Resources/win7.jpg");
    UMat img_device;

    img_host.copyTo(img_device);

    namedWindow(window_name, WINDOW_OPENGL | WINDOW_AUTOSIZE);  // window with its own OpenGL context

    imshow(window_name, img_device);
    waitKey();
    return 0;
}

I would've assumed that in calling imshow with a UMat, the behaviour would be similar to what is done for GpuMat: copy to a separate buffer -> bind the buffer as GL_PIXEL_UNPACK_BUFFER -> create a texture from the buffer -> render the texture.
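For reference, my mental model of the GpuMat route with the ogl wrappers is roughly this (a sketch, assuming a current OpenGL context and a cv::cuda::GpuMat named gpu_img; not taken from the OpenCV sources):

ogl::Buffer pbo;
pbo.copyFrom(gpu_img, ogl::Buffer::PIXEL_UNPACK_BUFFER);  // device-to-device copy into a pixel unpack buffer
ogl::Texture2D tex(pbo);                                  // texture filled from the bound PBO
ogl::render(tex);                                         // draw the texture as a screen-aligned quad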

But for displaying a UMat, it seems getMat() is called on the UMat, which effectively maps the OpenCL device memory into host (CPU) address space. Then glTexSubImage2D is called with a pointer to the mapped OpenCL buffer.

I don't know the mechanics of how such a texture upload executes. It would be great if the driver knew that the data pointer passed to glTexSubImage2D is a mapped pointer (to GPU memory) and performed a DMA copy from the mapped region into the texture object's data store.

Or does the less efficient alternative occur, i.e. the CPU copies the UMat's mapped memory into CPU memory, and the OpenGL driver then uploads that data back into a texture object? Does the data make a round trip from GPU -> CPU -> GPU?
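To make that concrete, the path I am describing looks something like this (a sketch of my reading, not the actual highgui implementation; assumes the includes above plus the GL headers):

Mat mapped = img_device.getMat(ACCESS_READ);            // maps (or downloads) the OpenCL buffer for host access
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                mapped.cols, mapped.rows,
                GL_BGR, GL_UNSIGNED_BYTE, mapped.data); // driver reads through the mapped pointer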

2014-08-06 09:49:34 -0600 received badge  Necromancer (source)
2014-08-06 09:11:50 -0600 answered a question OpenGL interoperability

In the draw callback, consider the use of:

void ogl::render(const Texture2D& tex,
                 Rect_<double> wndRect = Rect_<double>(0.0, 0.0, 1.0, 1.0),
                 Rect_<double> texRect = Rect_<double>(0.0, 0.0, 1.0, 1.0))

which encapsulates the GL commands necessary to display a texture mapped to a rectangle in the screen plane.
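
For example, a minimal sketch of that usage (the window and file names are illustrative):

#include <opencv2/core/opengl.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>

static void onDraw(void* userdata)
{
    cv::ogl::Texture2D* tex = static_cast<cv::ogl::Texture2D*>(userdata);
    cv::ogl::render(*tex);   // default rects map the whole texture onto the whole window
}

int main()
{
    cv::namedWindow("gl", cv::WINDOW_OPENGL);           // creates the window and its OpenGL context
    cv::ogl::Texture2D tex(cv::imread("image.jpg"));    // upload while that context is current
    cv::setOpenGlDrawCallback("gl", onDraw, &tex);      // invoked on every repaint
    cv::updateWindow("gl");                             // request a redraw
    cv::waitKey();
    cv::setOpenGlDrawCallback("gl", 0, 0);              // detach before tex is destroyed
    return 0;
}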

2014-08-06 08:59:45 -0600 answered a question OpenCV error by using OpenGL

I was having the same problem. My mistake:

Making OpenGL calls, for example the one you posted:

cv::ogl::Texture2D texture(img);

without first having an OpenGL context current on the calling thread. A Texture2D has to be created while an OpenGL context is active, because the texture object belongs to that context.

The OpenGL context can be constructed either through the OpenCV-provided HighGUI module, using namedWindow(<window_name>, WINDOW_OPENGL), which creates a new window with a corresponding OpenGL context and makes that context current (see setOpenGlContext(..) for switching between the contexts of multiple named windows).

The context can also be created through third-party windowing / multimedia libraries (GLUT, OpenTK, SDL, GLFW, etc.) or full-blown UI toolkits (Qt, wxWidgets); see https://www.opengl.org/wiki/Related_toolkits_and_APIs. Whichever library you use to create the context, make sure it is made current (the library should provide a call for this); all subsequent calls from the ogl:: namespace should then work.
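
For illustration, switching between the contexts of two OpenCV-created windows looks roughly like this (a sketch; window names and the dummy image are illustrative):

cv::namedWindow("win_a", cv::WINDOW_OPENGL);   // creates and activates context A
cv::namedWindow("win_b", cv::WINDOW_OPENGL);   // creates and activates context B

cv::Mat img = cv::Mat::zeros(64, 64, CV_8UC3);

cv::setOpenGlContext("win_a");                 // make window A's context current again
cv::ogl::Texture2D tex_a(img);                 // this texture lives in context A

cv::setOpenGlContext("win_b");
cv::ogl::Texture2D tex_b(img);                 // this texture lives in context B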