
OpenCV - OpenGL - OpenCL Interop

asked 2020-07-09 17:59:53 -0600

updated 2020-08-19 18:46:49 -0600

Hi guys

I'm writing a very basic program to monitor performance when copying from a cv::ogl::Texture2D to a cv::ogl::Buffer (using the copyTo function), and from there to an OpenCL cv::UMat (using cv::ogl::mapGLBuffer). On paper this should all stay on the GPU, but I'm having trouble even running this code:

#include "mainwindow.h"
#include <QApplication>

#include <opencv2/core.hpp>
#include <opencv2/core/opengl.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/core/mat.hpp>

#include <QOpenGLWidget>
#include <QOpenGLExtraFunctions>
#include <QOpenGLShaderProgram>
#include <QOpenGLExtensions>

cv::UMat cvUMat;
cv::ogl::Texture2D* cvglTexture;
cv::ogl::Buffer cvglBuffer;

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();

    cv::ocl::setUseOpenCL(true);
    assert(cv::ocl::haveOpenCL());
    assert(cv::ocl::useOpenCL());
    cv::ocl::Context::getDefault().create(cv::ocl::Device::TYPE_GPU);

    cvglTexture->create(640,480,cv::ogl::Texture2D::Format::RGBA);

    return a.exec();
}

This is the error I get:

Exception at 0x7ffc886da799, code: 0xe06d7363: C++ exception, flags=0x1 (execution cannot be continued) (first chance) in opencv_world430d!cv::UMat::deallocate

And this is my stack:

1  RaiseException         KERNELBASE           0x7ffc886da799 
2  CxxThrowException      VCRUNTIME140D        0x7ffc67097ec7 
3  cv::UMat::deallocate   opencv_world430d     0x7ffc12f21236 
4  cv::UMat::deallocate   opencv_world430d     0x7ffc12f21387 
5  cv::UMat::deallocate   opencv_world430d     0x7ffc12e7f464 
6  main                   main.cpp         30  0x7ff79585297f 
7  WinMain                qtmain_win.cpp   104 0x7ff79585667d 
8  invoke_main            exe_common.inl   107 0x7ff795854aad 
9  __scrt_common_main_seh exe_common.inl   288 0x7ff79585499e 
10 __scrt_common_main     exe_common.inl   331 0x7ff79585485e 
11 WinMainCRTStartup      exe_winmain.cpp  17  0x7ff795854b39 
12 BaseThreadInitThunk    KERNEL32             0x7ffc894e7bd4 
13 RtlUserThreadStart     ntdll                0x7ffc8ac6ce51

I'm using Qt and OpenCV 4.3.0 (debug) built for VC15 (I'm using the Qt 5.12.0 MSVC2017 kit) on Windows. I have other projects that use cv::UMat and everything runs smoothly there, but it looks like there are some complications with the OpenGL interop. Any thoughts on where to get started and what to check would definitely be helpful!

Cheers!


Comments

Why use these interop features, which may or may not work reliably?

Why not use native OpenGL 4.3, so you can specify the compute shader and everything?

https://github.com/sjhalayka/qjs_comp...

There is no need for OpenCL.

sjhalayka ( 2020-08-19 20:57:21 -0600 )
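For readers following this suggestion: the core of the native-GL approach is a small compute shader dispatched from C++. A minimal sketch, assuming an active OpenGL 4.3+ context and an R32F texture already bound to image unit 0 (as in the answer below); the shader source here is illustrative, not taken from the linked repo:

// Minimal GLSL compute shader that writes a constant into the image at binding 0.
const char* cs_src =
    "#version 430\n"
    "layout(local_size_x = 16, local_size_y = 16) in;\n"
    "layout(r32f, binding = 0) uniform image2D img_output;\n"
    "void main() {\n"
    "    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), vec4(0.5));\n"
    "}\n";

// ... compile cs_src with glCreateShader(GL_COMPUTE_SHADER), attach, link,
//     and glUseProgram the resulting program ...

// One work group covers a 16x16 tile; round up to cover the whole texture.
glDispatchCompute((tex_w + 15) / 16, (tex_h + 15) / 16, 1);
glMemoryBarrier(GL_ALL_BARRIER_BITS);  // conservative: make the image writes visible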

btw:

cvglTexture->create(640,480,cv::ogl::Texture2D::Format::RGBA);

cvglTexture is never initialized. Why use a pointer at all?

cv::ocl::Context::getDefault().create(cv::ocl::Device::TYPE_GPU);

please check the return value!

also:

std::cout << cv::getBuildInformation();

please check whether OpenGL support is enabled at all

berak ( 2020-08-20 02:01:34 -0600 )
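A minimal sketch of both checks, assuming OpenCV 4.x; the helper name is made up for illustration, and the "OpenGL support" line is what cv::getBuildInformation() prints when interop is compiled in:

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>

// Hypothetical helper: verify the OpenCL context and print the build info.
bool checkInteropPrerequisites()
{
    // create() returns false when no matching OpenCL device is available.
    if (!cv::ocl::Context::getDefault().create(cv::ocl::Device::TYPE_GPU))
    {
        std::cerr << "Failed to create an OpenCL GPU context\n";
        return false;
    }

    // Look for the "OpenGL support: YES" line in the build information.
    std::cout << cv::getBuildInformation() << std::endl;
    return true;
}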

Thanks for your replies guys!

The reason I'm trying to get the OpenGL/OpenCL interop to work is that I'm trying to use OpenCV's machine learning modules on the GPU. UMats are backed by OpenCL, which is great, but I need to deliver the input data in the form of OpenGL textures.

I actually made a little progress with this. Berak, you were correct: I thought I had OpenGL support in this build but I didn't. I built OpenCV from source with OpenGL support and was able to get past the initial issue.

The main issue I face right now is copying the OpenGL buffer data over to the OpenCL UMat. In the code I'm about to post, I'm able to copy an OpenGL texture into a buffer. This works OK, though it still uses the CPU, which, if I understand correctly, it shouldn't, but that's fine ...

rtavakkoli ( 2020-09-27 16:26:51 -0600 )

Here is some code to better clarify my issue right now:

// OpenGL / OpenCL interop objects
cv::ogl::Texture2D cvGLTexture;
cv::ogl::Buffer cvGLBuffer;

// Wrap the existing OpenGL texture (autoRelease = false, so OpenCV does not own it)
cvGLTexture = cv::ogl::Texture2D(cv::Size(nodeSpecs.frameWidth, nodeSpecs.frameHeight),
                                 cv::ogl::Texture2D::Format::DEPTH_COMPONENT,
                                 spectrogramHandle, false);

// OpenGL pack PBO (the autoRelease argument is left at its default here)
cvGLBuffer = cv::ogl::Buffer(cv::Size(nodeSpecs.frameWidth, nodeSpecs.frameHeight),
                             CV_32FC1, cv::ogl::Buffer::Target::PIXEL_PACK_BUFFER /*, true */);

// Copy from texture to PBO
cvGLTexture.copyTo(cvGLBuffer, CV_32F);

// Map the PBO into OpenCL and use the UMat (a mapping, not a copy)
cvUMat = cv::ogl::mapGLBuffer(cvGLBuffer);
rtavakkoli ( 2020-09-27 16:29:59 -0600 )
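One detail worth flagging in the snippet above: cv::ogl::mapGLBuffer shares the buffer with OpenCL rather than copying it, and OpenCV provides cv::ogl::unmapGLBuffer to release the mapping before OpenGL touches the PBO again. A minimal sketch using the same variable names:

// Map the PBO into OpenCL, run OpenCL-backed cv:: code on it, then unmap.
cv::UMat cvUMat = cv::ogl::mapGLBuffer(cvGLBuffer, cv::ACCESS_READ);
// ... process cvUMat with OpenCL-enabled OpenCV functions here ...
cv::ogl::unmapGLBuffer(cvUMat);  // release the mapping back to OpenGL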

Hmmm. I am not familiar with the cv::ogl stuff.

I use a vector of float or unsigned int to hold the texture data.

Are you familiar with the vector container?

sjhalayka ( 2020-09-28 22:18:43 -0600 )

Yes, I'm familiar with the vector container. The issue is that the input will always be in the form of an OpenGL texture handle, which I'll need to use to get a UMat, ideally all on the GPU.

rtavakkoli ( 2020-10-04 20:19:34 -0600 )
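For reference, a sketch of that full path under the constraints described above: a current OpenGL context, an OpenCV build with OpenGL support, and a raw GLuint handle. The names textureHandle, width, and height are placeholders, and the format/depth are assumed to match the earlier depth-texture snippet:

// Wrap the existing GL texture without taking ownership (autoRelease = false).
cv::ogl::Texture2D tex(cv::Size(width, height),
                       cv::ogl::Texture2D::Format::DEPTH_COMPONENT,
                       textureHandle, false);

// Stage the texture into a pixel-pack buffer; this stays on the GPU.
cv::ogl::Buffer pbo(cv::Size(width, height), CV_32FC1,
                    cv::ogl::Buffer::Target::PIXEL_PACK_BUFFER);
tex.copyTo(pbo, CV_32F);

// Share the PBO with OpenCL, feed the UMat to OpenCV, then unmap.
cv::UMat umat = cv::ogl::mapGLBuffer(pbo);
// ... run the ml/dnn inference on umat here ...
cv::ogl::unmapGLBuffer(umat);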

1 answer


answered 2020-10-06 11:59:36 -0600 by sjhalayka

updated 2020-10-06 12:00:24 -0600

I'm not so sure that there is such interoperability between the two APIs. OpenGL didn't get compute shaders until v4.3, and once it did, there was much less need to bring in OpenCL for this kind of work. That said, you can always read the texture's contents from the GPU back into a CPU buffer.

// Assumes an active OpenGL context; tex_w and tex_h are the texture dimensions.
GLuint tex_output = 0;
const GLsizei tex_w = 640, tex_h = 480;

const size_t num_output_channels = 1;
vector<float> output_pixels(tex_w * tex_h * num_output_channels, 0.0f);

// Create a single-channel float texture for the compute shader to write to.
glGenTextures(1, &tex_output);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_output);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, tex_w, tex_h, 0, GL_RED, GL_FLOAT, NULL);
glBindImageTexture(0, tex_output, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);

// ... dispatch the compute shader here ...

// Read the texture back into the CPU-side vector.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_output);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_FLOAT, &output_pixels[0]);
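If the strictly-GPU requirement can be relaxed, the readback buffer can then be wrapped and uploaded into a UMat; a short sketch using the names above:

// Wrap the CPU vector without copying, then upload to an OpenCL-backed UMat.
cv::Mat cpu(tex_h, tex_w, CV_32FC1, output_pixels.data());
cv::UMat gpu;
cpu.copyTo(gpu);  // host-to-device copy when OpenCL is enabled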
