I can multiply a 1080x1920-pixel CV_32FC1 Mat by another 3x3 Mat using CPU-based OpenCV, but when I convert the code to run on the GPU, I get an error. Here is my CPU code:
float matrix[3][3] = {{1.057311, -0.204043, 0.055648}, { 0.041556, 1.875992, -0.969256}, {-0.498535,-1.537150, 3.240479}};
Mat matrixMat = Mat(3, 3, CV_32FC1, matrix).t();
Mat orig_img_linear = linearMat.reshape(1, 1080*1920);
Mat color_matrixed_linear = orig_img_linear * matrixMat;
Mat final_color_matrixed = color_matrixed_linear.reshape(3, 1080);
When I run the following GPU code, I get an error:
cv::gpu::multiply(orig_img_linear, matrixMat, color_matrixed_linear);
OpenCV Error: Assertion failed (src2.type() == src1.type() && src2.size() == src1.size()) in multiply, file /Users/patrickcusack/Downloads/opencv-2.4.11/modules/gpu/src/element_operations.cpp, line 934
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/patrickcusack/Downloads/opencv-2.4.11/modules/gpu/src/element_operations.cpp:934: error: (-215) src2.type() == src1.type() && src2.size() == src1.size() in function multiply
Is there a way to accomplish what the CPU code does on the GPU?