Hi All,
I've got a very basic example that grabs frames from a camera, uploads them to a GpuMat, and displays them in a namedWindow created with the CV_WINDOW_OPENGL flag. The code below works as expected, but I don't understand why I have to perform the data type conversion on the host Mat first.
#include <opencv2/core/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main() {
    using namespace cv;
    using cv::cuda::GpuMat;

    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    // Create an OpenGL-backed window
    namedWindow("webcam", CV_WINDOW_OPENGL);

    // GPU mat to display
    GpuMat g;
    // Host frame buffer
    Mat frame;

    bool grabFrame = true;
    while (grabFrame) {
        // Grab a frame from the camera
        cap >> frame;
        // Why is this line necessary?
        frame.convertTo(frame, CV_32F);
        // Upload to the GPU
        g.upload(frame);
        // Convert to normalized float
        g.convertTo(g, CV_32F, 1.f / 255);
        // Show in the OpenGL window
        imshow("webcam", g);
        // Quit on any key press
        if (cv::waitKey(30) >= 0)
            grabFrame = false;
    }
    return 0;
}
If I comment out that line and try to perform the conversion and the division in a single step on the GPU, I get a black image. In other words, this version of the loop body fails:
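g.upload(frame);                   // frame is still CV_8UC3 here
g.convertTo(g, CV_32F, 1.f / 255); // black image in the window
I thought it could be the order in which the operations happen (i.e. the scaling being applied before the conversion), so I tried splitting it into two calls: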
g.convertTo(g, CV_32F);
g.convertTo(g, CV_32F, 1.f / 255);
but had no luck. Checking the type of the frames the camera yields returns 16, which corresponds to CV_8UC3 (a 3-channel, single-byte image). I tried replacing CV_32F with CV_32FC3, but it made no difference. The thing that breaks it seems to be the type conversion itself.
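For reference, the type check is just something like this, printed from inside the loop:
std::cout << frame.type() << std::endl; // prints 16, i.e. CV_8UC3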
Are there limits on the data types we can convert on the GPU?