I'm not sure what camera you're using, but with the Kinect v1.x the infrared image is greyscale and has a pixel type of unsigned short int (i.e. a 16-bit unsigned integer). This type of image is supported by OpenCV (as CV_16UC1), but not by every function. One function that comes to mind is applyColorMap: it requires the input image to be of type CV_8UC1 or CV_8UC3, and it will give you an assertion failure if you pass it an image of type CV_16UC1.
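
For example, a minimal sketch of how I'd feed such a frame to applyColorMap (assuming the frame is already in a CV_16UC1 Mat that I'll call ir16, and that using namespace cv is in effect, as in the code further down) is to rescale it to CV_8UC1 with convertTo first:

Mat ir8, colored;                               // names are just for this sketch
ir16.convertTo(ir8, CV_8UC1, 255.0 / 65535.0);  // linearly rescale 0..65535 down to 0..255
applyColorMap(ir8, colored, COLORMAP_JET);      // accepted now that the input is CV_8UC1

If the raw IR values only occupy a small part of the 16-bit range, using normalize with NORM_MINMAX instead of the fixed scale factor should give a more visible result.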

Why don't you experiment with OpenCV and the CV_16UC1 format before you invest in a camera?

Converting CV_16UC1 to CV_8UC3 can be done with code like the following:

// uint16_frame_content is the CV_16UC1 input frame;
// byte_frame_content receives the packed CV_8UC3 output.
Mat byte_frame_content(uint16_frame_content.rows, uint16_frame_content.cols, CV_8UC3);

for (int j = 0; j < uint16_frame_content.rows; j++)
{
    for (int i = 0; i < uint16_frame_content.cols; i++)
    {
        // Read the 16-bit pixel (unsigned short matches the CV_16UC1 element type).
        unsigned short a = uint16_frame_content.at<unsigned short>(j, i);

        // Split it into its high and low bytes and pack them into the first
        // two channels of the 8-bit image; the third channel stays at zero.
        unsigned char hi = static_cast<unsigned char>(a >> 8);
        unsigned char low = static_cast<unsigned char>(a);
        byte_frame_content.at<Vec3b>(j, i)[0] = hi;
        byte_frame_content.at<Vec3b>(j, i)[1] = low;
        byte_frame_content.at<Vec3b>(j, i)[2] = 0;
    }
}
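
If you only need something that imshow or applyColorMap will accept, and you don't care about keeping both bytes of each pixel, a shorter route I'd try (just a sketch along the same lines, not part of the code above; frame8 and frame8_bgr are names I made up) is to let convertTo and cvtColor do the work:

Mat frame8, frame8_bgr;
uint16_frame_content.convertTo(frame8, CV_8UC1, 1.0 / 256.0); // keep roughly the high byte of each pixel
cvtColor(frame8, frame8_bgr, COLOR_GRAY2BGR);                 // duplicate it into 3 channels (CV_8UC3)

This avoids the per-pixel loop entirely, at the cost of throwing away the low byte.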