I'm not sure what camera you're using, but with the Kinect v1.x the infrared image is greyscale and has a pixel type of unsigned short int (i.e. a 16-bit integer). This type of image is supported by OpenCV (as CV_16UC1), but it might not be supported by certain functions. One function that comes to mind is applyColorMap: it requires an input image of type CV_8UC1 or CV_8UC3, and it will give you an assertion failure if you pass in an image of type CV_16UC1.
Why don't you experiment with OpenCV and the CV_16UC1 format before you invest in a camera?
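If it helps, here is a minimal sketch of such an experiment (the frame is a dummy, randomly filled Mat standing in for real camera data, and the colormap choice is arbitrary): it rescales the 16-bit image down to 8 bits with normalize, after which applyColorMap accepts it.

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Dummy 16-bit, single-channel frame standing in for a real infrared image.
    Mat ir16(480, 640, CV_16UC1);
    randu(ir16, Scalar(0), Scalar(65535));

    // Rescale the 16-bit range down to 8 bits; applyColorMap accepts CV_8UC1.
    Mat ir8;
    normalize(ir16, ir8, 0, 255, NORM_MINMAX, CV_8UC1);

    // Passing ir16 directly would trigger the assertion failure mentioned above.
    Mat colored;
    applyColorMap(ir8, colored, COLORMAP_JET);

    imshow("colorized IR", colored);
    waitKey(0);
    return 0;
}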
Converting CV_16UC1 to CV_8UC3 can be done with code like:
Mat byte_frame_content(uint16_frame_content.rows, uint16_frame_content.cols, CV_8UC3);
for (int j = 0; j < uint16_frame_content.rows; j++)
{
    for (int i = 0; i < uint16_frame_content.cols; i++)
    {
        // Split each 16-bit pixel into its high and low bytes.
        // (UINT16 and BYTE are the Windows typedefs for 16-bit and 8-bit unsigned integers.)
        UINT16 a = uint16_frame_content.at<UINT16>(j, i);
        BYTE hi = static_cast<BYTE>(a >> 8);
        BYTE low = static_cast<BYTE>(a);

        // Store the two bytes in the first two channels; the third channel is unused.
        byte_frame_content.at<Vec3b>(j, i)[0] = hi;
        byte_frame_content.at<Vec3b>(j, i)[1] = low;
        byte_frame_content.at<Vec3b>(j, i)[2] = 0;
    }
}
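Note that this packs the raw high and low bytes into separate channels, so no information is lost, but the result looks like false colour rather than a plain greyscale image. If the goal is only to display or colour-map the data, rescaling the 16-bit values down to 8 bits (as in the sketch above) is usually simpler.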