# Captured frame RGB to YUV and Reverse

I have this old code that was used to convert RGB to YUV, but it doesn't work for me. I am using C++, Visual Studio 2013, and OpenCV. I want to capture the RGB frame, convert it to YUV, send the values of the YUV arrays through a buffer, and do the opposite on the receive side, as in the code below. But it doesn't convert correctly, and there is an error near the send function (see the comment).

It would be great if anyone with good OpenCV knowledge could suggest a solution. Anything can be changed except the sendFrame and recieveFrame functions, which need to get this data through a buffer; I am not expert enough to make it work well.

```cpp
#define CLIP(X) ((X) > 255 ? 255 : (X) < 0 ? 0 : X)

// RGB -> YUV
#define RGB2Y(R, G, B) CLIP((( 66 * (R) + 129 * (G) +  25 * (B) + 128) >> 8) +  16)
#define RGB2U(R, G, B) CLIP(((-38 * (R) -  74 * (G) + 112 * (B) + 128) >> 8) + 128)
#define RGB2V(R, G, B) CLIP(((112 * (R) -  94 * (G) -  18 * (B) + 128) >> 8) + 128)

// YUV -> RGB
#define C(Y) ((Y) - 16  )
#define D(U) ((U) - 128 )
#define E(V) ((V) - 128 )

#define YUV2R(Y, U, V) CLIP((298 * C(Y) + 409 * E(V) + 128) >> 8)
#define YUV2G(Y, U, V) CLIP((298 * C(Y) - 100 * D(U) - 208 * E(V) + 128) >> 8)
#define YUV2B(Y, U, V) CLIP((298 * C(Y) + 516 * D(U) + 128) >> 8)
```

```cpp
//////////// SEND FRAME
#define FRAME_WIDTH 640
#define FRAME_HEIGHT 480
Mat CamFrame;

VideoCapture CaptureCam(0);
CaptureCam.set(CV_CAP_PROP_FRAME_WIDTH, FRAME_WIDTH);
CaptureCam.set(CV_CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT);
if (!CaptureCam.isOpened()){
    cout << "could not open camera" << endl;
}
namedWindow("Captures Cam", CV_WINDOW_AUTOSIZE);

while (TestingCLoop){
    CaptureCam >> CamFrame;
    if (!CamFrame.empty()){
        IplImage *frame = new IplImage(CamFrame);
        int32_t strides = { 1280, 640, 320 };
        uint8_t *planes = {
            (uint8_t *)malloc(frame->height * frame->width),
            (uint8_t *)malloc(frame->height * frame->width / 4),
            (uint8_t *)malloc(frame->height * frame->width / 4),
        };

        int x_chroma_shift = 1;
        int y_chroma_shift = 1;

        int x, y;
        for (y = 0; y < frame->height; ++y) {
            for (x = 0; x < frame->width; ++x) {
                uint8_t r = frame->imageData[(x + y * frame->width) * 3 + 0];
                uint8_t g = frame->imageData[(x + y * frame->width) * 3 + 1];
                uint8_t b = frame->imageData[(x + y * frame->width) * 3 + 2];

                planes[x + y * strides] = RGB2Y(r, g, b); ///////// IT HAS ERROR ON THIS "EMPTY"

                if (!(x % (1 << x_chroma_shift)) && !(y % (1 << y_chroma_shift))) {
                    const int i = x / (1 << x_chroma_shift);
                    const int j = y / (1 << y_chroma_shift);
                    planes[i + j * strides] = RGB2U(r, g, b);
                    planes[i + j * strides] = RGB2V(r, g, b);
                }
            }
        }
        sendFrame(frame->width, frame->height, planes, planes, planes);
    }
}
```

```cpp
////////////////////////// RECIEVE FRAME
recieveFrame(uint16_t width, uint16_t height, uint8_t const *y, uint8_t const *u, uint8_t const *v, int32_t ystride, int32_t ustride, int32_t vstride){

    ystride = abs(ystride);
    ustride = abs(ustride);
    vstride = abs(vstride);

    uint16_t *img_data = (uint16_t *)malloc(height * width * 6);
    unsigned long int i, j;
    for (i = 0; i < height; ++i) {
        for (j = 0; j < width; ++j ...
```

Is there a reason you aren't using cvtColor?

While you're at it, I'd replace the IplImage with Mat. This code is simple enough that it would be easy.

@Tetragramm thanks for the response. I have tried cvtColor(CamFrame, CamFrame, COLOR_BGR2YUV);, but how do I get the y, u, v values with the strides out of it? The planes array stores the RGB-to-YUV values with the strides, and each of them needs to be sent individually. How can I get these values from the cvtColor output Mat?

Let's start at the beginning. Why are you converting to YUV just to undo it again?

This is part of the code, not the full code. I need two parts: one that converts the VideoCapture frames from the camera to YUV and sends them as y, u, v with strides through a buffer, and one that receives the y, u, v, ystride, ustride, vstride buffers, converts them to RGB, and shows the result in a window with imshow("Recieved Cam", img) or cvShowImage(...), for example. To make it clearer: I need to give sendFrame() and recieveFrame() the values they require.

Right, other libraries or old code is making it difficult. Not surprising.

So, you need to do three things. First, convert to YUV. That's simple. The output has YUV each as a channel.

To access them individually, you need to split the Mat into separate channels.

Then you merely access the members, and there you are.

To put them back together, you reverse the process. First map the pointers into Mats.

Then merge the Mats into one Mat, and convert back to BGR.

```cpp
Mat CamFrame;
// Fill CamFrame.

cvtColor(CamFrame, CamFrame, COLOR_BGR2YUV);
// The Y, U, V values are each a channel of the Mat, ie: YUVYUVYUV

vector<Mat> YUV; // A vector of Mats, to hold the split channels.
split(CamFrame, YUV);
// YUV is now of size 3: YUV[0] holds just the Y values, YUV[1] just U, etc.

YUV[0].ptr<uchar>(); // Y buffer.
YUV[1].ptr<uchar>(); // U buffer.
YUV[2].ptr<uchar>(); // V buffer.
YUV[0].step;         // Y stride.
// etc...
// sendFrame

// At the other end
vector<Mat> YUV(3);
YUV[0] = Mat(height, width, CV_8UC1, (void *)y, ystride); // Y Mat
// etc.

Mat img; // image buffer.
merge(YUV, img);
cvtColor(img, img, COLOR_YUV2BGR);
imshow("Captured Cam", img);
waitKey(0);
```


Thanks for the code, because I couldn't find how the split Mat function works. I still don't understand how I should manage the y, u, v buffers, though. Do I have to place them in a for loop as above and copy them into a plane? Sending y, u, v as-is gives me a crash because of the wrong values I send :/ If you have any good tutorial I could learn from, or code that converts them the way the code above did, it would be much appreciated.
