2016-09-14 00:28:12 -0600 | commented answer | YUV422 Packed format scaling Thanks a lot for your answer. A 100x100 RGB image indeed has 3x100x100 bytes, and converting it to YUV (4:4:4) still gives 3x100x100 bytes. When it is instead packed as YUY2 (aka YUYV), the result is 2x100x100 bytes, since each group of four bytes covers two pixels. Doubling the image horizontally and vertically then requires 2x(2x100)x(2x100) bytes. How do you get 150x100 and consequently 300x200? Also, how do you plot this newImg? |
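The byte accounting in the comment above can be checked directly; the key point is that packed YUYV stores four bytes for every two pixels, i.e. 2 bytes per pixel, not 4 (a quick pure-Python sketch; the variable names are illustrative):

```python
# Buffer sizes for a 100x100 image in the layouts discussed above
# (all sizes in bytes, assuming 8 bits per sample).
w, h = 100, 100

bgr_bytes    = 3 * w * h   # BGR: 3 bytes per pixel
yuv444_bytes = 3 * w * h   # YUV 4:4:4 (what CV_BGR2YUV produces): 3 bytes per pixel
yuyv_bytes   = 2 * w * h   # YUYV 4:2:2 packed: 4 bytes per 2 pixels = 2 bytes per pixel

# Doubling both dimensions quadruples the buffer; the layout is unchanged.
yuyv_2x_bytes = 2 * (2 * w) * (2 * h)

print(bgr_bytes, yuv444_bytes, yuyv_bytes, yuyv_2x_bytes)
# 30000 30000 20000 80000
```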
2016-09-13 13:44:28 -0600 | received badge | ● Editor |
2016-09-13 13:30:08 -0600 | asked a question | YUV422 Packed format scaling I am writing a scaling algorithm for YUV422 packed format images (without any intermediate conversion to RGB, grayscale, or the like). As can be seen in the image below from MSDN, the 4:2:2 format carries two luma (Y) samples for every U/V pair, so each pair of horizontally adjacent pixels shares one chroma sample. My test bench involves capturing images from the iSight camera using OpenCV APIs, converting them to YUV (CV_BGR2YUV), and then resizing them. The questions I have are:
Any references to principles of performing bilinear interpolation on YUYV images would be very helpful! Thanks in advance. |
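One workable approach to the bilinear-interpolation question above (a sketch under stated assumptions, not OpenCV's implementation; `bilinear_resize` and `scale_yuyv` are illustrative names) is to unpack the packed buffer into a full-resolution Y plane and half-width U and V planes, resize each plane bilinearly, and repack:

```python
def bilinear_resize(plane, new_w, new_h):
    """Bilinearly resize a 2D list of samples (rows x cols) to new_w x new_h."""
    old_h, old_w = len(plane), len(plane[0])
    out = []
    for j in range(new_h):
        # Map the output coordinate back into the source plane.
        fy = j * (old_h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(fy); y1 = min(y0 + 1, old_h - 1); wy = fy - y0
        row = []
        for i in range(new_w):
            fx = i * (old_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(fx); x1 = min(x0 + 1, old_w - 1); wx = fx - x0
            top = plane[y0][x0] * (1 - wx) + plane[y0][x1] * wx
            bot = plane[y1][x0] * (1 - wx) + plane[y1][x1] * wx
            row.append(int(round(top * (1 - wy) + bot * wy)))
        out.append(row)
    return out

def scale_yuyv(buf, w, h, new_w, new_h):
    """buf: flat YUYV bytes (Y0 U Y1 V ...); w and new_w must be even.
    Row stride is 2*w bytes; each 4-byte macropixel covers 2 pixels."""
    ys = [[buf[r * 2 * w + 2 * c] for c in range(w)] for r in range(h)]
    us = [[buf[r * 2 * w + 4 * m + 1] for m in range(w // 2)] for r in range(h)]
    vs = [[buf[r * 2 * w + 4 * m + 3] for m in range(w // 2)] for r in range(h)]

    ys2 = bilinear_resize(ys, new_w, new_h)
    us2 = bilinear_resize(us, new_w // 2, new_h)   # chroma stays half-width
    vs2 = bilinear_resize(vs, new_w // 2, new_h)

    out = []
    for r in range(new_h):
        for m in range(new_w // 2):
            out += [ys2[r][2 * m], us2[r][m], ys2[r][2 * m + 1], vs2[r][m]]
    return out
```

Resizing the chroma planes at half width keeps the 4:2:2 subsampling intact, so no round trip through RGB is needed; a real implementation would vectorize this (e.g. with numpy slicing) rather than loop per sample.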
2016-09-13 00:28:34 -0600 | commented answer | RGB to YUV color space conversion for an image Isn't it true that the BGR image is 3-channel, 24 bits per pixel, and padded with an additional 8 bits to make it 32 bits per pixel? How is the YCrCb image represented? It would also be 3-channel (one for Y, one for Cr, and one for Cb), but would it be 24 bits per pixel or 32? Which member variables of the converted_image matrix would differ from those of the original_image matrix? |
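A small numpy sketch of the representation question above: OpenCV's default 8-bit, 3-channel Mat is exactly 24 bits per pixel with no per-pixel padding (rows can be padded via `step`, but arrays from `cv2.imread` are dense), and `cv2.cvtColor(..., cv2.COLOR_BGR2YCrCb)` returns an array with the same shape, dtype, and size; only the pixel values change. `np.empty_like` stands in for the cvtColor result here so the example needs no camera or cv2 install:

```python
import numpy as np

h, w = 4, 6
bgr = np.zeros((h, w, 3), dtype=np.uint8)   # an 8-bit BGR image, as cv2.imread returns it
ycrcb = np.empty_like(bgr)                  # stand-in for cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb):
                                            # same shape and dtype, only pixel values differ

bits_per_pixel = bgr.itemsize * bgr.shape[2] * 8
print(bgr.shape, ycrcb.shape, bits_per_pixel)   # (4, 6, 3) (4, 6, 3) 24 -- no 32-bit padding
```

So between original_image and converted_image, the fields describing layout (rows, cols, type, channels, elemSize) are identical; only the data buffer contents differ.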