
simozz's profile - activity

2021-06-22 08:00:26 -0600 received badge  Notable Question (source)
2020-02-11 08:50:27 -0600 received badge  Popular Question (source)
2019-10-29 05:26:04 -0600 received badge  Notable Question (source)
2019-09-26 16:43:06 -0600 received badge  Popular Question (source)
2018-12-14 14:01:53 -0600 received badge  Popular Question (source)
2017-09-11 16:42:25 -0600 marked best answer C++ - Y'CbCr422 to RGB - convert from raw data file

Hello,

I am trying to process YCrCb data from a file and convert it to RGB.
For this task I am using the following code:

#define DEFAULT_FRAME_Y 288
#define DEFAULT_FRAME_X 544

Mat ycbcrFrame = Mat::zeros(DEFAULT_FRAME_Y, DEFAULT_FRAME_X, CV_8UC3);
Mat rgbFrame = Mat::zeros(DEFAULT_FRAME_Y, DEFAULT_FRAME_X, CV_8UC3);

const char rawFileName[] = "data.raw";
// data.raw is 544 * 288 * 2 = 313344 bytes long
uint32_t rawSize = 2 * DEFAULT_FRAME_Y * DEFAULT_FRAME_X;

FILE *file = fopen(rawFileName, "rb"); // binary mode
if (file == NULL)
{
    cout << "Error opening " << rawFileName << endl;
    return 1;
}

fread(ycbcrFrame.data, sizeof(char), rawSize, file);
fclose(file);
cvtColor(ycbcrFrame, rgbFrame, CV_YCrCb2BGR);
imwrite("imageRGB.png", rgbFrame);

The good RGB image taken from the camera is the following:

image description

while the generated image is:

image description

The data.raw file has been generated from the following gst-launch pipeline execution:

gst-launch-1.0 -e v4l2src device=/dev/webcam ! videoconvert ! video/x-raw,width=544,height=288,framerate=10/1 ! multifilesink location=data.raw

while the v4l2-ctl --list-formats --device=/dev/webcam output is the following:

ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUYV 4:2:2

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : Motion-JPEG

What's wrong with my code, and how can I solve this problem?

EDIT: changing the code as follows:

#define DEFAULT_FRAME_Y 288
#define DEFAULT_FRAME_X 544

Mat ycbcrFrame = Mat::zeros(DEFAULT_FRAME_Y, DEFAULT_FRAME_X, CV_8UC2);
Mat rgbFrame = Mat::zeros(DEFAULT_FRAME_Y, DEFAULT_FRAME_X, CV_8UC3);

const char rawFileName[] = "data.raw";
// data.raw is 544 * 288 * 2 = 313344 bytes long
uint32_t rawSize = 2 * DEFAULT_FRAME_Y * DEFAULT_FRAME_X;

FILE *file = fopen(rawFileName, "rb"); // binary mode
if (file == NULL)
{
    cout << "Error opening " << rawFileName << endl;
    return 1;
}

fread(ycbcrFrame.data, sizeof(char), rawSize, file);
fclose(file);
cvtColor(ycbcrFrame, rgbFrame, CV_YUV2BGR_UYVY);
imwrite("imageRGB.png", rgbFrame);

produces this image:

image description

which is still wrong.

2017-09-11 10:49:24 -0600 commented question C++ - Y'CbCr422 to RGB - convert from raw data file

Solved. Starting from the last edited code, I must use the CV_YUV2BGR_YUY2 value in cvtColor.

2017-09-11 10:40:49 -0600 edited question C++ - Y'CbCr422 to RGB - convert from raw data file

C++ - Y'CbCr422 to RGB - convert from raw data file Hello, I am trying to process YCrCb data from file and convert it t

2017-09-11 09:54:04 -0600 edited question C++ - Y'CbCr422 to RGB - convert from raw data file

C++ - Y'CbCr422 to RGB - convert from raw data file Hello, I am trying to process YCrCb data from file and convert it t

2017-09-11 09:51:41 -0600 asked a question C++ - Y'CbCr422 to RGB - convert from raw data file

C++ - Y'CbCr422 to RGB - convert from raw data file Hello, I am trying to process YCrCb data from file and convert it t

2017-09-07 05:39:56 -0600 asked a question Doubt with OpenCV VideoCapture and gstreamer pipeline

Doubt with OpenCV VideoCapture and gstreamer pipeline Hello, To make VideoCapture be able to open a gstreamer pipeline

2017-08-29 03:58:04 -0600 asked a question OpenCV 3.3 - CMake cannot find ffmpeg libraries

OpenCV 3.3 - CMake cannot find ffmpeg libraries I am trying to cross compile opencv libraries with FFMPEG support but Op

2017-08-25 03:13:19 -0600 commented question OpenCV 3.3 - gstreamer and libva error

Good to know. I am recompiling it now. Do you know if I have to use a pipeline, as with gstreamer?

2017-08-25 02:33:30 -0600 commented question OpenCV 3.3 - gstreamer and libva error

Do you mean trying FFMPEG as an external program? I tried, and it generates the video, but I don't have enough space to store some GB of images to make a video of a bunch of MB. This is not the best solution. Perhaps using libav directly from the code would lead to better results.

2017-08-23 11:56:12 -0600 commented question OpenCV 3.3 - gstreamer and libva error

No. Sorry, but I am actually a noob with gstreamer and I don't know how to execute it (the complete command syntax). What is the command I have to try?

2017-08-23 07:56:20 -0600 commented question OpenCV 3.3 - gstreamer and libva error

Hello, vainfo actually returns the same error. Looking for the vaapi drivers, I can see that they are installed.

2017-08-22 07:05:41 -0600 asked a question OpenCV 3.3 - gstreamer and libva error

Hello, I am trying to use the H264 codec with the following initialization for VideoWriter:

VideoWriter writer = VideoWriter("file.mp4", CV_FOURCC('H', '2', '6', '4'), 15, Size(300, 500));

and I run into the following exception:

(myProg:20371): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
OpenCV Error: Unspecified error (GStreamer: cannot link elements
) in CvVideoWriter_GStreamer::open, file /path/to/opencv/sources/opencv-3.3/modules/videoio/src/cap_gstreamer.cpp, line 1626
VIDEOIO(cvCreateVideoWriter_GStreamer (filename, fourcc, fps, frameSize, is_color)): raised OpenCV exception:

/path/to/opencv/sources/opencv-3.3/modules/videoio/src/cap_gstreamer.cpp:1626: error: (-2) GStreamer: cannot link elements
 in function CvVideoWriter_GStreamer::open

The video file is not created either.
The command line I use to compile the OpenCV libraries is the following:

cmake   -D CMAKE_TOOLCHAIN_FILE=../platforms/linux/gnu.toolchain.cmake \
        -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules/ \
        -D CMAKE_INSTALL_PREFIX=${OPENCV_DIR} \
        -D WITH_JPEG=OFF \
        ../ && make -j 8

From what I understand after googling for a while, it could be an error related to the filename, but I cannot figure out what is really wrong with my code.

EDIT: The GStreamer error seems to be fixed with the following VideoWriter initialization:

VideoWriter writer = VideoWriter("appsrc ! autovideoconvert ! v4l2video1h264enc extra-controls=\"encode,h264_level=10,h264_profile=4,frame_level_rate_control_enable=1,video_bitrate=2000000\" ! h264parse ! rtph264pay config-interval=1 pt=96 ! filesink location=file.mp4", CV_FOURCC('H', '2', '6', '4'), 15, Size(300, 500));

but now I fall into libva error:

error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 0.36.0
libva info: va_getDriverName() returns -1
libva error: va_getDriverName() failed with unknown libva error,driver_name=(null)

What is the correct way to initialize the VideoWriter object for H264?

2017-08-10 03:02:19 -0600 asked a question Tracking closed contour surfaces

Hello,

In my application I use a standard way to get contours and display them on the frame, for example:

std::vector <std::vector <cv::Point>> contours;
std::vector <std::vector <cv::Point>> :: iterator itc;
// after some image processing code ...
findContours(contoursMask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
drawContours(outFrame, contours, -1, cv::Scalar(0,0,255), 2);

But I would like to track/enumerate each enclosed contour surface.
E.g., given this code, I would like to assign an identifier to each of the four (closed) contour surfaces.

Is there already some OpenCV feature for this task?

2017-04-15 08:59:07 -0600 commented question Unable to stop the stream: Inappropriate ioctl for device - ARM

Hello Steven, I tried enabling libv4l and not, but the problem persisted. I will try with ffmpeg support. Thank you.

2017-04-13 07:54:42 -0600 asked a question Unable to stop the stream: Inappropriate ioctl for device - ARM

Hi,

I am working on Linux systems with both ARM (arm32 and arm64) and x86/x86_64 architectures, and I am experiencing a problem when I try to open a video file (in this case an avi file), only on the ARM arch (both 32 and 64).

Once I try to open the file, the following error is fired up:

Unable to stop the stream: Inappropriate ioctl for device

Looking into the source code, it seems to be a problem related to libv4l, since the error message is emitted from at least one of these file:line locations:

modules/videoio/src/cap_v4l.cpp:1833 
modules/videoio/src/cap_libv4l.cpp:1879

Since the error is fired only on ARM systems (verified on a BeagleBone Black and a DragonBoard 410c), I understand this is a problem related to libv4l and not to the opencv libs. Or maybe it is just the interaction between the two under the ARM architecture (perhaps some macro is not properly defined?). I don't know.

Do you already know about this issue? If so, do you know a workaround for it?

Thank you.

2017-01-18 12:39:00 -0600 asked a question What is the most efficient way to read the frame rate from a camera ?

I need to process real-time images using my laptop camera and a Logitech C270 USB camera, and I need to determine the frame rate. For video files the solution is trivial, just use

VideoCapture capture;
capture.get(CV_CAP_PROP_FPS)

For cameras, it seems not as easy. The program I am writing will run on a Linux platform, and the solution I wrote and am testing is this one (abridged):

bool initTimeout;
bool fpsTimeout;

void AlarmHandler(int sig)
{
    if (initTimeout)
        initTimeout = false;
    else
        fpsTimeout = false;
}

int main(int argc, char *argv[])
{
    std::string inputFilename = argv[1];
    Mat currentFrame;

    // frame rate variable
    uint32_t fps = 0;

    // install the handler: without it, the default SIGALRM action kills the process
    signal(SIGALRM, AlarmHandler);

    VideoCapture capture;
    capture.open(inputFilename);

    if (inputFilename.find("/dev/video") != std::string::npos) 
    {
        fps = 0;
        initTimeout = true;
        fpsTimeout = true;

        cout << "Data input from a camera. Initialisation timeout" << endl;
        alarm(5);
        while (initTimeout)
            capture >> currentFrame;

        cout << "Calculating framerate ... " << endl;
        alarm(1);
        while (fpsTimeout)
        {
            capture >> currentFrame;
            if(! currentFrame.empty())
                fps++;
        }
    }
}

I implemented the initialisation timeout because I noticed that the camera needs a frame flush before the timeout used for the frame-rate measurement.

Comparing the results with VLC player, it gives 30 frames per second for the laptop camera and 25 for the USB camera.

My program sometimes agrees with VLC, sometimes it doesn't, returning different results (16 for the USB camera, 31 for the laptop camera etc.).

Do you know a more efficient way to read the frame rate from a video camera?

2017-01-17 10:46:15 -0600 commented answer Put two or more frames in one output window

I modified the previous comment. Thank you.

2017-01-17 10:34:37 -0600 commented answer Put two or more frames in one output window

Yes, problem solved for that, thanks. But the foreground and postfgmask are shown entirely black in merged, while they are shown correctly in a separate imshow window. They are, respectively, a binary image and a grayscale image obtained after GaussianBlur filtering. I tried to convert them to CV_8UC3, but these are always shown black (all 0).

2017-01-17 09:55:17 -0600 commented answer Put two or more frames in one output window

Hi, thank you for your help, but changing the code as suggested (which of course is correct, as you write) does not change the situation. Same error.

2017-01-17 08:10:14 -0600 asked a question Put two or more frames in one output window

I have 4 frames and I would like to put them all in one window.

I am following this tutorial, which should be perfect for my objective, but my code is not working (I don't know whether the blogger's code works).

This is the code I am working on (abridged):

VideoCapture capture;
Mat currentFrame;
Mat foreground;
Mat postfgmask;
Mat resultframe;

Size imgFixedSize = Size(500, 350);

Mat roi;
Mat merged = Mat(Size(imgFixedSize*height*2, imgFixedSize*width*2), CV_8UC3);

/* initialize capture, not shown .. */ 

while (true)
{
    capture >> currentFrame;
    resize(currentFrame, currentFrame, imgFixedSize);

    /* 
        image processing tasks, not shown .. 
        all frames are Size(500, 350) ..
    */ 

    roi = Mat(merged, Rect(0, 0, 500, 350));
    currentFrame.copyTo(roi);

    roi = Mat(merged, Rect(500, 0, 500, 350));
    foreground.copyTo(roi);

    roi = Mat(merged, Rect(0, 350, 500, 350));
    postfgmask.copyTo(roi);

    roi = Mat(merged, Rect(500, 350, 500, 350));
    resultframe.copyTo(roi);

    imshow("Output", merged);
    writer.write(merged);
}

But when I execute the program, it crashes, giving this error output:

OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /home/user/opencv/modules/core/src/matrix.cpp, line 522
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/user/opencv/modules/core/src/matrix.cpp:522: error: (-215) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function Mat

Aborted

It seems that the error is due to the frame sizes. But the dimensions are quite clear, and the output window is doubled in size so it can fit 4 frames.

So how do I have to change my code to make it work?

2017-01-17 07:54:59 -0600 marked best answer what is the role of mask passed to calcHist argument ?

I have a doubt regarding the calcHist function:

From here I read:

"Optional mask. If the matrix is not empty, it must be an 8-bit array of the same size as images[i] . The non-zero mask elements mark the array elements counted in the histogram."

It is not clear to me how the mask can be used.

2017-01-17 07:54:34 -0600 marked best answer Stauffer & Grimson algorithm

Looking at the original paper (http://www.ai.mit.edu/projects/vsam/P...), and at this page http://docs.opencv.org/trunk/db/d5c/t..., the question is:

Is there already an implementation of the Stauffer & Grimson algorithm in OpenCV?

2017-01-17 07:54:18 -0600 marked best answer Problem setting pixel value

I know this is a well-known topic, but with all the solutions I tried (most of them from Stack Overflow Q&A), I cannot set a pixel value as I want.

Given this code:

Mat sgWs = Mat(frameFixedSize.height * frameFixedSize.width, nColumns, CV_8UC1);
for(uint32_t x = 0; x < (uint32_t) frameFixedSize.height; x++)
{
    for(uint32_t y = 0; y < (uint32_t) frameFixedSize.width; y++)
    {
        if (!y)
            sgWs.at<uchar>(x, y) = 1;
    }
}

the pixel values are not set to one. The instruction:

if (!y)
     sgWs.at<cv::Vec3b>(y,x)[0] = 1;

does not work either. So how do I have to change my code to make it work?

2017-01-17 07:54:02 -0600 marked best answer Shapes processing and tracking in C++

Until now I can draw shapes from a binary image maskFrame into lastFrame as follows:

cv::Mat lastFrame, maskFrame;
std::vector <std::vector <cv::Point>> contours;
// frame processing ... 
findContours(maskFrame, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
drawContours(lastFrame, contours, -1, cv::Scalar(0), 2);

which should be the standard way.

Now, I would need to track these shapes for at least two frames. If I have a shape in frame(n), I would like to know whether this shape is also present in frame(n - 1), using some information I could get, such as the area of the shape and its position inside the frame.

Is it possible to achieve this with some other class?