
rbaleksandar's profile - activity

2020-11-06 00:25:46 -0600 received badge  Notable Question (source)
2018-03-26 11:34:06 -0600 received badge  Popular Question (source)
2016-08-09 08:32:55 -0600 received badge  Necromancer (source)
2016-06-02 01:13:25 -0600 received badge  Scholar (source)
2016-06-02 01:13:20 -0600 commented answer OpenCV 3.1 build from source - core test fails for 3 tests

Thanks for the info. Yeah, I guess I'll have to stick to C++ with Caffe. Will also give the Matlab bindings a go. Hope it works.

2016-06-01 16:29:48 -0600 asked a question OpenCV 3.1 build from source - core test fails for 3 tests

Hi!

I've just finished compiling OpenCV 3.1 on my 64bit Debian Jessie with a lot of features enabled (Qt, OpenGL, Matlab (I have R2014), Python 2 and 3, Tesseract, Ceres etc.). Now I've started running the tests. Sadly the very first test binary already has failures (I haven't continued with the rest since I'd like to resolve the issues one by one). The failing tests in opencv_test_core are:

  • Core_globbing.accuracy - I've read somewhere that this fails due to a missing input image. Is that correct?
  • hal_intrin.float32x4 - in cmake-gui I get OpenCV_HAL_DIR-NOTFOUND along with empty OPENCV_HAL_HEADERS and OPENCV_HAL_LIBS, so my guess would be that this is the reason. I have no idea how to enable HAL support though. Any pointers on how to do that?
  • hal_intrin.float64x2 - same as with hal_intrin.float32x4

Just a side-note which concerns Caffe - has anyone managed to build OpenCV 3.1 with Caffe support? I tried to build Caffe but it fails for both Python 2.7 and 3.4 (the versions I have) because python-dateutils is at 2.5 (or something similar) while the requirement is a version >=1.4 and <2. Am I supposed to handle even such low-level dependencies?!?

2015-12-03 16:43:59 -0600 received badge  Supporter (source)
2015-12-03 16:43:34 -0600 commented answer Bindings to access openCV from MATLAB do not generate

This is the answer! It works on Linux too! Thanks a lot and +1 from me.

2015-11-29 22:50:31 -0600 asked a question OpenCV 3.0.0 - Cmake fails to detect Python 3

Old problem, yet I ran into it just now. On my Debian Jessie the build process went as smoothly as one can hope for. On Ubuntu 14.04 - epic fail. Even after setting the paths manually I still don't get Python 3 support (I have Python 3.4m). It might be due to this bug, which was supposedly fixed. I have cloned the latest stable from the git repository, so I should have gotten the fix that was posted 7 months ago. However, the exact same behaviour described by the person who reported the bug happens to me too (except that in my case PYTHON instead of PYTHON2 is doing the deed). I have 3 Python groups in cmake-gui:

  • PYTHON2 - here everything is detected properly
  • PYTHON3 - here I had to manually add the required paths
  • PYTHON - I suppose this should point to PYTHON3, but who knows... I tried entering the exact same paths for PYTHON_INCLUDE_DIR, PYTHON_LIBRARY and PYTHON_LIBRARY_RELEASE (I presume this should contain the same value as PYTHON_LIBRARY?) that I used for my PYTHON3 setup.

cmake-gui keeps alternating between showing all components of PYTHON and hiding them whenever I click the Generate button. I have no idea what I'm doing wrong.

Upon executing cmake from the terminal (and also passing the above-mentioned flags for PYTHON, like PYTHON_LIBRARY etc.) I get the following message:

Could NOT find PythonLibs (missing:  PYTHON_INCLUDE_DIRS) (found suitable exact version "3.4.3")

The funny thing is that inside the CMakeLists.txt I am unable to find any trace of this PYTHON but only of PYTHON2 and PYTHON3.

Any suggestions? I would really want to work with Python 3 instead of Python 2.7.

2014-11-22 04:23:10 -0600 asked a question Linux and multiple USB webcams cause reduced frame resolution and v4l2 error

I have two Logitech Pro 9000 webcams. I have discovered a strange behaviour in cv::VideoCapture::set() when setting the frame size (width and height) for my captures, resulting in the infamous error

libv4l2: error turning on stream: No space left on device
VIDIOC_STREAMON: No space left on device
ERROR: Could not read from video stream

for my second camera. In order to fix it I have to reduce my frame size to almost half of the initial one (a possible workaround is sketched at the end of this post). Now here is the interesting thing:

  • Version 1 (without using cv::VideoCapture::set()) - I manage to get both cameras up and running at 15fps (I tried 20fps but then I get the error mentioned above) with a resolution of 640x480, which seems to be a sort of hidden default for these cameras if you don't specify the frame size (I was unable to find where this default is set in the source code of cv::VideoCapture). The two values are retrieved using cv::VideoCapture::get(CV_CAP_PROP_FRAME_WIDTH) and cv::VideoCapture::get(CV_CAP_PROP_FRAME_HEIGHT) respectively. Here is a small example:

    // The indices are 1 and 2 since 0 is my built-in webcam (I'm using a notebook)
    cv::VideoCapture cap1(1);
    cv::VideoCapture cap2(2);
    
    if(!cap1.isOpened())
    {
      std::cout << "Cannot open the video cam [1]" << std::endl;
      return -1;
    }
    
    if(!cap2.isOpened())
    {
      std::cout << "Cannot open the video cam [2]" << std::endl;
      return -1;
    }
    
    // Set both cameras to 15fps
    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FPS, 15);
    
    double dWidth1 = cap1.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight1 = cap1.get(CV_CAP_PROP_FRAME_HEIGHT);
    double dWidth2 = cap2.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight2 = cap2.get(CV_CAP_PROP_FRAME_HEIGHT);
    
    // Here I display the frame size that OpenCV has picked for me - it is 640x480 for both cameras
    std::cout << "cam[1] Frame size: " << dWidth1 << " x " << dHeight1 << std::endl;
    std::cout << "cam[2] Frame size: " << dWidth2 << " x " << dHeight2 << std::endl;
    cv::namedWindow("cam[1]",CV_WINDOW_AUTOSIZE);
    cv::namedWindow("cam[2]",CV_WINDOW_AUTOSIZE);
    
    while(1)    
    {    
      cv::Mat frame1, frame2;    
      bool bSuccess1 = cap1.read(frame1);    
      bool bSuccess2 = cap2.read(frame2);
    
      if (!bSuccess1)
      {
        std::cout << "Cannot read a frame from video stream [1]" << std::endl;
        break;
      }
    
      if (!bSuccess2)
      {
        std::cout << "Cannot read a frame from video stream [2]" << std::endl;
        break;
      }
    
      cv::imshow("cam[1]", frame1);
      cv::imshow("cam[2]", frame2);
    
      if(cv::waitKey(30) == 27)
      {
        std::cout << "ESC key is pressed by user" << std::endl;
        break;
      }
    }
    

This example is working without any issues.

  • Version 2 (using cv::VideoCapture::set()) - if I take the exact same values that I retrieve using cv::VideoCapture::get() and use them with cv::VideoCapture::set() to set up the exact same parameters, the above-mentioned error occurs:

    cv::VideoCapture cap1(1);
    cv::VideoCapture cap2(2);
    
    if(!cap1.isOpened())
    {
      std::cout << "Cannot open the video cam [1]" << std::endl;
      return -1;
    }
    
    if(!cap2.isOpened())
    {
      std::cout << "Cannot open the video cam [2]" << std::endl;
      return -1;
    }
    
    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FPS, 15);
    
    // Values taken from output of Version 1 and used to setup the exact same parameters with the exact same values!
    cap1 ...
(more)
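
One workaround I have seen suggested for the "No space left on device" error above (I have not verified it with these cameras) is to request a compressed pixel format, so that two streams fit into one USB controller's bandwidth budget. A minimal sketch, assuming the backend in use honours CV_CAP_PROP_FOURCC, which not every OpenCV version/backend does:

    // Hypothetical workaround sketch (untested here): ask the driver for MJPEG
    // so that both cameras fit into the USB controller's bandwidth budget.
    #include <opencv2/highgui/highgui.hpp>
    #include <iostream>

    int main()
    {
      cv::VideoCapture cap1(1);
      cv::VideoCapture cap2(2);
      if(!cap1.isOpened() || !cap2.isOpened())
      {
        std::cout << "Cannot open one of the cameras" << std::endl;
        return -1;
      }

      // Request a compressed stream instead of raw YUYV (assumption: the backend supports this)
      cap1.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M','J','P','G'));
      cap2.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M','J','P','G'));

      // Keep the framerate modest, as in the examples above
      cap1.set(CV_CAP_PROP_FPS, 15);
      cap2.set(CV_CAP_PROP_FPS, 15);

      cv::Mat frame1, frame2;
      while(cap1.read(frame1) && cap2.read(frame2))
      {
        cv::imshow("cam[1]", frame1);
        cv::imshow("cam[2]", frame2);
        if(cv::waitKey(30) == 27) break;
      }
      return 0;
    }
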
2014-11-13 16:04:53 -0600 asked a question Multiple cameras and using set() for adjusting frame's width and height result in libv4l2 error

Hello!

I have two Logitech Pro 9000 webcams connected to my 64bit Debian notebook. I want to do some stereo vision and so far it is working great (stereoCalibrate etc.). As you know, if the data throughput is too high (resolution and framerate) you get (especially when using USB cameras) the infamous error:

libv4l2: error turning on stream: No space left on device
VIDIOC_STREAMON: No space left on device
ERROR: Could not read from video stream

I have discovered that for some reason

cv::VideoCapture::set(CV_CAP_PROP_FRAME_HEIGHT, some_value)

and

cv::VideoCapture::set(CV_CAP_PROP_FRAME_WIDTH, some_value)

don't seem to work. If I run two VideoCapture-s simultaneously, the default resolution for these two cameras is 640x480, as reported by the cv::VideoCapture::get() equivalents of the two methods mentioned above. I have to limit the framerate to 15 in order to avoid that libv4l2-related error. Below are two versions of a very simple program: in both I set the framerate to 15, and in the second version I also set the exact same resolution that cv::VideoCapture::get(CV_CAP_PROP_FRAME_WIDTH/HEIGHT) reports in the first version:

  • Version 1 (working, no manual setup of the resolution, only framerate):

    VideoCapture cap1(1);
    VideoCapture cap2(2);
    
    if(!cap1.isOpened())
    {
      cout << "Cannot open the video cam [1]" << endl;
      return -1;
    }
    
    if(!cap2.isOpened())
    {
      cout << "Cannot open the video cam [2]" << endl;
      return -1;
    }
    
    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FPS, 15);
    
    double dWidth1 = cap1.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight1 = cap1.get(CV_CAP_PROP_FRAME_HEIGHT);
    double dWidth2 = cap2.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight2 = cap2.get(CV_CAP_PROP_FRAME_HEIGHT);
    
    cout << "cam[1] Frame size: " << dWidth1 << " x " << dHeight1 << endl;
    cout << "cam[2] Frame size: " << dWidth2 << " x " << dHeight2 << endl;
    namedWindow("cam[1]",CV_WINDOW_AUTOSIZE);
    namedWindow("cam[2]",CV_WINDOW_AUTOSIZE);
    
    while(1)    
    {    
      Mat frame1, frame2;    
      bool bSuccess1 = cap1.read(frame1);    
      bool bSuccess2 = cap2.read(frame2);
    
      if (!bSuccess1)
      {
        cout << "Cannot read a frame from video stream [1]" << endl;
        break;
      }
    
      if (!bSuccess2)
      {
        cout << "Cannot read a frame from video stream [2]" << endl;
        break;
      }
    
      cv::addText(frame1, "cam[1]", Point2f(10,25), cv::fontQt("FONT_HERSHEY_SCRIPT_SIMPLEX", 15, cv::Scalar(255,0,0)));
      cv::addText(frame2, "cam[2]", Point2f(10,25), cv::fontQt("FONT_HERSHEY_SCRIPT_SIMPLEX", 15, cv::Scalar(255,0,0)));
      imshow("cam[1]", frame1);
      imshow("cam[2]", frame2);
    
      if(waitKey(30) == 27)
      {
        cout << "ESC key is pressed by user" << endl;
        break;
      }
    }
    
  • Version 2 (not working; manually setting the framerate and the resolution results in the error I mentioned at the beginning of my question once I try to capture from the second camera):

    VideoCapture cap1(1);
    VideoCapture cap2(2);
    
    if(!cap1.isOpened())
    {
      cout << "Cannot open the video cam [1]" << endl;
      return -1;
    }
    
    if(!cap2.isOpened())
    {
      cout << "Cannot open the video cam [2]" << endl;
      return -1;
    }
    
    cap1.set(CV_CAP_PROP_FPS, 15);
    cap2.set(CV_CAP_PROP_FPS, 15);
    
    // Values taken from output of Version 1
    cap1.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    cap2.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 480);    
    
    double dWidth1 = cap1.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight1 = cap1.get(CV_CAP_PROP_FRAME_HEIGHT);
    double dWidth2 = cap2.get(CV_CAP_PROP_FRAME_WIDTH ...
(more)
2014-11-13 14:22:40 -0600 commented answer opencv2/core/ultility.hpp not found

You do realize that sometimes an upgrade is not possible/desirable, right?

2014-11-13 14:20:47 -0600 commented answer opencv2/core/ultility.hpp not found

You do realize that sometimes an upgrade of the system is not desired, right? :)

2014-10-22 15:49:02 -0600 received badge  Student (source)
2014-08-26 08:56:48 -0600 received badge  Necromancer (source)
2014-08-26 08:47:32 -0600 answered a question jar file not found after in build/lib at opevcv-2.4.9

I only know where opencv-249.jar is located if you have built from source. It is inside BUILD_PATH/bin, where BUILD_PATH is the folder in which you have compiled your binaries. For some unknown reason, when invoking the make install command, this file is simply left there and not copied to any of the usual library folders. I have no idea why that is, but a fact is a fact. I tried to find the JAR file anywhere on my system after the installation was complete, using both find and locate, but without success. It was only found inside the BUILD_PATH folder.

2014-08-01 05:26:41 -0600 received badge  Necromancer (source)
2014-08-01 05:24:58 -0600 commented answer Creating a panorama from multiple images. How to reduce calculation time?

I've read it along with a lot of others but thanks for the heads-up. :)

2014-08-01 05:22:34 -0600 answered a question problem install opencv on linux mandriva

Although I found this way too late, I will give my two cents on this. Basically there are two types of packages no matter which Linux distro you are using - the normal ones, which are required when you run an application, and the development (<package-name>-dev) ones, which are required when you build an application. In your case you are missing all the dev-packages for the corresponding libraries. The dev-packages also provide the headers, which are needed when invoking cmake or whatever build tool you are using.

2014-06-13 10:32:26 -0600 commented answer how to remove trackbar from opencv image window?

@David Jhones, what do you mean by "visible" property? I've looked in the HighGUI reference and the only things associated with a trackbar are cv::createTrackbar, cv::getTrackbarPos and cv::setTrackbarPos. As far as I know you cannot change anything but the state (as in slider value, button value) of a GUI element in HighGUI. Please explain. Creating a named window inside the infinite loop and each time adding a new trackbar to it is a possible solution, but it's quite ugly.
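
To make that "quite ugly" workaround concrete, here is a minimal sketch (window and trackbar names are just for illustration): since HighGUI has no call for removing a trackbar, the window itself is destroyed and recreated without it.

    // Sketch of the destroy-and-recreate workaround mentioned above.
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
      int sliderValue = 50;   // backing variable for the trackbar
      cv::namedWindow("preview", CV_WINDOW_AUTOSIZE);
      cv::createTrackbar("threshold", "preview", &sliderValue, 255);

      // ... use the window while the trackbar is needed ...

      // "Remove" the trackbar: destroy the window and create a fresh one without it
      cv::destroyWindow("preview");
      cv::namedWindow("preview", CV_WINDOW_AUTOSIZE);

      // ... continue using the trackbar-less window ...
      return 0;
    }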

2014-06-12 08:58:08 -0600 answered a question Image Stitching (Java API)

The problem here is not only in those diagonal matches (incorrect, obviously) but also in most of the matches, which are pretty bad. In order to see how bad the resulting homography is, check the answers here and here. Note also these things:

  • your scene is pretty low on texture. You have many flat objects (planes!) with overall the same colour (LCD screen -> black, wall -> white-blueish, table -> brownish). Try adding some additional objects (a flower maybe? :)) and see how this affects the homography estimation
  • you've added a bag in the second image in the overlapping area, which imho might lead to some confusion
  • try ORB or some other feature detector
  • try BruteForce matcher (I rarely use the FLANN since BF usually does the trick); try different settings for the cross-check matching (BFMatcher's parameter). Using cross-check means that feature A from image 1 is matched with feature A' from image 2 and vice versa, which often reduces the number of false positives greatly
  • try another filter for your good matches, such as the ratio test (note that applying multiple filters, for example cross-check + min/max distance, might reduce your matches so much that RANSAC fails to estimate a homography); there are also other match filters online - see the sketch after this list for the ratio-test idea
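
A minimal sketch of the ratio-test filtering from the last two bullets, assuming an OpenCV 2.4-style C++ API with ORB and a brute-force matcher (the image names and the 0.75 threshold are illustrative placeholders, not values from the original question):

    // Sketch: ORB features + BruteForce (Hamming) matcher + Lowe ratio test,
    // then a RANSAC homography on the surviving matches.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    int main()
    {
      cv::Mat img1 = cv::imread("left.jpg", 0);   // hypothetical input images (grayscale)
      cv::Mat img2 = cv::imread("right.jpg", 0);

      cv::ORB orb(1000);                          // detector and descriptor in one
      std::vector<cv::KeyPoint> kp1, kp2;
      cv::Mat desc1, desc2;
      orb(img1, cv::Mat(), kp1, desc1);
      orb(img2, cv::Mat(), kp2, desc2);

      // knnMatch with k=2 so the best match can be compared against the second best
      cv::BFMatcher matcher(cv::NORM_HAMMING);
      std::vector<std::vector<cv::DMatch> > knn;
      matcher.knnMatch(desc1, desc2, knn, 2);

      std::vector<cv::Point2f> pts1, pts2;
      for(size_t i = 0; i < knn.size(); ++i)
      {
        // Ratio test: keep a match only if it is clearly better than the runner-up
        if(knn[i].size() == 2 && knn[i][0].distance < 0.75f * knn[i][1].distance)
        {
          pts1.push_back(kp1[knn[i][0].queryIdx].pt);
          pts2.push_back(kp2[knn[i][0].trainIdx].pt);
        }
      }

      // RANSAC needs at least 4 correspondences to estimate a homography
      cv::Mat H;
      if(pts1.size() >= 4)
        H = cv::findHomography(pts1, pts2, CV_RANSAC, 3.0);
      return 0;
    }
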
2014-05-01 06:22:13 -0600 commented answer Creating a panorama from multiple images. How to reduce calculation time?

Read my comment above :)

2014-05-01 06:21:47 -0600 commented answer Creating a panorama from multiple images. How to reduce calculation time?

And here's an additional note I'd like to add: as for the stitcher itself, be very careful what type of images you want to use it on. And by type I mean how the images were shot. The stitcher as it is right now presumes that the camera that shot them has undergone pure rotation (rotation around its own axis, which is the case with wide-view panoramic pictures). If your images were shot using translational movement (as in my case with a UAV flying over an area and taking pictures, that is, parallel movement above the scene we shoot), the results will be really bad (talking from personal experience here).

2014-05-01 06:16:16 -0600 commented answer Creating a panorama from multiple images. How to reduce calculation time?

You can also look into the source of the stitcher and create your own version that uses threads or similar (OpenMP, Boost etc.). The stitching pipeline actually offers plenty of room for optimization, and right now I'm working on my thesis about stitching multiple aerial images together, georeferencing and orthorectifying them. For example, in the registration stage you have to 1) load the image into a cv::Mat container, 2) undistort it using the calibration matrix, 3) convert it to grayscale, 4) resize it to medium resolution and 5) find features. This routine is applied to each and every image (2 or N images for that matter). After that the matching process itself can be optimized for multiple images.
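
A rough per-image sketch of those five registration steps (the calibration data, the work scale and the feature type below are placeholders, not the values used in the thesis):

    // Sketch of the per-image registration routine described above:
    // 1) load, 2) undistort, 3) grayscale, 4) resize to medium resolution, 5) find features.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <string>
    #include <vector>

    void registerImage(const std::string& path,
                       const cv::Mat& cameraMatrix,   // placeholder calibration data
                       const cv::Mat& distCoeffs,
                       std::vector<cv::KeyPoint>& keypoints,
                       cv::Mat& descriptors)
    {
      cv::Mat raw = cv::imread(path);                              // 1) load into a cv::Mat

      cv::Mat undistorted;
      cv::undistort(raw, undistorted, cameraMatrix, distCoeffs);   // 2) undistort with the calibration matrix

      cv::Mat gray;
      cv::cvtColor(undistorted, gray, CV_BGR2GRAY);                // 3) convert to grayscale

      cv::Mat medium;
      cv::resize(gray, medium, cv::Size(), 0.5, 0.5);              // 4) resize to a medium work resolution

      cv::ORB orb(1000);                                           // 5) find features (ORB as an example)
      orb(medium, cv::Mat(), keypoints, descriptors);
    }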

2014-04-27 18:34:19 -0600 commented answer Problem with using Qt Functions '"createButton" of OpenCV2

Just a small update - this is STILL the case. I guess the OpenCV module for ROS is compiled in the same way as the OpenCV packages in the official Ubuntu repositories - only basic functionality enabled. For all who read this, I advise you to compile OpenCV yourself with the features you want enabled (OpenGL support, Qt support etc.), build a custom ROS package and install it. This of course forces you to do it over and over again, but I guess if you stick to only major version changes in OpenCV you won't be bothered that often. I'm quite surprised that the ROS team does that. I can understand why, for example, OpenCL or CUDA support is not enabled, but Qt support? Especially when we consider how much of the ROS UI is actually implemented in Qt.

2014-04-22 04:57:38 -0600 received badge  Editor (source)
2014-04-22 04:43:18 -0600 asked a question Image stitching - why does the pipeline include 2 times resizing?

Hi all!

I have been working on a project involving image stitching of aerial photography. The stitching pipeline given in the documentation of OpenCV I have actually encountered in many different books and papers, and frankly it makes perfect sense (http://docs.opencv.org/modules/stitching/doc/introduction.html). Except for one thing. In the two stages presented there (image acquisition being the first out of three, but there is no point including it here) - registration and composition - I encounter resizing, first to a medium and then to a low resolution (see the sketch at the end of this post for the settings I mean). Can someone explain to me why that is? Does the resizing in the registration stage have anything to do with the feature extraction? The only thing that makes sense to me in all this is that we obviously need the same resolution for all images in a stitching job. Another reason for the additional resizing, this time in the composition stage, is the computation of masks, which are then applied to the high-resolution images that we give as input at the very beginning.

Thanks a lot for your help!

PS: By resolution, the number of pixels is obviously meant (since resizing is used in the stitching example), which is somewhat imprecise, since resolution by definition also depends on the size of each pixel and not only on their number, as it defines the amount of detail in an image.
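
For context, I believe these are the knobs controlling the resizing steps in the high-level stitcher (a minimal sketch; the values are the defaults as I read them from the cv::Stitcher documentation, so please correct me if they are off):

    // Sketch: the registration / seam-estimation / compositing resolutions in cv::Stitcher.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/stitching/stitcher.hpp>
    #include <vector>

    int main()
    {
      std::vector<cv::Mat> images;
      images.push_back(cv::imread("aerial_01.jpg"));   // hypothetical input images
      images.push_back(cv::imread("aerial_02.jpg"));

      cv::Stitcher stitcher = cv::Stitcher::createDefault(false);

      // Registration stage: features are found on a medium-resolution copy (value in megapixels)
      stitcher.setRegistrationResol(0.6);
      // Seam estimation: seams/masks are computed on an even smaller copy
      stitcher.setSeamEstimationResol(0.1);
      // Composition stage: ORIG_RESOL means the final panorama is blended at full resolution
      stitcher.setCompositingResol(cv::Stitcher::ORIG_RESOL);

      cv::Mat pano;
      cv::Stitcher::Status status = stitcher.stitch(images, pano);
      if(status == cv::Stitcher::OK)
        cv::imwrite("pano.jpg", pano);
      return 0;
    }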

2014-04-16 11:18:30 -0600 commented answer opencv2/core/ultility.hpp not found

I have the same problem. The only utility.hpp is in /opencv2/gpu/device. I'm trying to run the example code for camera calibration, which uses it (https://github.com/Itseez/opencv/blob/master/samples/cpp/calibration.cpp). I'm using OpenCV 2.4.8 compiled from source.