2020-11-06 00:25:46 -0600 | received badge | ● Notable Question (source) |
2018-03-26 11:34:06 -0600 | received badge | ● Popular Question (source) |
2016-08-09 08:32:55 -0600 | received badge | ● Necromancer (source) |
2016-06-02 01:13:25 -0600 | received badge | ● Scholar (source) |
2016-06-02 01:13:20 -0600 | commented answer | OpenCV 3.1 build from source - core test fails for 3 tests Thanks for the info. Yeah, I guess I'll have to stick to C++ with Caffe. Will also give the Matlab bindings a go. Hope it works. |
2016-06-01 16:29:48 -0600 | asked a question | OpenCV 3.1 build from source - core test fails for 3 tests Hi! I've just finished compiling OpenCV 3.1 on my 64bit Debian Jessie with a lot of features enabled (Qt, OpenGL, Matlab (I have R2014), Python 2 and 3, Tesseract, Ceres etc.). Now I've started running the tests. Sadly the first test fails (I haven't continued with the rest since I'd like to resolve the issues one by one). The failed tests in
Just a side-note which concerns |
2015-12-03 16:43:59 -0600 | received badge | ● Supporter (source) |
2015-12-03 16:43:58 -0600 | received badge | ● Supporter (source) |
2015-12-03 16:43:34 -0600 | commented answer | Bindings to access openCV from MATLAB do not generate This is the answer! It works on Linux too! Thanks a lot and +1 from me. |
2015-11-29 22:50:31 -0600 | asked a question | OpenCV 3.0.0 - Cmake fails to detect Python 3 Old problem, yet I got it just now. On my Debian Jessie the build process went as smoothly as one can hope for. On Ubuntu 14.04 - epic fail. Even after setting the paths manually I still don't get Python 3 support (I have Python 3.4m). It might be due to this bug, which was supposedly fixed. I have cloned the latest stable from the git repository so I should have gotten the fix that was posted 7 months ago. However the exact same behaviour described by the person who reported the bug happens to me too (except that in my case
Upon executing The funny thing is that inside the Any suggestions? I would really want to work with Python 3 instead of Python 2.7. |
2014-11-22 04:23:10 -0600 | asked a question | Linux and multiple USB webcams cause reduced frame resolution and v4l2 error I have two Logitech Pro 9000 webcams. I have discovered a strange behaviour in cv::VideoCapture::set() when setting the frame size (width and height) for my captures resulting in the infamous error for my second camera. In order to fix it I have to reduce my frame size almost 2 times from the initial one. Now here is the interesting thing:
This example is working without any issues.
|
2014-11-13 16:04:53 -0600 | asked a question | Multiple cameras and using set() for adjusting frame's width and height result in libv4l2 error Hello! I have two Logitech Pro 9000 webcams connected to my 64bit Debian notebook. I want to do some stereo-vision and so far it is working great (stereoCalibrate etc.). As you know, if the data output is too great (resolution and framerate) you get (especially when using USB cameras) the infamous error: I have discovered that for some reason that and don't seem to be working. If I run 2 VideoCapture-s simultaneously I see that the default resolution for these two cameras is 640x480, which I see when I call the cv::VideoCapture::get() equivalent of those two methods I mentioned above. I have to limit the framerate to 15 in order to avoid that libv4l2-related error. Below I have two versions of a very simple program where I set my framerate to 15 and in the second version I also set the exact same resolution that I get when I call cv::VideoCapture::get(CV_CAP_PROP_FRAME_WIDTH/HEIGHT) in the first version:
|
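A side note on why lowering the framerate (or the frame size) makes the libv4l2 error go away: the raw data rate of both streams has to fit on the bus. A minimal back-of-the-envelope sketch, assuming an uncompressed YUYV stream at 2 bytes per pixel (my assumption - these cameras may also offer MJPEG, which changes the numbers):

```python
def required_bandwidth_mb(width, height, fps, bytes_per_pixel=2):
    """Raw data rate in MB/s for an uncompressed stream
    (2 bytes/pixel corresponds to the common YUYV format)."""
    return width * height * bytes_per_pixel * fps / 1e6

# Two cameras at 640x480 and 30 fps:
two_cams_30 = 2 * required_bandwidth_mb(640, 480, 30)  # 36.864 MB/s
# Two cameras at 640x480 and 15 fps:
two_cams_15 = 2 * required_bandwidth_mb(640, 480, 15)  # 18.432 MB/s
```

USB 2.0 tops out at 480 Mbit/s (roughly 60 MB/s in theory, noticeably less in practice for isochronous video transfers), so two uncompressed 640x480 streams at 30 fps sit near the practical limit, while halving the framerate - or the frame area - brings the total comfortably below it.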
2014-11-13 14:22:40 -0600 | commented answer | opencv2/core/ultility.hpp not found You do realize that sometimes an upgrade is not possible/desirable, right? |
2014-11-13 14:20:47 -0600 | commented answer | opencv2/core/ultility.hpp not found You do realize that sometimes an upgrade of the system is not desired, right? :) |
2014-10-22 15:49:02 -0600 | received badge | ● Student (source) |
2014-08-26 08:56:48 -0600 | received badge | ● Necromancer (source) |
2014-08-26 08:47:32 -0600 | answered a question | jar file not found in build/lib at opencv-2.4.9 I only know where the opencv-249.jar is located if you have built from source. It is inside BUILD_PATH/bin, where BUILD_PATH is the folder where you have compiled your binaries. For some unknown reason, when invoking the make install command this file is simply left there hanging and not copied anywhere in the usual library folders. I have no idea why that is, but a fact is a fact. I tried to find the JAR file anywhere on my system after the installation was complete using both find and locate but without success. It was only found inside the BUILD_PATH folder.
2014-08-01 05:26:41 -0600 | received badge | ● Necromancer (source) |
2014-08-01 05:24:58 -0600 | commented answer | Creating a panorama from multiple images. How to reduce calculation time? I've read it along with a lot of others but thanks for the headsup. :) |
2014-08-01 05:22:34 -0600 | answered a question | problem install opencv on linux mandriva Although I found this way too late I will give my two cents on this. Basically there are two types of packages no matter which Linux distro you are using - the normal ones that are required when you run an application and the development (<package-name>-dev) ones that are required when you build an application. In your case you are missing all the dev-packages for the corresponding libraries. The dev-packages also provide headers, which are needed upon invoking cmake or whatever build tool you are using.
2014-06-13 10:32:26 -0600 | commented answer | how to remove trackbar from opencv image window? @David Jhones, what do you mean by "visible" property? I've looked in the HighGUI reference and the only things that are associated with a trackbar are cv::createTrackbar, cv::getTrackbarPos and cv::setTrackbarPos. As far as I know you cannot change anything but the state (as in slider value, button value) of a GUI element in HighGUI. Please explain. Creating a named window inside the infinite loop and each time adding a new trackbar to it is a possible solution but it's quite ugly.
2014-06-12 08:58:08 -0600 | answered a question | Image Stitching (Java API) The problem here is not only in those diagonal matches (obviously incorrect) but also in most of the other matches, which are pretty bad. In order to see how bad the resulting homography is, check the answers here and here. Note also these things:
|
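One common way to thin out bad matches before estimating the homography is Lowe's ratio test: a match is kept only if its best distance is clearly smaller than the second-best one. With OpenCV you would get the two distances per keypoint from DescriptorMatcher.knnMatch with k=2; the sketch below shows just the filtering logic on plain distance pairs (the numbers are made up for illustration):

```python
def ratio_test(pairs, ratio=0.75):
    """pairs: (best_distance, second_best_distance) per keypoint.
    Returns the indices of matches that pass Lowe's ratio test."""
    return [i for i, (d1, d2) in enumerate(pairs) if d1 < ratio * d2]

# The ambiguous match (30 vs. 32) is rejected; distinctive ones survive.
good = ratio_test([(10, 50), (30, 32), (5, 100)])  # -> [0, 2]
```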
2014-05-01 06:22:13 -0600 | commented answer | Creating a panorama from multiple images. How to reduce calculation time? Read my comment above :) |
2014-05-01 06:21:47 -0600 | commented answer | Creating a panorama from multiple images. How to reduce calculation time? And here's an additional note I'd like to add: as for the stitcher itself, be very careful what type of images you want to use it on. By type I mean how the images were shot. The stitcher as it is right now presumes that the camera that shot them has undergone pure rotation (rotation around its own axis, which is the case with wide-view panoramic pictures). If your images were shot using translational movement (as is my case with a UAV flying over an area and taking pictures, that is, a parallel movement above the scene we shoot), the results will be really bad (talking from personal experience here).
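The standard two-view geometry behind this comment (not stated in the original answer, but textbook material): two images taken by a camera with intrinsics K that only rotates by R about its optical center are related by a single homography

```latex
H = K \, R \, K^{-1}
```

which holds for every scene point regardless of its depth. Once the camera translates by t, a single homography H = K (R - t n^T / d) K^{-1} only maps the points of one plane (with normal n at distance d); everything off that plane exhibits parallax, which is why a translating camera over a non-flat scene cannot be stitched well by the rotation-based pipeline.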
2014-05-01 06:16:16 -0600 | commented answer | Creating a panorama from multiple images. How to reduce calculation time? You can also look into the source of the stitcher and create your own that is using threads or similar (OpenMP, Boost etc.). The stitching pipeline actually offers plenty of space for optimization and right now I'm actually working on my thesis for stitching multiple aerial images together, georeferencing and orthorectifying them. For example in the registration stage you have 1)load image in cv::Mat container, 2)undistort using calibration matrix, 3)convert to grayscale, 4)resize to medium resolution and 5)find features. This routine is to be applied for each and every image (2 or N images for that matter). After that the matching process itself can be optimized for multiple images. |
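Since steps 1-5 are independent per image, they parallelize trivially. A minimal sketch of that idea with Python's standard thread pool - register_image is a hypothetical stand-in for the load/undistort/grayscale/resize/detect-features chain (with OpenCV it would call cv2.imread, cv2.undistort, cv2.cvtColor, cv2.resize and a feature detector):

```python
from concurrent.futures import ThreadPoolExecutor

def register_image(path):
    # Placeholder for the per-image registration steps listed above:
    # load -> undistort -> grayscale -> resize -> detect features.
    return f"features({path})"

paths = ["img0.png", "img1.png", "img2.png"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order, so features[i] belongs to paths[i]
    features = list(pool.map(register_image, paths))
```

Many OpenCV routines release the GIL internally, so threads usually help here; otherwise ProcessPoolExecutor is the drop-in alternative.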
2014-04-27 18:34:19 -0600 | commented answer | Problem with using Qt Functions '"createButton" of OpenCV2 Just a small update - this is STILL the case. I guess the OpenCV module for ROS is compiled in the same way as the OpenCV packages in the official Ubuntu repositories - only basic functionality enabled. For all who read this I advise you to compile OpenCV yourself with the features you want enabled (OpenGL support, Qt support etc.), build a custom ROS package and install it. This of course forces you to do this over and over again, but I guess if you stick to only major version changes in OpenCV you won't be bothered that often. I'm quite surprised that the ROS team does that. I can understand why, for example, OpenCL or CUDA support is not enabled, but Qt support? Especially when we consider how much of the ROS UI is actually implemented in Qt.
2014-04-22 04:57:38 -0600 | received badge | ● Editor (source) |
2014-04-22 04:43:18 -0600 | asked a question | Image stitching - why does the pipeline include 2 times resizing? Hi all! I have been working on a project involving image stitching of aerial photography. The stitching pipeline given in the documentation of OpenCV is one I have actually encountered in many different books and papers, and frankly it makes perfect sense (http://docs.opencv.org/modules/stitching/doc/introduction.html). Except for one thing. In the two stages presented there (image acquisition being the first out of three, but no point including it there) - registration and composition - I encounter resizing first to a medium and then to a low resolution. Can someone explain to me why that is? Does the resizing in the registration stage have anything to do with the feature extraction? The only thing that makes sense to me in all this is that we obviously need the same resolution for all images being stitched. Another reason for the additional resizing, this time in the composition stage, is the computation of masks, which are then applied to the high-resolution images that we give as input at the very beginning. Thanks a lot for your help! PS: With resolution what is obviously meant is the number of pixels (since resizing is used in the stitching example), which is somewhat controversial since resolution per definition also depends on the size of each pixel and not only on their number, as it defines the amount of detail in an image.
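For what it's worth, OpenCV's stitching_detailed.cpp sample makes the two resizes explicit: features are found at a "work" resolution of about 0.6 megapixels and seams/masks are estimated at about 0.1 megapixels, while blending happens at full ("compose") resolution. The scale factors are derived from a target megapixel count, roughly like this sketch (the 0.6/0.1 defaults come from the sample; the 4000x3000 input size is made up):

```python
from math import sqrt

def scale_for_megapix(width, height, megapix):
    """Scale factor that shrinks an image to roughly `megapix`
    megapixels; never upscales (the factor is capped at 1.0)."""
    return min(1.0, sqrt(megapix * 1e6 / (width * height)))

# A hypothetical 12-megapixel aerial image:
work_scale = scale_for_megapix(4000, 3000, 0.6)  # features at ~0.22x
seam_scale = scale_for_megapix(4000, 3000, 0.1)  # seam masks at ~0.09x
```

So the registration resize is indeed about feature extraction (features are stable and much cheaper to detect at reduced resolution), and the composition resize exists because seam and mask estimation is expensive and does not need full-resolution data; the masks are simply scaled back up before blending.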
2014-04-16 11:18:30 -0600 | commented answer | opencv2/core/ultility.hpp not found I have the same problem. The only utility.hpp is in /opencv2/gpu/device. I'm trying to run the example code for camera calibration, which uses it (https://github.com/Itseez/opencv/blob/master/samples/cpp/calibration.cpp). Using a compiled from source 2.4.8. |