Cerin's profile - activity

2019-10-14 08:23:05 -0600 received badge  Famous Question (source)
2019-04-12 08:50:40 -0600 received badge  Notable Question (source)
2018-12-24 06:28:32 -0600 received badge  Popular Question (source)
2018-03-10 20:25:35 -0600 asked a question Unable to set video capture size

I'm attempting to set my laptop webcam video capture size to 800 x 600 using: import c…
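The preview cuts off mid-snippet; presumably it continued along the usual lines. A minimal sketch of that approach (a reconstruction, not the original code; the device index is an assumption):

import cv2

cap = cv2.VideoCapture(0)  # device index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

# Drivers may silently ignore unsupported sizes, so read the values
# back to see what the camera actually accepted.
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))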

2018-01-05 02:37:46 -0600 received badge  Popular Question (source)
2016-09-22 14:26:37 -0600 asked a question Unable to build OpenCV due to missing ninja

I'm trying to build OpenCV3 as a dependency of the diagnostics package for ROS Kinetic. My build steps are:

rosinstall_generator diagnostics --rosdistro kinetic --deps | wstool merge -t src -
wstool update -t src -j2 --delete-changed-uris
rosdep install --from-paths src --ignore-src --rosdistro kinetic -y -r --os=debian:jessie
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic -j1

The last command builds several packages, but fails on OpenCV3 with the error:

==> Processing plain cmake package: 'opencv3'
==> Building with env: '/opt/ros/kinetic/env.sh'
Makefile exists, skipping explicit cmake invocation...
==> make cmake_check_build_system in '/home/pi/ros_catkin_ws/build_isolated/opencv3/install'
==> ninja -j1 in '/home/pi/ros_catkin_ws/build_isolated/opencv3/install'
ninja: error: loading 'build.ninja': No such file or directory
<== Failed to process package 'opencv3': 
  Command '['/opt/ros/kinetic/env.sh', 'ninja', '-j1']' returned non-zero exit status 1

Reproduce this error by running:
==> cd /home/pi/ros_catkin_ws/build_isolated/opencv3 && /opt/ros/kinetic/env.sh ninja -j1

What's causing this error? Why doesn't OpenCV3 include a Ninja build file? I've asked about this on the corresponding ROS answers site, but they have no idea what's causing this.

2015-10-23 13:10:49 -0600 received badge  Famous Question (source)
2015-10-23 13:10:49 -0600 received badge  Popular Question (source)
2015-10-23 13:10:49 -0600 received badge  Notable Question (source)
2015-10-08 03:27:11 -0600 received badge  Teacher (source)
2015-04-17 12:36:18 -0600 asked a question What's the state of support for Creative Senz3d Camera?

I was surprised to find OpenCV has documented support for the Creative Senz3d camera. I was then frustrated to find that this support requires installing Intel's proprietary "Intel Perceptual Computing SDK", and further frustrated to find that this SDK has been deprecated and removed from Intel's website. Intel now suggests everyone abandon both the old SDK and the camera they purchased, buy a new camera, and use the new "Intel RealSense SDK".

At this point, how would you access the Creative Senz3d with OpenCV? Does OpenCV work with the Creative Senz3d using the new RealSense SDK?
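For reference, the documented route goes through OpenCV's Intel Perceptual Computing capture backend. A minimal sketch, assuming a build with that backend compiled in and the deprecated SDK still installed:

import cv2

# Requires OpenCV built against the Intel Perceptual Computing SDK.
cap = cv2.VideoCapture(cv2.CAP_INTELPERC)
if cap.grab():
    # Retrieve the depth stream; an RGB stream is also exposed.
    ok, depth = cap.retrieve(None, cv2.CAP_INTELPERC_DEPTH_MAP)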

2015-04-15 22:59:21 -0600 asked a question How to calibrate cameras for StereoBM depth map

Is there any documentation or an example showing how to use and calibrate OpenCV's StereoBM with two arbitrary cameras, i.e. with an arbitrary baseline B and focal length f? There's a simple example in the tutorials that shows some minimal code, but it doesn't explain how to configure it for your own B and f; it assumes a pre-calibrated setup that likely doesn't match your own.

I tried running the example code on two images taken from some webcams placed 2 inches apart, and it just returns a completely gray screen, which I take to mean the default StereoBM assumes a completely different calibration.
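For context, the tutorial code in question is roughly this (a sketch; filenames and parameters are placeholders). An all-gray disparity map usually means the inputs weren't rectified or the disparity range doesn't match the rig:

import cv2

# Left/right pair as 8-bit grayscale, which StereoBM requires.
# Filenames are placeholders.
imgL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(imgL, imgR)  # fixed-point, scaled by 16

# Depth then follows from Z = f * B / d once f (in pixels) and B are known.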

2015-04-15 21:38:08 -0600 received badge  Critic (source)
2015-04-15 12:54:38 -0600 received badge  Scholar (source)
2015-03-26 02:56:08 -0600 received badge  Necromancer (source)
2015-03-21 11:44:32 -0600 asked a question Traditional Stereo Vision vs Xtion/Kinect

What are the pros and cons of traditional stereo vision using two commodity cameras compared to the method used by 3d sensors like the ASUS Xtion and Microsoft Kinect?

I know the Xtion/Kinect have a blind spot within a few feet of the sensor, but the device is entirely self-contained and provides immediately useful data. Traditional stereo vision, by contrast, has higher computational overhead and usually has to be assembled from various parts, but those parts are cheaper overall (a couple of $5 webcams and a $35 RPi vs. a $160-$270 Xtion/Kinect).

Are there any other benefits or caveats to each method? Which is more accurate?
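To make "immediately useful data" concrete: with an OpenNI-enabled OpenCV build (an assumption about the build configuration), an Xtion/Kinect hands back a finished depth map, with no stereo matching computed on the host:

import cv2

# Requires OpenCV built with OpenNI support.
cap = cv2.VideoCapture(cv2.CAP_OPENNI)
if cap.grab():
    ok, depth = cap.retrieve(None, cv2.CAP_OPENNI_DEPTH_MAP)  # depth in mm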

2015-03-18 03:53:29 -0600 received badge  Student (source)
2015-03-17 15:57:14 -0600 asked a question Does camera rotation help with a stereo depth map?

I'm thinking of creating a simple dual-webcam setup for testing OpenCV's stereo depth map feature. All the examples and descriptions I've seen set up the cameras in a fixed position, pointing them straight ahead with their lines of sight perfectly parallel to each other.

Are there any cases where the cameras are mounted on separate movable axes so that they can rotate slightly side to side, in order to "focus" them on a specific point closer or farther away? I'm thinking of a process similar to how the human eye rotates side to side in order to focus on different depth ranges.

Can OpenCV make use of this ability, or does it assume fixed cameras?
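For what it's worth, OpenCV's stereo model does accommodate a rotation between the two cameras: stereoCalibrate estimates the inter-camera rotation R and translation T, and stereoRectify warps both views back to the parallel-axis geometry the block matchers expect. A sketch with placeholder values:

import cv2
import numpy as np

# All values below are illustrative placeholders, not a real calibration.
size = (640, 480)
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

R, _ = cv2.Rodrigues(np.array([0.0, 0.02, 0.0]))  # slight verging rotation
T = np.array([[-0.05], [0.0], [0.0]])             # 5 cm baseline

# Rectification maps both views back to parallel-axis geometry.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K, dist, K, dist, size, R, T)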

2014-12-05 18:04:55 -0600 asked a question How do you use OpenCV TLD from Python?

According to the release docs for OpenCV 3.0.0, it includes an implementation of the Tracking-Learning-Detection (TLD) algorithm. There are even some very basic docs for the C++ code.

I downloaded and compiled the 3.0.0-beta, including the Python wrapper, and the build seems to have succeeded. Although I can run most of the Python samples (some appear to be broken and/or not updated for 3.0.0), I can't find any way to access any TLD functionality through the Python wrapper. I can't even find references to it in the code.

Is it actually included in the 3.0.0 release, and if so, how do I access it?
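In later 3.x releases with the opencv_contrib tracking module built in, the Python API ended up looking roughly like this; whether the 3.0.0-beta bindings expose it is exactly the open question here, so treat this as an assumption about later versions rather than a confirmed answer:

import cv2

# Assumes an opencv_contrib build; the factory name varied across 3.x
# (earlier releases used cv2.Tracker_create("TLD") instead).
tracker = cv2.TrackerTLD_create()

frame = cv2.imread("frame0.png")       # placeholder first frame
bbox = (50, 50, 100, 100)              # (x, y, w, h) of the target
tracker.init(frame, bbox)

next_frame = cv2.imread("frame1.png")  # placeholder subsequent frame
ok, bbox = tracker.update(next_frame)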

2014-12-05 16:37:45 -0600 answered a question TLD Tracker (aka Predator)

Although I haven't used it, a TLD implementation is included in the upcoming 3.0.0 release, which is currently in beta.

2014-11-25 15:45:50 -0600 received badge  Supporter (source)