
jwatte's profile - activity

2017-11-28 01:25:34 -0600 commented answer Did Opencv blending function well manage full 360 panorama ?

Another option might be to stitch the left of the panorama with the rightmost image again, to create a >360-degree panorama.

2017-11-28 01:25:18 -0600 commented question cv:stitching failing on very similar images?

You may want to also normalize exposure. The two images seem differently exposed, especially if you look in the inside o

2017-11-28 01:24:57 -0600 commented question Converting Hugin pto lens parameters to OpenCV compatible ones

I have the same question! For me, I will actually use a few more OpenCV functions (clustering, etc) after rectification,

2014-05-08 10:48:53 -0600 commented answer Stereo Vision Related Queries

Note that real-time geometry reconstruction seems to be "solved" in the UrbanScape DARPA project: http://cs.unc.edu/Research/urbanscape/ http://cvg-pub.inf.ethz.ch/WebBIB/papers/1900/002_fulltext.pdf and more. Unfortunately, I've only found the papers, not the code.

2014-05-07 16:25:03 -0600 commented answer Error reading video using VideoCapture while applying MOG

The comment didn't explain why this was needed, and the requester didn't take the advice the first time, as can be seen in the code where only the first backslash is changed. Explaining "why" is the exact difference between a "comment" and an "answer."

2014-05-07 04:38:22 -0600 received badge Necromancer (source)
2014-05-07 04:37:02 -0600 received badge Nice Answer (source)
2014-05-07 04:34:35 -0600 received badge Necromancer (source)
2014-05-07 03:12:49 -0600 received badge Teacher (source)
2014-05-06 22:36:31 -0600 answered a question Problem with imshow() to get live disparity map of 2 videos

First of all, you are passing the arguments in a different order in the two calls to imshow().

Second, I found that my disparity map looked faint until I re-mapped it before showing. Here's the code I'm using:

# Scale the SGBM output to the 0..255 range so imshow() shows it with visible contrast
disp = cv2.normalize(sgbm.compute(ri_l, ri_r), None, alpha=0, beta=255,
    norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
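
A minimal sketch of displaying the result (the window names are placeholders); note that imshow() takes the window name first and the image second, so both calls should follow that order:

cv2.imshow("left image", ri_l)
cv2.imshow("disparity", disp)
cv2.waitKey(1)  # give HighGUI a chance to actually draw the windows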
2014-05-06 22:34:22 -0600 answered a question disparity map values

Here is code I use to re-normalize the disparity map for display:

# Scale the SGBM output to the 0..255 range so it displays with visible contrast
disp = cv2.normalize(sgbm.compute(ri_l, ri_r), None, alpha=0, beta=255,
    norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
2014-05-06 15:08:27 -0600 answered a question Error reading video using VideoCapture while applying MOG

Single backslashes in a string literal are treated by the compiler as the start of character escape sequences (like "\n"), so they never make it into the actual string. You need to make EVERY backslash a double backslash so that the string the compiler builds after parsing is a valid file path.
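
A minimal sketch of the fix (the path is just a placeholder; Python string literals treat backslashes the same way a C++ compiler does):

import cv2

# Every backslash is doubled, so the parsed string holds single, real backslashes: C:\videos\input.avi
cap = cv2.VideoCapture("C:\\videos\\input.avi")
# In Python, a raw string is an equivalent option: r"C:\videos\input.avi"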

2014-05-06 15:03:57 -0600 answered a question Distance measurement using canny edge detector.

If you need micrometer precision, what is the resolution of your camera? Even with subpixel sampling to estimate "true" edge locations, a 1000 pixel wide picture could only cover a few millimeters for micrometer resolution to be possible. Is that what you're using? What kind of "rails" are these? Railroad rails in a forest? To get micrometer precision for standard 1.5 meter rail tracks, wouldn't you need a picture that's several hundred thousand pixels wide? (assuming subpixel fitting, else even more.)
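
A back-of-the-envelope check (all numbers are assumptions for illustration, not your setup):

field_of_view_m = 1.5        # width the picture has to cover (a full track)
target_resolution_m = 1e-6   # micrometer precision
subpixel_factor = 5          # assume edge locations can be fit to ~0.2 pixel
pixels_needed = field_of_view_m / (target_resolution_m * subpixel_factor)
print(pixels_needed)         # 300000.0 -- several hundred thousand pixels across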

2014-05-06 14:59:55 -0600 commented question videostab sample problem

How long have you waited? What if you start with a shorter video?

2014-05-06 14:57:16 -0600 commented answer FlannBasedMatcher returning different results

If you use srand() (or whatever the initialization function is for the random number generator in use), you can make it deterministic.

2014-05-06 14:53:29 -0600 received badge  Editor (source)
2014-05-06 14:51:25 -0600 answered a question Stereo calibration: problem with projection matrices and SGBM

"The resulting disparity map has almost no definition"

That confused me, too. The disparity map needs to be converted/scaled to 8-bit grayscale before it will have suitable contrast.

Separately, a reprojection error of 4.0 is fairly big. With about 50 input calibration images, I can get two non-synchronized webcams down to a re-projection error of 0.7 or better. When using checkerboard detection, it's important to reject or re-order point sets from images where the detected point ordering differs spatially between the two views. The checkerboard detector also sometimes gets confused and swaps rows of the board in some images; manually checking each pair and only using "good" pairs helps.
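
A minimal sketch of that filtering step (the board size, image names, and manual review are assumptions, not your setup):

import cv2

board = (9, 6)  # assumed count of inner corners
# img_l, img_r: one calibration image from each camera, loaded elsewhere
ok_l, corners_l = cv2.findChessboardCorners(img_l, board)
ok_r, corners_r = cv2.findChessboardCorners(img_r, board)
if ok_l and ok_r:
    # Draw the detected corners so the ordering can be checked by eye
    vis = cv2.drawChessboardCorners(img_l.copy(), board, corners_l, ok_l)
    cv2.imshow("check corner ordering", vis)  # discard the pair if the rows look swapped
    cv2.waitKey(0)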

2014-05-06 14:44:06 -0600 asked a question Is there a way to create a depth map from stereo using a method other than disparity maps?

When trying to use disparity maps on calibrated stereo images, performance for general, uncontrolled scenes is poor. In general, I think the problem may relate to the fact that disparity maps have no global knowledge, and just try to find "matching clusters of pixels" along each epiline.

If I know that "up" is vertically up, and I know there is supposed to be a ground plane 25 centimeters below the camera center, it should be possible to do much better. For example, start by projecting the ground plane pixels out of the image, to find matches, and map all of those to a ground plane. Then look at the parts that don't match, and start matching them up using spatial heuristic reasoning based on where the "gaps" are in the ground plane, or patch-based disparity instead of epiline based disparity.
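
Here is a rough sketch of the kind of thing I mean, using the plane-induced homography H = K2 * (R + t*n^T/d) * K1^-1 (K1, K2, R, t would come from stereo calibration; color input images, the plane parameters, and the threshold are assumptions for illustration):

import cv2
import numpy as np

def ground_plane_mask(img_l, img_r, K1, K2, R, t, n=(0.0, 1.0, 0.0), d=0.25, thresh=10):
    # Ground plane n.X = d in the left-camera frame; +Y points down in OpenCV camera
    # coordinates, so a plane 25 cm below the camera center is n=(0,1,0), d=0.25.
    n = np.asarray(n, dtype=float)
    # Homography induced by the plane, mapping left-image points to right-image points
    H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
    # Sample the right image at H*x for every left pixel x (WARP_INVERSE_MAP applies H directly)
    warped = cv2.warpPerspective(img_r, H, (img_l.shape[1], img_l.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(cv2.cvtColor(img_l, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY))
    return (diff < thresh).astype(np.uint8)  # 1 where the ground-plane hypothesis fits

Whatever is left over (mask == 0) would then be the part to match with spatial heuristics or patch-based disparity.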

Is there anything in OpenCV that could help me implement an algorithm like this? For example, is there a function that could take as input a pair of images, and an assumed ground plane (in camera-relative coordinate space, or some other space) and figure out which parts of the image correspond to that ground plane, and which ones don't?