
ricor29's profile - activity

2016-09-18 09:12:26 -0600 commented question Background model technique with disparity map

In general I find that RGB or greyscale alone isn't robust enough for my purposes, which is one reason why I moved to a stereo camera. The 3-D geometry makes it possible to threshold out certain planes that aren't of interest to me. I had hoped that there would then be some standard way to use a background model with 3-D data.
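
As a rough illustration of that kind of depth thresholding (assuming a metric depth map, and with made-up limits):

    import numpy as np

    # hypothetical depth map in metres; in practice this could come from
    # cv2.reprojectImageTo3D on the disparity, taking the Z channel
    depth_m = np.random.uniform(0.0, 5.0, (480, 640)).astype(np.float32)

    # keep only the depth band of interest; the limits are made-up examples
    z_near, z_far = 0.5, 3.0
    mask = (depth_m > z_near) & (depth_m < z_far)
    depth_roi = np.where(mask, depth_m, 0.0)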

2016-09-09 10:53:15 -0600 asked a question Background model technique with disparity map

If I have generated a disparity map or depth map using StereoBM, what is the best way to look for change detection in the scene? I've tried using MOG2 and other background models, but in general there are just too many fluctuations (speckle noise) in the disparity map to make this viable. Is there a better way in OpenCV to look for change detection from stereo data?
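
For context, a sketch of the kind of pipeline I mean, with some speckle clean-up bolted on before the background model (all parameter values are made up):

    import cv2

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

    def foreground_from_disparity(left, right):
        disp = stereo.compute(left, right)    # 16SC1, disparity scaled by 16
        # suppress small speckle blobs; blob size and max diff are guesses
        cv2.filterSpeckles(disp, 0, 400, 32)
        disp8 = cv2.convertScaleAbs(disp, alpha=255.0 / (64 * 16))
        disp8 = cv2.medianBlur(disp8, 5)      # smooth what survives
        return mog2.apply(disp8)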

2015-10-18 11:15:39 -0600 commented question initUndistortRectifyMap lines 103 and 137: what is going on?

I think I was just being a bit dense and may have an answer after staring at this all afternoon.

I think the documentation is just a more specific case of the implementation, in that it doesn't explicitly handle skew in the intrinsic matrix. However, the first two equations are just the inverse of the intrinsic matrix (with skew set to zero), written out explicitly. The third line then handles the rotation. My matrix maths is a bit rusty, but I think:

[u v 1]^T = A R [x y 1]^T (camera to pixel coords)

inv(AR) [u v 1]^T = [x y 1]^T (pixel to camera)

inv(R)inv(A) [u v 1]^T = [x y 1]^T

I convinced myself with Matlab and some dummy numbers for fx, fy, cx and cy that inv(A) gives exactly equations 1 and 2. The third equation ... (more)
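
The same sanity check is easy to reproduce in NumPy with dummy intrinsics:

    import numpy as np

    fx, fy, cx, cy = 800.0, 820.0, 320.0, 240.0   # dummy intrinsics, skew = 0
    A = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

    u, v = 400.0, 300.0
    x, y, w = np.linalg.inv(A) @ np.array([u, v, 1.0])

    # inv(A) reproduces the first two documented equations exactly
    assert np.isclose(x, (u - cx) / fx)
    assert np.isclose(y, (v - cy) / fy)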

2015-10-18 09:55:58 -0600 received badge  Student (source)
2015-10-18 06:02:06 -0600 received badge  Editor (source)
2015-10-18 06:01:19 -0600 asked a question initUndistortRectifyMap lines 103 and 137: what is going on?

Hi,

I'm having trouble understanding a couple of lines in the original source code of the function initUndistortRectifyMap(..). This part doesn't seem to be mentioned in the corresponding docs.

The code is on lines 103 and 137 of the following undistort.cpp function:

link text

It appears to take the product of the camera intrinsic matrix A and the rotation matrix, and then invert the result (all on line 103). That inverse is then used on line 137, where individual entries are read back out. The results I get when using this code are excellent, but I just can't understand it or tie it into the documentation at:

link text

In particular, I don't see how the first three lines of equations in the doc correspond to the inverse of the camera matrix A and the rotation matrix.

(image: the equations from the initUndistortRectifyMap documentation)

Can some clever person put me right or point me at a doc that just explains that bit? Thanks
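
For later readers, my reading of those two lines as a NumPy sketch; the variable names are mine, not OpenCV's:

    import numpy as np

    A = np.array([[800.0, 0.0, 320.0],   # made-up new camera matrix
                  [0.0, 820.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)                        # rectification rotation

    # line 103: invert the combined transform, iR = (A R)^-1
    iR = np.linalg.inv(A @ R)

    # line 137: map a destination pixel (u, v) through the rows of iR
    u, v = 400.0, 300.0
    X = iR[0, 0] * u + iR[0, 1] * v + iR[0, 2]
    Y = iR[1, 0] * u + iR[1, 1] * v + iR[1, 2]
    W = iR[2, 0] * u + iR[2, 1] * v + iR[2, 2]
    x, y = X / W, Y / W   # normalised coords, fed into the distortion model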

2015-05-13 12:55:43 -0600 commented question Where is Python source code/bindings - want to look into StereoBM?

Thanks Berak, that answered it for me.

2015-05-12 02:45:21 -0600 asked a question Where is Python source code/bindings - want to look into StereoBM?

I'm really struggling to find where the Python source code of OpenCV is. For C++ I can find it in the modules directory, but I'm stumped when it comes to Python. I've got the .pyd file, which I think is like a DLL and enables me to use the code, but that won't let me look at the code or the bindings.

In particular, I want to understand how to set values in cv2.StereoBM such as the minimum disparity and the uniqueness ratio. Currently I'm only able to use the presets.
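
For anyone finding this later: the Python bindings are generated at build time from the C++ headers rather than shipped as .py source, and the OpenCV 3.x API exposes these parameters through setters on the matcher object. An untested sketch, with placeholder filenames:

    import cv2

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    stereo.setMinDisparity(0)       # instead of being fixed by a preset
    stereo.setUniquenessRatio(10)

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    disparity = stereo.compute(left, right)   # 16SC1, disparity scaled by 16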

Thanks

2015-04-28 12:58:17 -0600 answered a question CamShift on grayscale

I once watched a lecture which said that for meanshift (so pretty similar to camshift), using only the pixel intensity from a grayscale image is not sufficient. I previously tried implementing meanshift with just grayscale and it didn't work (the window just ended up chasing the object around the image). Loads of camshift examples out there track a coloured ball, but there the colour alone is distinctive enough. For a standard grayscale image from a normal camera, one thing to do would be to augment the pixel intensity with image gradient and orientation information (easily obtained with Sobel filters), along the lines of the sketch below.
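
A rough, untested sketch of that feature augmentation (the filename is a placeholder):

    import cv2
    import numpy as np

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # gradient magnitude and orientation from Sobel derivatives
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)

    # 3-channel feature image: intensity, edge strength, edge direction
    features = cv2.merge([gray, mag, ang])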

However, you mention a time-of-flight camera and in that situation I have no idea. Hopefully somebody with a bit more insight may be able to help.

2015-04-28 12:47:54 -0600 received badge  Scholar (source)
2015-04-02 14:06:59 -0600 asked a question Meanshift/Camshift just on segmented foreground?

Hi,

At the moment I'm detecting movement by segmenting foreground and background. This gives me blobs which, after I've tidied them up a bit, I'm trying to run camshift on. My probably very silly question is: should I run camshift on the back-projected foreground image (with the initial window around the blob), or on the original unsegmented back-projected image (again with the initial window around the blob location)?

Theoretically, which one is better? The foreground image seems best to me, as the target will most likely be surrounded by low-probability pixels, which encourages the gradient ascent towards the true target in the new frame. However, what do people normally do, and why?
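
To make the first option concrete: mask the back projection with the foreground before running camshift. A sketch, where frame, hist, fg_mask and track_window are assumed to exist already:

    import cv2

    # assumed inputs: frame (BGR), hist (hue model histogram),
    # fg_mask (8-bit foreground mask), track_window (x, y, w, h)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

    # option 1: zero out everything the segmentation calls background
    backproj = cv2.bitwise_and(backproj, backproj, mask=fg_mask)

    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term_crit)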

Thanks

2015-04-02 13:49:54 -0600 received badge  Supporter (source)
2015-04-02 13:49:08 -0600 received badge  Critic (source)
2015-04-02 13:49:07 -0600 asked a question What mathematically is back projection?

I'm able to use OpenCV's back projection, and I'm also able to implement it myself. However, I don't really understand why it works.

On the face of it, it just builds a histogram of a target image, turns it into a probability distribution, and then applies that PDF to a new image. I believe this is done in the hope that only the target shows up with high probability in the back-projected image.

However, on the docs page (http://docs.opencv.org/doc/tutorials/...) it says:

"In terms of statistics, the values stored in BackProjection represent the probability that a pixel in Test Image belongs to a skin area, based on the model histogram that we use."

I'm really struggling to interpret this, and in particular the phrase "represent the probability". There must be some formula that specifies it, something like:

prob("Pixel is from test image" | "New image pixel") = ?????

I just can't get my head around it though. Does anybody have any links or a good explanation of what the terms in the equation are?
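
One way to pin it down, hedged because the tutorial doesn't spell it out: if H is the model histogram normalised so its bins sum to 1, then the back projection at pixel p is just H(bin(I(p))), i.e. the likelihood P(pixel value | target model), not P(target | pixel value). A manual NumPy version for the hue channel, with placeholder image names:

    import cv2
    import numpy as np

    target = cv2.imread("target.png")   # placeholder: region known to be the object
    test = cv2.imread("scene.png")      # placeholder: new image to search

    # model histogram over hue, normalised so the bins sum to 1
    hsv_t = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_t], [0], None, [180], [0, 180]).ravel()
    hist /= hist.sum()

    # back projection is just a per-pixel lookup of that bin probability
    hue = cv2.cvtColor(test, cv2.COLOR_BGR2HSV)[:, :, 0]
    backproj = hist[hue]    # P(pixel value | target model) at every pixel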

Many thanks