
jappoz92's profile - activity

2017-09-12 04:34:19 -0500 received badge  Student (source)
2017-09-12 04:31:32 -0500 commented answer orientation angle of ellipse

my version is 3.2

2017-09-12 04:21:35 -0500 marked best answer orientation angle of ellipse

Hello everyone, I am having some trouble understanding how the angle parameter of the ellipse function actually works. According to the documentation it is measured anti-clockwise and refers to the main axis. Therefore, for instance, if I draw an ellipse of size (100,50) with an angle of 45 degrees, I expect it to lie in the first quadrant, but instead it lies in the second.

For instance this:

ellipse(im, Point(im.cols/2, im.rows/2), Size(100, 50), 45, 0, 360, Scalar(200,0,0));

leads to the image below.

Of course, if I swap the axes the orientation becomes correct, but this seems to contradict the image shown in the documentation. (opencv drawing doc)

What am I misunderstanding?

(screenshot: the resulting ellipse)

2017-09-12 04:21:35 -0500 received badge  Scholar (source)
2017-09-12 04:20:41 -0500 commented answer orientation angle of ellipse

I knew this, but if they say the rotation sense is anti-clockwise, this is a bit misleading, at least in my opinion. A…

2017-09-12 03:57:28 -0500 asked a question orientation angle of ellipse


2016-05-16 10:23:19 -0500 asked a question estimate motion of ROIs in Image

Hello everyone.
I am trying to detect pedestrians in real time with a camera mounted on a moving vehicle. My problem arises in the tracking phase.
Basically, I use the points in the point cloud obtained from a Lidar sensor to project ROIs into the image; the detection itself is then performed in the image.
The tracking is performed in the ground plane with the point cloud. So, when I get a detection window in the image from the pedestrian detector, the ROI that overlaps it the most is used as the reference point for tracking (I generate a ROI for each point in front of the car, so I can easily recover the point that generated that ROI).
The issue is that, for the Kalman filter to be effective and to perform data association correctly, I need several detections, while my detector provides only a few, non-consecutive ones. (For instance, a pedestrian walking in front of the stationary car leads to only three or four non-consecutive detections.)
This is a problem for data association because, instead of keeping track of the same pedestrian, my Kalman filter will instantiate a different track for each detection if they are too far from one another.
To get around this problem I thought of using Motion History Images (MHI): once a detection arises, I start creating "dummy detections" to feed the Kalman filter, using the motion estimated with the Motion History Image.
Is that feasible to do in real time?
Moreover, how could I create the masks needed to build the MHI? I am only interested in the motion of the ROIs in the image, not the whole image. The camera I am using is a color camera, so I would like to speed up the computation by considering only the ROIs, but the MHI needs the whole image, right?
I am confused about this, and it is not clear to me how to use the MHI to do what I want.
Could someone kindly give me some hints/feedback? Thank you