
lazarev's profile - activity

2019-02-22 03:23:08 -0600 received badge  Notable Question (source)
2016-07-19 01:46:53 -0600 received badge  Popular Question (source)
2015-12-27 08:15:53 -0600 received badge  Student (source)
2013-07-17 03:21:28 -0600 received badge  Supporter (source)
2013-07-17 03:19:49 -0600 received badge  Scholar (source)
2013-07-16 09:41:46 -0600 commented answer hog detectmultiscale

Thank you for your answer. I thought that detectMultiScale uses a sliding window that compares every region of my original image with the model. So if I understand you correctly, if the people I want to detect are only a very small part of the image (as in aerial imagery), this method can't perform very well? (The training was also done with an aerial imagery dataset.)

2013-07-16 09:15:26 -0600 asked a question hog detectmultiscale

Hello everyone,

I have a question about the hog.detectMultiScale method.

I'm trying to perform object detection using HOG and an SVM. I understand that computing the HOG for a 64x128-pixel window returns a 3780-element descriptor. The training was done with SVMLight, which returns a single vector of 3780 weights plus a bias.

What I don't understand is, when performing multiscale detection with a scale factor of 1.05 for example, how the comparison can be done when the detection window is bigger than the HOG window.

I don't think the descriptor computed from the detection window has the same number of elements as the descriptor computed from the HOG window.
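
For reference, here is a minimal sketch of that setup in OpenCV's Python bindings, assuming the standard Dalal-Triggs parameters for the 64x128 window and a hypothetical text file holding the 3780 SVMLight weights with the bias appended as the last value:

```python
import numpy as np
import cv2

# Standard Dalal-Triggs parameters for the 64x128 detection window.
hog = cv2.HOGDescriptor(
    (64, 128),   # window size
    (16, 16),    # block size
    (8, 8),      # block stride
    (8, 8),      # cell size
    9            # orientation bins
)
print(hog.getDescriptorSize())   # 3780 = 7 * 15 blocks * 4 cells * 9 bins

# Hypothetical file: the 3780 weights learned with SVMLight, with the bias
# appended as the final element (3781 values in total).
detector = np.loadtxt("svmlight_weights.txt", dtype=np.float32)
hog.setSVMDetector(detector)

img = cv2.imread("test.jpg")
# scale=1.05 is the pyramid scale factor from the question; detectMultiScale
# resizes the image across pyramid levels, not the 64x128 window.
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
```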

Thank you in advance.

2013-07-02 04:38:34 -0600 asked a question Object recognition from UAV imagery

Hello everyone!

I'm trying to perform object recognition from UAV videos. I've already tried machine learning approaches like HOG+SVM and Haar cascade classifiers, with more or less success.

Now I want to use image processing techniques with no learning step, to see if that can work. For example, I have boats on the sea and I want to recognize them (let's say a cargo ship for now).

First, I use image segmentation and some thresholding techniques to remove the sea, then I extract contours using findContours. Once I have the contour representing the cargo ship, I compute its Hu moments, which are in theory invariant to scale and rotation.

Beforehand, I computed the Hu moments of another cargo ship in the same way, and I compare the two using matchShapes(). This works well when the view is the same (matchShapes() ~= 0.2), but when the camera zooms out or turns, matchShapes() gives me bad results.
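
To make the pipeline concrete, here is a minimal sketch in OpenCV's Python bindings (OpenCV 4.x return signatures), using Otsu thresholding as a simple stand-in for the sea removal and hypothetical image file names:

```python
import cv2

def largest_contour(path):
    """Threshold out the sea and return the biggest remaining contour (the boat)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu thresholding as a simple stand-in for the segmentation / sea-removal step.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

boat_a = largest_contour("cargo_view1.png")   # reference cargo ship
boat_b = largest_contour("cargo_view2.png")   # cargo ship seen from another view

# The 7 Hu invariants of the reference contour, for inspection.
hu = cv2.HuMoments(cv2.moments(boat_a))

# matchShapes compares log-scaled Hu moments internally; values near 0 mean
# similar shapes (~0.2 in the same-view case mentioned above).
score = cv2.matchShapes(boat_a, boat_b, cv2.CONTOURS_MATCH_I1, 0.0)
print(hu.ravel(), score)
```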

My question: are there other features we can use for contour comparison that are more robust than Hu moments (I've heard about shape context, but it is not implemented in OpenCV...), or other techniques I can apply for this application? I want to be able to recognize other boats from a single example boat.

Thank you very much in advance.