2019-01-25 13:37:30 -0600 | received badge | ● Self-Learner (source) |
2019-01-25 13:37:30 -0600 | received badge | ● Teacher (source) |
2014-09-30 16:31:17 -0600 | commented question | Is it possible to run filter2D at just one point in an image? I've just seen a note in the documentation which suggests this will work by cropping a 1 px ROI - Note: When the source image is a part (ROI) of a bigger image, the function will try to use the pixels outside of the ROI to form a border. To disable this feature and always do extrapolation, as if src was not a ROI, use borderType | BORDER_ISOLATED. |
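The single-point filtering discussed in this question can be sketched in plain Python (an illustrative sketch only, not OpenCV code; `filter_at_point` is a hypothetical helper): the output at one pixel is just the sum of element-wise products between the kernel and the neighbourhood around that pixel, which is what cropping a 1 px ROI effectively computes.

```python
def filter_at_point(img, kernel, y, x):
    """Correlation (as cv::filter2D computes it) of `kernel` with `img`,
    evaluated at the single pixel (y, x).
    Assumes the kernel fits entirely inside the image at that point,
    i.e. no border extrapolation is needed."""
    kh, kw = len(kernel), len(kernel[0])
    ay, ax = kh // 2, kw // 2          # kernel anchor (centre)
    acc = 0.0
    for ky in range(kh):
        for kx in range(kw):
            acc += kernel[ky][kx] * img[y + ky - ay][x + kx - ax]
    return acc

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
box = [[1 / 9] * 3 for _ in range(3)]     # 3x3 box filter
print(filter_at_point(img, box, 1, 1))    # mean of all 9 pixels -> 5.0
```

Note that cv::filter2D actually computes correlation rather than true convolution (the kernel is not flipped), which the sketch above mirrors.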
2014-09-30 15:29:04 -0600 | asked a question | Is it possible to run filter2D at just one point in an image? I am currently running However, I am not interested in convolving the whole image with a kernel; I just want to run the convolution at a given point. Is this possible? |
2014-09-18 15:40:39 -0600 | commented question | filter2D and sepFilter2D, specifying the value for a constant border The source for filter2D does a copyMakeBorder anyway; perhaps this is cheaper than doing a range check on every iteration, as that would introduce branches (i.e. if inside, use the pixel value, else use a constant value). https://github.com/Itseez/opencv/blob/5f590ebed084a5002c9013e11c519dcb139d47e9/modules/ts/src/ts_func.cpp#L787 |
2014-09-16 14:33:09 -0600 | received badge | ● Scholar (source) |
2014-09-15 14:51:56 -0600 | answered a question | Does using Gabor Energy disregard the sign of the Gabor kernel? From looking at some sample source code I realised my mistake. https://github.com/juancamilog/gpu_convolve_test/blob/master/gpu_convolve_test.cpp The convolution step sums the products of corresponding pixels, which takes into account the sign of the Gabor kernel. |
2014-09-15 14:17:57 -0600 | commented answer | how to extract gabor feature using opencv? What are you supposed to do with |
2014-09-15 14:13:26 -0600 | asked a question | Does using Gabor Energy disregard the sign of the Gabor kernel? I am looking into feature extraction using Gabor filters. I believe the steps are:
I think I must be missing something, since if that were the case then the sign of the Gabor kernel would not matter. For example, if a pixel had an intensity of 0.1 and it was convolved with a Gabor kernel with a corresponding pixel value of 0.5, then the output would be This would be the same if the Gabor kernel had a value of -0.5. Therefore the sign of the Gabor kernel would not matter, and these two kernels would be effectively identical. |
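The resolution from the answer above can be checked numerically: because convolution sums signed products, flipping the sign of the kernel flips the sign of the response, and the sign is only discarded later, when responses are squared and summed into an energy. A pure-Python sketch (illustrative, not OpenCV code):

```python
def correlate(patch, kernel):
    """Sum of element-wise products between an image patch and a kernel,
    i.e. one output sample of a correlation/convolution."""
    return sum(p * k
               for prow, krow in zip(patch, kernel)
               for p, k in zip(prow, krow))

patch = [[0.1, 0.2],
         [0.3, 0.4]]
kern  = [[0.5, -0.5],
         [0.5, -0.5]]
neg   = [[-k for k in row] for row in kern]   # sign-flipped kernel

r_pos = correlate(patch, kern)   # signed response
r_neg = correlate(patch, neg)    # opposite sign, so NOT identical
energy_pos = r_pos ** 2          # energies ARE equal: sign lost here
energy_neg = r_neg ** 2
```

So the two kernels give different raw responses; they only look identical once you take the energy, which is the step that discards the sign.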
2014-09-13 16:26:16 -0600 | received badge | ● Nice Question (source) |
2014-09-13 06:52:30 -0600 | asked a question | Why is the default Gabor phase offset 90 degrees? This is using the default ( This is using a zero phase offset So it seems that the default phase offset (90 deg) removes the symmetry of the Gabor kernel. I have seen some other references where they use the same offset, so I guess it's a standard convention. Why is this the default? Is it generally more useful to have this for feature extraction? |
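The symmetry observation above can be checked with the standard 1-D Gabor formula g(x) = exp(-x²/2σ²)·cos(2πx/λ + ψ) (a sketch of the textbook formula, not cv::getGaborKernel itself): with ψ = 0 the kernel is even-symmetric, while with ψ = 90° the carrier becomes -sin, making the kernel odd-symmetric. One general rationale (a mathematical fact, not stated in the original) is that the odd kernel has zero mean, so it ignores uniform regions and responds to edges.

```python
import math

def gabor_1d(x, sigma=2.0, lam=4.0, psi=0.0):
    """1-D slice of a Gabor kernel: Gaussian envelope times a
    cosine carrier with phase offset psi (in radians)."""
    return math.exp(-x * x / (2 * sigma * sigma)) * \
           math.cos(2 * math.pi * x / lam + psi)

xs = range(-5, 6)
even = [gabor_1d(x, psi=0.0) for x in xs]          # g(-x) ==  g(x)
odd  = [gabor_1d(x, psi=math.pi / 2) for x in xs]  # g(-x) == -g(x)
```

Reversing `even` leaves it unchanged, while reversing `odd` negates it, confirming the symmetric/antisymmetric split.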
2014-03-19 12:29:07 -0600 | commented question | How do I pass ownership of pixel data to cv::Mat Thanks :) I was just frustrated since I asked the question and then figured it out. I'll post the code once I confirm it works. |
2014-03-19 11:30:37 -0600 | commented question | How do I pass ownership of pixel data to cv::Mat Ah, I think I am being an idiot... Instead of allocating the buffer myself, doing the external framework operation, and then passing the buffer in to the Mat constructor, I can get the Mat constructor to allocate the buffer for me and then pass that to the external framework. That way the buffer will be memory-managed by the Mat. |
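The inversion described in this comment, letting the managed object allocate the buffer and handing that buffer to the external code rather than wrapping an externally allocated buffer, can be sketched generically (an illustrative Python analogy; `ManagedImage` and `external_fill` are hypothetical stand-ins, not OpenCV or Accelerate API):

```python
class ManagedImage:
    """Stand-in for cv::Mat: it allocates and owns its pixel buffer,
    so the buffer's lifetime is tied to this object."""
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.data = bytearray(rows * cols)   # allocated and owned here

def external_fill(buffer):
    """Stand-in for an external framework that writes results into a
    caller-provided buffer (in place, no copy)."""
    for i in range(len(buffer)):
        buffer[i] = i % 256

# Instead of allocating the buffer ourselves and then wrapping it,
# let the managed object allocate, then point the external code at it:
img = ManagedImage(2, 4)
external_fill(img.data)    # external code writes directly into the owned buffer
```

This avoids both the ownership question and the copy: the external code works in place on memory the image object already manages.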
2014-03-19 11:19:23 -0600 | commented question | How do I pass ownership of pixel data to cv::Mat Good suggestion - that would do the job. However, the reason I have external data is that I want to see if Apple's Accelerate framework speeds up an affine transform. So ideally I would like to avoid the copy. |
2014-03-19 10:59:24 -0600 | asked a question | How do I pass ownership of pixel data to cv::Mat I am creating a cv::Mat, passing in pixel data that I have allocated externally. I would like the cv::Mat to take ownership of the bytes (i.e. create a refCount and free the bytes when it reaches zero). However, the documentation says
Is there a way to pass ownership? |
2014-01-15 14:37:55 -0600 | asked a question | What are the high level steps to classify a facial expression? I want to tell if a face in an image is smiling, winking, shouting, etc. I have made a program that works quite well, which does the following:
Are there other methods I could be using? I have seen SVMs and Gabor vectors mentioned elsewhere, but I don't know how they work. |
2013-12-22 07:24:36 -0600 | asked a question | How do I reduce the effects of lighting conditions for a face expression classification system? I have started making a system for telling what expression a given face has. This is my method: TRAIN CLASSIFIERS
USE CLASSIFIERS
So far it has worked quite well; however, it is a little sensitive to lighting conditions. I want to add some kind of filter before training and classification to reduce the effects of lighting conditions, but I am not sure which ones will work best:
What filters might improve the results? Or is there any other way to improve my face recognition system? |
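The candidate filters listed in the question were lost in this export, but one standard lighting-normalisation step for face images is histogram equalisation (cv::equalizeHist in OpenCV). A pure-Python sketch of the idea, assuming an 8-bit grayscale image flattened to a list:

```python
def equalize_hist(pixels):
    """Histogram-equalise a flat list of 8-bit grayscale values:
    remap intensities so the cumulative distribution becomes roughly
    linear, which reduces the effect of overall brightness/contrast
    differences between images."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    # build the cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:                 # flat image: nothing to equalise
        return list(pixels)
    scale = 255 / (n - cdf_min)
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

dark = [10, 10, 20, 20, 30, 40]      # a low-contrast "image"
print(equalize_hist(dark))           # stretched to span the full 0..255 range
```

After equalisation the intensities span the full range while preserving their ordering, so two photos of the same face under different overall lighting end up with more similar histograms.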
2013-12-19 16:12:34 -0600 | commented answer | Convert old haarcascade file to new cascade classifier format Hello - I am also trying to convert the old format into the new format, however, this answer still uses the |
2013-12-14 15:04:51 -0600 | asked a question | Why does opencv_traincascade not ignore nodes with a false alarm rate of 1 I'm using opencv_traincascade with a one-stage classifier. I don't really know how it works, but it seems to guess at rectangles ('features' in CV terminology?) to try to divide the positive samples from the negative samples.
Is my understanding correct? My output looks like this: Why does it not ignore feature/node/rectangle number 1 and number 2, since they appear to simply let through everything? |
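The hit-rate (HR) and false-alarm (FA) figures traincascade prints are just per-node acceptance rates: HR is the fraction of positives a node accepts, FA the fraction of negatives. A hedged sketch of how a single threshold feature would score (illustrative only, not the actual boosting code; note that in a boosted stage, weak learners vote as a weighted sum, so a node whose standalone FA is 1 can still contribute to the combined stage decision, which is one reason it is not simply discarded):

```python
def rates(pos_scores, neg_scores, threshold):
    """Hit rate and false-alarm rate of a simple threshold classifier:
    'accept' means score >= threshold."""
    hr = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    fa = sum(s >= threshold for s in neg_scores) / len(neg_scores)
    return hr, fa

pos = [0.9, 0.8, 0.7, 0.6]      # hypothetical feature responses on positives
neg = [0.5, 0.4, 0.65, 0.3]     # ... and on negatives

print(rates(pos, neg, 0.55))    # a useful threshold: HR 1.0, FA 0.25
print(rates(pos, neg, 0.0))     # accepts everything: HR 1.0, FA 1.0
```

A node reporting HR = 1 and FA = 1 in the log is letting every sample through on its own, exactly as observed in the question.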
2013-10-22 17:33:48 -0600 | commented question | What object detection algorithms used by OpenCV are patented? @stereomatching thanks for your reply. Forgive my ignorance, but what are SURF and SIFT? Are they used as part of any object detection algorithms? |
2013-10-22 16:51:15 -0600 | asked a question | What object detection algorithms used by OpenCV are patented? I read here that the HAAR tree-based detection might be covered by patents: http://rafaelmizrahi.blogspot.co.uk/2007/02/intel-opencv-face-detection-license.html What other algorithms are patented? What about Local Binary Patterns? |
2013-10-22 15:36:25 -0600 | commented question | CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale It worked. Got my answer in at last! |
2013-10-22 15:35:48 -0600 | answered a question | CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale The problem was that the image provided was at a different scale from the image that was detected. A classifier has a native window size that it must detect at. This can be exposed publicly in a subclass like so: Then it's just a case of resizing the image to the size of this window: This fixes the issue 90% of the time; however, sometimes it still won't fully detect the image. I think this is due to the averaging/merging of a few overlapping rectangles. |
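The resize step in the answer above amounts to computing the scale from the detected rectangle to the classifier's native training window before calling runAt. A small sketch of that arithmetic (pure Python; the helper name is hypothetical, though cv::CascadeClassifier does expose the training window via getOriginalWindowSize()):

```python
def scale_to_window(rect_w, rect_h, win_w, win_h):
    """Per-axis scale factors needed to resize a detected rectangle's
    sub-image to the classifier's native training-window size."""
    return win_w / rect_w, win_h / rect_h

# e.g. a 48x48 detection from detectMultiScale, but a classifier
# trained on a 24x24 window:
sx, sy = scale_to_window(48, 48, 24, 24)
print(sx, sy)   # 0.5 0.5 -> shrink the cropped region by half per axis
```

detectMultiScale hides this by scanning an image pyramid internally; runAt evaluates only at the native window size, so the caller must do the rescaling.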
2013-10-18 13:05:15 -0600 | commented question | CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale @barak - It says I have to wait 1 more day before I can answer it. It was really frustrating because it didn't tell me until I hit submit, and it deleted my (quite detailed) answer. But thanks for the up vote :) |
2013-10-18 05:32:26 -0600 | received badge | ● Student (source) |
2013-10-18 05:23:08 -0600 | commented question | CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale I have figured this out, but I can't post an answer because I am a new user :( |
2013-10-17 17:21:02 -0600 | commented answer | LBP based Face Detection Hello @venky, how do you specify the number of features to use at each stage? I could not see this in the documentation. http://docs.opencv.org/doc/user_guide/ug_traincascade.html |
2013-10-17 15:27:14 -0600 | received badge | ● Editor (source) |
2013-10-17 10:51:45 -0600 | asked a question | CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale I want to make a function that just runs the classifier once using This was my attempt (I had to make a subclass, since runAt is protected). To test this out, I got the result of a successful The results were: i.e. no success (1) results. This should always return success, since the same classifier has already found the result in that exact rectangle. Does anyone know what I have done wrong? |
2013-10-17 10:22:56 -0600 | asked a question | OpenCV traincascade gets stuck with a hit rate of 1 and a false alarm rate of 0 I want to train an LBP classifier. I have 103 positive and 500 negative samples. I used almost default values, except for The classifier gets stuck at stage 2, after stage 1 got a full hit rate and a zero false alarm rate. I tried playing with the numPos, numNeg, and bt parameters, but it always gets approximately the same result. I also tried a HAAR classifier; it got a little further (stage 4) but eventually got stuck in the same way. I don't know enough about this, but my guess is that it's complete after stage 1 and is still trying to generate 18 more stages. My data set is quite simple: the positives should all be quite similar, and the negative images are the same size. |
2013-10-15 07:27:11 -0600 | received badge | ● Supporter (source) |