2019-11-21 20:33:41 -0600 | received badge | ● Nice Answer (source) |
2017-06-30 04:51:49 -0600 | answered a question | How to Print Pixel Color Value C++ You don't have to pass the x,y coordinates as a Point. Just give the x and y values (note that Mat::at takes row, column, i.e. at<uchar>(y, x)).
2017-06-30 04:46:47 -0600 | answered a question | How to Print Pixel Color Value C++ |
2017-06-30 04:39:01 -0600 | answered a question | How to Print Pixel Color Value C++ { Mat image(100, 100, CV_8UC1, cv::Scalar(255)); int pixel_value = (int)image.at<uchar>(0, 0); }
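The same pixel-printing idea in OpenCV's Python API, where a cv::Mat is just a NumPy array (a minimal sketch; the image and coordinates are made up for illustration):

```python
import numpy as np

# A 100x100 single-channel image filled with 255, the NumPy analogue of
# Mat(100, 100, CV_8UC1, cv::Scalar(255)).
image = np.full((100, 100), 255, dtype=np.uint8)

# Indexing is [row, column], i.e. [y, x] -- no Point object needed.
x, y = 30, 40
pixel_value = int(image[y, x])
print(pixel_value)  # 255
```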
2017-03-21 23:39:09 -0600 | commented question | Opencv Face Detection Assign id for each face Here's a C++ implementation to give you an idea so you can port it to python. |
2017-03-20 01:37:54 -0600 | commented question | Opencv Face Detection Assign id for each face You will need a combination of a prediction algorithm and an assignment algorithm. A Kalman filter + Hungarian algorithm, for example.
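The assignment half of that combination can be sketched with SciPy's Hungarian-algorithm solver (the predicted and detected face centres below are hypothetical; in practice the predictions would come from per-track Kalman filters):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted track positions and fresh detections (x, y).
predictions = np.array([[100.0, 120.0], [300.0, 80.0]])
detections = np.array([[305.0, 78.0], [98.0, 125.0]])

# Cost matrix: Euclidean distance between every prediction/detection pair.
cost = np.linalg.norm(predictions[:, None, :] - detections[None, :, :], axis=2)

# Hungarian algorithm picks the globally cheapest one-to-one assignment.
track_idx, det_idx = linear_sum_assignment(cost)
pairs = [(int(t), int(d)) for t, d in zip(track_idx, det_idx)]
print(pairs)  # [(0, 1), (1, 0)]
```

Each track then feeds its assigned detection back into its Kalman filter as the measurement update.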
2017-03-17 05:18:08 -0600 | answered a question | I have the codes for live video and finding contours separately.I would like to know how both the codes can be stitched together to find contours in a live video Instead of finding them in a single image, you have to find the contours in every frame of the video. What you've done in main() in the second program has to be done in a loop. This will be the main function:
2017-02-16 01:00:56 -0600 | received badge | ● Critic (source) |
2017-02-10 23:24:11 -0600 | asked a question | Issue with Template Matching Hello, I'm working on a task where I'm required to track multiple faces. I'm using Haar cascades for face detection and a Kalman filter plus the Hungarian algorithm for tracking and assignment. This works fine for the most part. In situations where Haar fails to detect, like cases where the person is looking sideways, I use template matching with the last detection as the template, instead of using the predicted values of the Kalman filter, as those resulted in many erroneous tracks and affected the assignment. I update the template after each frame: if Haar succeeds, the detected face acts as the new template in the next frame; otherwise the face detected through template matching acts as the template. This also works well for the most part. The issues I'm facing are: 1) The template sometimes seems to stop updating itself, the background becomes the template, and this results in a false detection. Here's what I mean: 2) In the case of occlusion, template matching fails as well. When an occlusion occurs I store the last detection and tell the tracker to follow the Kalman prediction for about x frames (I've been trying a high value, around 60) until it gets a match with the last known detection as the template, or a Haar detection; otherwise the track gets removed. The track moves according to the Kalman prediction but doesn't find a template match (I've checked that the original template is right, and it is, but it still leads to false detection), and the track either gets assigned to some other face that was closer to its path when a Haar detection occurs, or is removed. And/or the same problem I mentioned in the previous point happens: the template loses the face, becomes part of the background, and the track just stays floating at that point. Are the problems I'm facing because of template matching? Should I consider using a feature-matching technique instead?
Or is it my implementation? I know this is a really long post and a very specific problem. I'd really appreciate any advice and tips! Thank you!! Oh, and: -The blue box in the photo I've posted is the search window for the template matching, instead of searching the entire frame. I've tried it without the window and the same problem occurs. framek is the current frame and previous_frame is the previous frame, both coming from the main function. -The 'rects' and 'detections' come from the Haar detections (the rects and center points of a face detection)
2017-02-09 22:42:16 -0600 | commented answer | How to find the hsv range of green gloves in a particular image? Approximately: Hmin 44, Hmax 71; Smin 54, Smax 255; Vmin 63, Vmax 255.
2017-02-09 06:25:02 -0600 | answered a question | how can read a video from file, select ROI in first frame with rectangular and track obj? You need to set your mouse callback function outside the loop. Capture the first frame, use the mouse callback to select your ROI, initialize your tracker, and then in the main loop grab each subsequent frame and update your tracker. OpenCV has examples for different trackers. Here's an example. But here's the code for selecting the ROI
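The callback-outside-the-loop flow can be sketched without a GUI by driving the callback by hand (the event codes below match cv2.EVENT_LBUTTONDOWN/EVENT_LBUTTONUP but are defined locally so the sketch is self-contained; the drag coordinates are made up):

```python
# Stand-ins for cv2.EVENT_LBUTTONDOWN and cv2.EVENT_LBUTTONUP.
EVENT_LBUTTONDOWN, EVENT_LBUTTONUP = 1, 4
roi = {}

def on_mouse(event, x, y):
    # Record the drag start and, on release, the finished rectangle.
    if event == EVENT_LBUTTONDOWN:
        roi["start"] = (x, y)
    elif event == EVENT_LBUTTONUP:
        x0, y0 = roi["start"]
        roi["rect"] = (min(x0, x), min(y0, y), abs(x - x0), abs(y - y0))

# Simulated drag on the first frame. With OpenCV this callback would be
# registered once via cv2.setMouseCallback, before the capture loop, and
# the loop would then only call tracker.update(frame).
on_mouse(EVENT_LBUTTONDOWN, 30, 40)
on_mouse(EVENT_LBUTTONUP, 130, 140)
print(roi["rect"])  # (30, 40, 100, 100)
```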
2017-02-08 23:07:59 -0600 | answered a question | How to find the hsv range of green gloves in a particular image? Try this code. These are the results I got for your images: |
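The thresholding step behind that answer, using the approximate HSV range given in the follow-up comment (H 44-71, S 54-255, V 63-255), can be sketched with the NumPy equivalent of cv2.inRange (the tiny two-pixel image is synthetic):

```python
import numpy as np

# Approximate HSV range for the green gloves.
lower = np.array([44, 54, 63], dtype=np.uint8)
upper = np.array([71, 255, 255], dtype=np.uint8)

# Synthetic HSV image: one "glove" pixel and one background pixel.
hsv = np.array([[[60, 200, 180], [10, 30, 40]]], dtype=np.uint8)

# NumPy equivalent of cv2.inRange(hsv, lower, upper):
mask = np.all((hsv >= lower) & (hsv <= upper), axis=2).astype(np.uint8) * 255
print(mask.tolist())  # [[255, 0]]
```

On a real image you would first convert BGR to HSV with cv2.cvtColor(img, cv2.COLOR_BGR2HSV) and then clean the mask with morphological operations.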
2017-01-09 23:08:13 -0600 | commented question | Background subtraction to detect cars on a road haha oops, sorry! I meant Yasser |
2017-01-09 04:50:41 -0600 | commented question | Background subtraction to detect cars on a road the link you've mentioned, what was the issue you were facing with that code? |
2017-01-09 04:37:55 -0600 | commented question | Roboust Human detection and tracking in a crowded area @hoang anh tuan Well, after a lot of trying and testing, I dropped the background subtraction method for my case. I'm just tracking faces instead |
2017-01-09 04:36:03 -0600 | commented question | Contours altering threshold image. it is working for me. can you post the screenshots of your results? |
2017-01-09 03:40:15 -0600 | commented question | Making image noise free |
2017-01-05 22:43:01 -0600 | commented answer | Hand detection not working. The only part of my code that returns -1 is the part that opens the camera. I was using a USB cam, so I used VideoCapture capture(1). It should be 0 if you're using the built-in webcam. Maybe that's the issue?
2017-01-05 06:30:45 -0600 | received badge | ● Teacher (source) |
2017-01-05 05:16:11 -0600 | commented answer | Why does the foreground image from background subtraction look transparent? I've been avoiding HOG because we have a whole bunch of processing on the detected person(age detection, gender detection, gesture detection) which are all, already computationally expensive and we need to run this code on a GPU. And also, HOG doesn't always detect people when only half their body is in frame |
2017-01-05 05:01:38 -0600 | commented answer | Why does the foreground image from background subtraction look transparent? This is what I'm working on. With tracking into play, I think I need some other heuristic especially if the shapes of my blobs change from one frame to another and the morphological operations don't always give a single blob. Anyhoo, Thanks for your help!:) |
2017-01-05 04:02:07 -0600 | commented question | Why does the foreground image from background subtraction look transparent? I know, but this acts as another argument, a strong one, against using background subtraction for human detection. That is, of course, if there is no way to overcome this problem, right?
2017-01-05 03:47:09 -0600 | commented question | Why does the foreground image from background subtraction look transparent? Ah! Can't believe I hadn't thought of that! I feel silly now! Thanks anyways! |
2017-01-05 02:50:53 -0600 | asked a question | Why does the foreground image from background subtraction look transparent? Hello! I've been working on background subtraction techniques to detect and track humans in a scene. Because I don't want the background model to change, I just subtract each frame from the first reference frame; I've tried both OpenCV's MOG2 (with the learning parameter set to zero) and absdiff() against the first frame. But the foreground images I get from both techniques look transparent, as in, part of the background can be seen through the person as shown below: Results with the absdiff() technique: -Original Image: -Foreground Image: Results with MOG2: -Original Image: -Foreground Image: And this is the background (reference) image for both methods: Does anyone know why this happens? I need to detect and track the people, for which I find the blobs, and because of the transparency each person is pretty much detected as two blobs, which messes up everything else. Thanks in advance!
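The "transparency" is what frame differencing does wherever the person's intensity happens to be close to the background behind them: the difference is small, falls below threshold, and leaves a hole in the blob. A minimal one-row sketch (synthetic values) of the absdiff + threshold step:

```python
import numpy as np

background = np.array([[100, 100, 100]], dtype=np.uint8)
frame = np.array([[30, 105, 220]], dtype=np.uint8)  # person over background

# Plain uint8 subtraction wraps around, so widen first; cv2.absdiff does
# the saturated version of this for you.
diff = np.abs(background.astype(np.int16) - frame.astype(np.int16)).astype(np.uint8)
fg_mask = (diff > 30).astype(np.uint8) * 255

# The middle pixel differs from the background by only 5, so it drops out
# of the mask: that is the "transparent" hole that splits the blob in two.
print(diff.tolist(), fg_mask.tolist())  # [[70, 5, 120]] [[255, 0, 255]]
```

Morphological closing on the mask can bridge small holes, but it cannot recover regions where the person genuinely matches the background.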
2017-01-05 01:38:21 -0600 | commented answer | How to eliminate people's shadow or light noise in background subtraction? Yup, threshold(). A value of around 200 for the thresh parameter and 255 for the maxVal parameter would work.
2017-01-04 23:29:24 -0600 | answered a question | How to eliminate people's shadow or light noise in background subtraction? If you keep the shadow-detection parameter set to true, you'd get a gray area like this: Threshold this image to remove the gray areas and make them black. As for removing the noise, you'll have to try different morphological operations like dilation and erosion to get rid of it. This is the result for the image above:
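The thresholding step works because MOG2 with shadow detection enabled labels foreground pixels 255 and shadow pixels with a mid-grey value (127 by default). A minimal sketch on a synthetic one-row mask:

```python
import numpy as np

# MOG2 with detectShadows=True: 0 = background, 127 = shadow, 255 = foreground.
fg_mask = np.array([[0, 127, 255, 127, 255]], dtype=np.uint8)

# Thresholding at ~200 turns the grey shadow pixels black;
# cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY) does the same.
clean = np.where(fg_mask > 200, 255, 0).astype(np.uint8)
print(clean.tolist())  # [[0, 0, 255, 0, 255]]
```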
2017-01-02 23:52:22 -0600 | answered a question | Hand detection not working. Try this code. It definitely works with a white background or any other color that isn't hard to separate from skin color using HSV thresholding. If you don't want it restricted to certain backgrounds, you should use some sort of skin detector first to get just the hand region and then find the contour. I've commented out the largest-contour-area part because I found that, for the lighting conditions I tested in, the code worked better without it. You can experiment with that and the morphological operations to see what suits your conditions best. See my answer here to a similar question for the results of this code on static images.
2016-12-29 22:58:33 -0600 | commented answer | Problem with HSV @Nuz, check the edit in my answer |
2016-12-29 01:54:11 -0600 | answered a question | tracking movement in a You could use background subtraction to detect the cat (blob detection) and use a tracking algorithm like a Kalman filter to keep track of the detection. As for counting, I guess you could try something like what's done in this video. He's drawn a line at the center of the screen and keeps count of the number of times a blob crosses that line (in your case it would be a vertical line near the side of the screen). Actually, he uses background subtraction and a different algorithm for keeping track of the blob, so you could check those out as well.
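The line-crossing count itself is simple once tracking gives you a position per frame: count each time consecutive positions straddle the line. A minimal sketch (the line position and the per-frame x coordinates of one tracked blob are made up):

```python
LINE_X = 100                      # hypothetical vertical counting line
positions = [80, 90, 98, 105, 120, 95, 99, 130]  # blob centre x per frame

count = 0
for prev, cur in zip(positions, positions[1:]):
    # A crossing happens when one position is left of the line and the
    # next is on or right of it (or vice versa).
    if (prev < LINE_X) != (cur < LINE_X):
        count += 1
print(count)  # 3
```

Tracking direction of crossing (left-to-right vs right-to-left) lets you count entries and exits separately.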
2016-12-29 01:22:31 -0600 | answered a question | Problem with HSV
This is something like what you want, right? It has a lot to do with the original image you've chosen as well. The illumination effects make it hard to get just the skin regions properly in your image; that's why your thresholded image isn't very good. edit: I guess this is the closest you'll get to what you want with your image. The two legs aren't detected separately because the original image isn't of the best quality. If this is the image you absolutely have to use, then keep trying different thresholds and morphological operations and then find the contour with the largest area.
2016-12-29 00:01:19 -0600 | commented question | Problem with HSV you need to threshold the HSV image to get only the skin regions and then find the contours |