2019-11-21 20:33:41 -0500 received badge ● Nice Answer (source)

2017-06-30 04:51:49 -0500 answered a question How to Print Pixel Color Value C++

You don't have to give the x,y coordinates as a Point. Just give the row and column values directly (note that at() takes the row index first, then the column):

    int main()
    {
        int pixel_value1 = 0, pixel_value2 = 0;
        Mat image = Mat(100, 100, CV_8UC1, Scalar(127));
        pixel_value1 = image.at<uchar>(10, 10); // if you know the position of the pixel
        cout << "pixel_value1: " << pixel_value1 << endl;
        for (int x = 0; x < image.rows; x++) // to loop through all the pixels
        {
            for (int y = 0; y < image.cols; y++)
            {
                pixel_value2 = image.at<uchar>(x, y);
                cout << "pixel_value2: " << pixel_value2 << endl;
            }
        }
        return 0;
    }

2017-06-30 04:46:47 -0500 answered a question How to Print Pixel Color Value C++

    int main()
    {
        int pixel_value1 = 0, pixel_value2 = 0;
        Mat image = Mat(100, 100, CV_8UC1, cv::Scalar(127));
        pixel_value1 = image.at<uchar>(10, 10); // if you know the position of the pixel
        cout << "pixel_value1: " << pixel_value1 << endl;
        for (int i = 0; i < image.rows; i++) // to loop through all the pixels
        {
            for (int j = 0; j < image.cols; j++)
            {
                pixel_value2 = image.at<uchar>(i, j);
                cout << "pixel_value2: " << pixel_value2 << endl;
            }
        }
        return 0;
    }

2017-06-30 04:39:01 -0500 answered a question How to Print Pixel Color Value C++

    int main()
    {
        int pixel_value = 0;
        Mat image = Mat(100, 100, CV_8UC1, cv::Scalar(255));
        pixel_value = image.at<uchar>(10, 10);
        cout << "pixel_value: " << pixel_value << endl;
        return 0;
    }

2017-03-21 23:39:09 -0500 commented question Opencv Face Detection Assign id for each face

Here's a C++ implementation to give you an idea so you can port it to Python.

2017-03-20 01:37:54 -0500 commented question Opencv Face Detection Assign id for each face

You will need a combination of a prediction algorithm and an assignment algorithm. A Kalman filter plus the Hungarian algorithm, for example.
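The prediction-plus-assignment idea in the comment above can be sketched without OpenCV. Below is a minimal, hypothetical example that pairs each track's predicted position with the nearest unclaimed detection; this is a greedy stand-in for the Hungarian algorithm, which instead finds the globally optimal pairing and handles crossing tracks better. All names here (Pt, assignDetections) are made up for illustration, not from the original code.

```cpp
#include <vector>
#include <cmath>
#include <limits>

struct Pt { double x, y; };

// Greedy nearest-neighbour assignment: for each track's predicted position,
// claim the closest detection that no earlier track has claimed.
// Returns, for each prediction, the index of its matched detection (-1 if none).
std::vector<int> assignDetections(const std::vector<Pt>& predictions,
                                  const std::vector<Pt>& detections)
{
    std::vector<int> match(predictions.size(), -1);
    std::vector<bool> used(detections.size(), false);
    for (size_t t = 0; t < predictions.size(); ++t) {
        double best = std::numeric_limits<double>::max();
        for (size_t d = 0; d < detections.size(); ++d) {
            if (used[d]) continue;
            double dist = std::hypot(predictions[t].x - detections[d].x,
                                     predictions[t].y - detections[d].y);
            if (dist < best) {
                best = dist;
                match[t] = static_cast<int>(d);
            }
        }
        if (match[t] >= 0) used[match[t]] = true;
    }
    return match;
}
```

In a real tracker the predictions would come from each track's Kalman filter, and unmatched detections would spawn new tracks.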
2017-03-17 05:18:08 -0500 answered a question I have the codes for live video and finding contours separately. I would like to know how both the codes can be stitched together to find contours in a live video

Instead of finding the contours on a single image, you have to find them on all the frames of the video. What you've done in main() in the second program has to be done in a loop. This will be the main function:

    int main()
    {
        VideoCapture cap(/* path to your video file, or 0/1 for webcam video */);
        for (;;)
        {
            /// Load source image
            cap >> src;
            /// Convert image to gray and blur it
            cvtColor(src, src_gray, CV_BGR2GRAY);
            blur(src_gray, src_gray, Size(3, 3));
            /// Create window
            const char* source_window = "Source";
            namedWindow(source_window, CV_WINDOW_AUTOSIZE);
            imshow(source_window, src);
            createTrackbar(" Threshold:", "Source", &thresh, max_thresh, thresh_callback);
            thresh_callback(0, 0);
            if (waitKey(27) >= 0)
                break;
        }
        return 0;
    }

(src, src_gray, thresh, max_thresh and thresh_callback are the globals from your second program.)

2017-02-16 01:00:56 -0500 received badge ● Critic (source)

2017-02-10 23:24:11 -0500 asked a question Issue with Template Matching

Hello, I'm working on a task where I'm required to track multiple faces. I'm using Haar cascades for face detection, and a Kalman filter with the Hungarian algorithm for tracking and assignment. This works fine for the most part. In situations where Haar fails to detect, like cases where the person is looking sideways, I use template matching with the last detection as the template instead of using the predicted values of the Kalman filter, as those resulted in many erroneous tracks and affected the assignment. I'm updating the template after each frame: if Haar succeeds, the detected face acts as the new template in the next frame; otherwise the face detected through template matching acts as the template. This also works well for the most part.

The issues I'm facing:

1) The template looks like it stops updating itself at times, the background becomes the template, and this results in a false detection. Here's what I mean:

2) In the case of occlusion, template matching fails as well. If an occlusion occurs, I store the last detection and tell the tracker to follow the Kalman prediction for about x frames (I've been trying a high value, around 60) until it gets a match with the last known detection as the template, or a Haar detection; otherwise the track gets removed. The track moves according to the Kalman prediction but doesn't find a template match (I've checked that the original template is right, and it is, but it still leads to a false detection), and if a Haar detection occurs the track gets assigned to some other face that was closer to its path, or the track is removed. And/or the same problem I mentioned in the previous point also happens: the template loses the face, becomes part of the background, and the track just stays floating at that point.

Are the problems I'm facing because of template matching? Should I consider using a feature matching technique instead? Or is it my implementation? I know this is a really long post and a very specific problem. I'd really appreciate any advice and tips! Thank you!!

Oh and:
- The blue box in the photo I've posted is the search window for the template matching, instead of searching the entire frame. I've tried it without the window and the same problem occurs. framek is the current frame and previous_frame is the previous frame, both coming from the main function.
- The 'rects' and 'detections' come from the Haar detections (the rects and center points of a face detection).

2017-02-09 22:42:16 -0500 commented answer How to find the hsv range of green gloves in a particular image?

Approximately: Hmin 44, Hmax 71; Smin 54, Smax 255; Vmin 63, Vmax 255.
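For reference, the core of template matching can be sketched without OpenCV: slide the template over every offset and keep the offset with the lowest sum-of-squared-differences score, the 1-D analogue of cv::matchTemplate with TM_SQDIFF. This minimal sketch (the function name bestMatchOffset is hypothetical) also hints at the drift failure described in the question above: when the template is re-cut each frame from a slightly wrong match, the error accumulates until the template is mostly background.

```cpp
#include <vector>
#include <cstddef>

// Sum-of-squared-differences template search over a 1-D signal.
// Returns the offset where the template fits best (lowest SSD), or -1
// if the template is longer than the signal.
int bestMatchOffset(const std::vector<int>& signal,
                    const std::vector<int>& tmpl)
{
    int bestOff = -1;
    long long bestScore = -1;
    for (std::size_t off = 0; off + tmpl.size() <= signal.size(); ++off) {
        long long score = 0;
        for (std::size_t i = 0; i < tmpl.size(); ++i) {
            long long d = signal[off + i] - tmpl[i];
            score += d * d;  // squared difference per element
        }
        if (bestScore < 0 || score < bestScore) {
            bestScore = score;
            bestOff = static_cast<int>(off);
        }
    }
    return bestOff;
}
```

In 2-D the idea is identical, with the template slid over both axes; restricting the scan to a search window around the last known position (like the blue box mentioned above) just limits the range of offsets tried.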
2017-02-09 06:25:02 -0500 answered a question how can read a video from file, select ROI in first frame with rectangular and track obj?

You need to set your mouse callback function outside the loop. Capture the first frame, use the mouse callback to select your ROI, initialize your tracker, and then, in the main loop, grab the next frame and update your tracker. OpenCV has examples for different trackers. Here's an example. But here's the code for selecting the ROI:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    using namespace cv;
    using namespace std;

    Point point1, point2;
    int drag = 0;
    Rect roi;
    Mat frame;

    void mouse(int event, int x, int y, int flags, void* param)
    {
        if (event == CV_EVENT_LBUTTONDOWN && !drag)
        {
            point1 = Point(x, y);
            drag = 1;
        }
        if (event == CV_EVENT_MOUSEMOVE && drag)
        {
            Mat img = frame.clone();
            point2 = Point(x, y);
            rectangle(img, point1, point2, Scalar(0, 0, 255), 1, 8, 0);
            imshow("Video", img);
        }
        if (event == CV_EVENT_LBUTTONUP && drag)
        {
            point2 = Point(x, y);
            drag = 0;
            roi = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
        }
        if (event == CV_EVENT_LBUTTONUP)
        {
            drag = 0;
        }
    }

    int main()
    {
        cout << "Select ROI and press any key to continue\n";
        VideoCapture cap(0);
        cap >> frame; // get the first frame
        imshow("Video", frame);
        setMouseCallback("Video", mouse, NULL); // set the mouse callback to select the ROI
        /* initialize your tracker here */
        waitKey(0);
        while (1)
        {
            cap >> frame; // capture the next frame
            /* update your tracker here */
            rectangle(frame, roi, Scalar(0, 0, 255), 1, 8, 0);
            imshow("Video", frame);
            if (waitKey(30) >= 0)
                break;
        }
        return 0;
    }

2017-02-08 23:07:59 -0500 answered a question How to find the hsv range of green gloves in a particular image?

Try this code. These are the results I got for your images:

2017-01-09 23:08:13 -0500 commented question Background subtraction to detect cars on a road

haha oops, sorry!
I meant Yasser

2017-01-09 04:50:41 -0500 commented question Background subtraction to detect cars on a road

About the link you've mentioned: what was the issue you were facing with that code?

2017-01-09 04:37:55 -0500 commented question Roboust Human detection and tracking in a crowded area

@hoang anh tuan Well, after a lot of trying and testing, I dropped the background subtraction method for my case. I'm just tracking faces instead.

2017-01-09 04:36:03 -0500 commented question Contours altering threshold image.

It is working for me. Can you post the screenshots of your results?

2017-01-09 03:40:15 -0500 commented question Making image noise free

2017-01-05 22:43:01 -0500 commented answer Hand detection not working.

The only part of my code that returns -1 is the part that opens the camera. I was using a USB cam, so I used VideoCapture capture(1). It should be 0 if you're using the webcam. Maybe that's the issue?

2017-01-05 06:30:45 -0500 received badge ● Teacher (source)

2017-01-05 05:16:11 -0500 commented answer Why does the foreground image from background subtraction look transparent?

I've been avoiding HOG because we have a whole bunch of processing on the detected person (age detection, gender detection, gesture detection), which are all already computationally expensive, and we need to run this code on a GPU. And also, HOG doesn't always detect people when only half their body is in frame.

2017-01-05 05:01:38 -0500 commented answer Why does the foreground image from background subtraction look transparent?

This is what I'm working on. With tracking in play, I think I need some other heuristic, especially since the shapes of my blobs change from one frame to another and the morphological operations don't always give a single blob. Anyhoo, thanks for your help! :)

2017-01-05 04:02:07 -0500 commented question Why does the foreground image from background subtraction look transparent?
I know, but this acts as another argument, a strong one, against using background subtraction for human detection. That is, of course, unless there is a solution to overcome this problem, right?

2017-01-05 03:47:09 -0500 commented question Why does the foreground image from background subtraction look transparent?

Ah! Can't believe I hadn't thought of that! I feel silly now! Thanks anyway!

2017-01-05 02:50:53 -0500 asked a question Why does the foreground image from background subtraction look transparent?

Hello! I've been working on background subtraction techniques to detect and track humans in a scene. Because I don't want the background model to change, and I just want to subtract the frames from the first reference frame, I've tried both OpenCV's MOG2 (with the learning parameter set to zero) and using absdiff() against the first frame to find the difference. But the foreground images I get from both techniques look transparent, as in, part of the background can be seen through the person, as shown below:

Results with the absdiff() technique:
- Original image:
- Foreground image:

Results with MOG2:
- Original image:
- Foreground image:

And this is the background (reference) image for both methods:

Does anyone know why this happens? I need to detect and track the people, for which I find the blobs, and because of the transparency a person is pretty much detected as two blobs, which messes up everything else. Thanks in advance!

2017-01-05 01:38:21 -0500 commented answer How to eliminate people's shadow or light noise in background subtraction?

Yup, threshold(). A value of around 200 for the thresh parameter and 255 for the maxVal parameter would work.

2017-01-04 23:29:24 -0500 answered a question How to eliminate people's shadow or light noise in background subtraction?

If you keep the shadow detection parameter as true, you'd get a gray area like this: Threshold this image to remove the gray areas and make them black.
As for removing the noise, you'll have to try different morphological operations like dilation and erosion to get rid of it. This is the result for the image above:

2017-01-02 23:52:22 -0500 answered a question Hand detection not working.

Try this code. It definitely works with a white background or any other color that isn't hard to separate from skin color using HSV thresholding. If you don't want it to be limited to certain backgrounds, you should use some sort of skin detector first to get just the hand region and then find the contour.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include "opencv2/imgproc/imgproc.hpp"

    using namespace cv;
    using namespace std;

    int H_MIN = 0;
    int H_MAX = 255;
    int S_MIN = 0;
    int S_MAX = 255;
    int V_MIN = 0;
    int V_MAX = 255;

    void on_trackbar(int, void*)
    {
    }

    void createTrackbars()
    {
        // create window for trackbars
        namedWindow("Trackbars", 0);
        // create memory to store trackbar names on the window
        char TrackbarName[50];
        sprintf(TrackbarName, "H_MIN %d", H_MIN);
        sprintf(TrackbarName, "H_MAX %d", H_MAX);
        sprintf(TrackbarName, "S_MIN %d", S_MIN);
        sprintf(TrackbarName, "S_MAX %d", S_MAX);
        sprintf(TrackbarName, "V_MIN %d", V_MIN);
        sprintf(TrackbarName, "V_MAX %d", V_MAX);
        // create trackbars and insert them into the window to change H,S,V values
        createTrackbar("H_MIN", "Trackbars", &H_MIN, H_MAX, on_trackbar);
        createTrackbar("H_MAX", "Trackbars", &H_MAX, H_MAX, on_trackbar);
        createTrackbar("S_MIN", "Trackbars", &S_MIN, S_MAX, on_trackbar);
        createTrackbar("S_MAX", "Trackbars", &S_MAX, S_MAX, on_trackbar);
        createTrackbar("V_MIN", "Trackbars", &V_MIN, V_MAX, on_trackbar);
        createTrackbar("V_MAX", "Trackbars", &V_MAX, V_MAX, on_trackbar);
    }

    int main()
    {
        Mat frame, HSV, thresh;
        Mat structuringElement3x3 = getStructuringElement(MORPH_RECT, Size(3, 3));
        vector< vector<Point> > contours;
        vector<Vec4i> hierarchy;
        createTrackbars();
        VideoCapture capture(1);
        if (!capture.isOpened())
        {
            return -1;
        }
        for (;;)
        {
            capture >> frame;
            imshow("Original_frame", frame);
            cvtColor(frame, HSV, CV_BGR2HSV);
            imshow("HSV_image", HSV);
            inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), thresh);
            imshow("threshold_image", thresh);
            erode(thresh, thresh, structuringElement3x3);
            erode(thresh, thresh, structuringElement3x3);
            dilate(thresh, thresh, structuringElement3x3);
            dilate(thresh, thresh, structuringElement3x3);
            dilate(thresh, thresh, structuringElement3x3);
            Mat result(thresh.size(), CV_8UC3, Scalar(0.0, 0.0, 0.0));
            findContours(thresh, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
            int largest_area = 0;
            int largest_contour_index = 0;
            for (int i = 0; i < (int)contours.size(); i++)
            {
                /*double area = contourArea(contours[i]);
                if (area > largest_area)
                {
                    largest_area = area;
                    largest_contour_index = i;
                }*/
                drawContours(result, contours, i, Scalar(255.0, 255.0, 255.0), 1, 8);
            }
            //drawContours(result, contours, largest_contour_index, Scalar(255.0, 255.0, 255.0), 1, 8);
            imshow("contours_image", result);
            if (waitKey(30) >= 0)
                break;
        }
        return 0;
    }

I've commented out the largest-contour-area part because I found that, for the lighting conditions I tested in, the code worked better without it. You can experiment with that and with the morphological operations to see what suits your conditions best. See my answer here to a similar question for the results of this code on static images.

2016-12-29 22:58:33 -0500 commented answer Problem with HSV

@Nuz, check the edit in my answer

2016-12-29 01:54:11 -0500 answered a question tracking movement in a

You could use background subtraction to detect the cat (blob detection) and use a tracking algorithm like a Kalman filter to keep track of the detection. As for counting, I guess you could try something like what's done in this video. He's drawn a line at the center of the screen and keeps count of the number of times a blob crosses that line (in your case it would be a vertical line near the side of the screen).
Actually, he uses background subtraction and a different algorithm for keeping track of the blob, so you could check those out as well.

2016-12-29 01:22:31 -0500 answered a question Problem with HSV

    #include "opencv2/core/core.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"

    using namespace cv;
    using namespace std;

    int H_MIN = 0;
    int H_MAX = 255;
    int S_MIN = 0;
    int S_MAX = 255;
    int V_MIN = 0;
    int V_MAX = 255;

    void on_trackbar(int, void*)
    {
    }

    void createTrackbars()
    {
        // create window for trackbars
        namedWindow("Trackbars", 0);
        // create memory to store trackbar names on the window
        char TrackbarName[50];
        sprintf(TrackbarName, "H_MIN %d", H_MIN);
        sprintf(TrackbarName, "H_MAX %d", H_MAX);
        sprintf(TrackbarName, "S_MIN %d", S_MIN);
        sprintf(TrackbarName, "S_MAX %d", S_MAX);
        sprintf(TrackbarName, "V_MIN %d", V_MIN);
        sprintf(TrackbarName, "V_MAX %d", V_MAX);
        // create trackbars and insert them into the window to change H,S,V values
        createTrackbar("H_MIN", "Trackbars", &H_MIN, H_MAX, on_trackbar);
        createTrackbar("H_MAX", "Trackbars", &H_MAX, H_MAX, on_trackbar);
        createTrackbar("S_MIN", "Trackbars", &S_MIN, S_MAX, on_trackbar);
        createTrackbar("S_MAX", "Trackbars", &S_MAX, S_MAX, on_trackbar);
        createTrackbar("V_MIN", "Trackbars", &V_MIN, V_MAX, on_trackbar);
        createTrackbar("V_MAX", "Trackbars", &V_MAX, V_MAX, on_trackbar);
    }

    int main()
    {
        Mat image, HSV, threshold;
        vector< vector<Point> > contours;
        vector<Vec4i> hierarchy;
        createTrackbars();
        image = imread("thumb.jpg");
        imshow("Original_image", image);
        cvtColor(image, HSV, CV_BGR2HSV);
        imshow("HSV_image", HSV);
        for (;;)
        {
            inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), threshold);
            imshow("HSV_threshold", threshold);
            Mat result(threshold.size(), CV_8UC3, Scalar(0.0, 0.0, 0.0));
            findContours(threshold, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
            drawContours(result, contours, -1, Scalar(255.0, 255.0, 255.0), 1, 8);
            imshow("contours_image", result);
            if (waitKey(30) >= 0)
                break;
        }
        return 0;
    }

This is something like what you want, right? It has a lot to do with the original image you've chosen as well. The illumination effects make it hard to get just the skin regions in your image; that's why your threshold image isn't too good.

edit: I guess this is the closest you'll get to what you want with your image. The two legs aren't detected separately because the original image isn't of the best quality. If this is the image you absolutely have to use, then keep trying different thresholds and morphological operations, and then find the contour with the largest area.

2016-12-29 00:01:19 -0500 commented question Problem with HSV

You need to threshold the HSV image to get only the skin regions and then find the contours.
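As a plain-C++ illustration of what cv::inRange does per pixel (a minimal sketch; the function name inRangePixel is made up for this example, it is not an OpenCV API): a pixel passes only if every channel lies inside its [min, max] band, and the output mask is 255 where it passes and 0 elsewhere. This is why a tight H/S/V band isolates skin or glove regions while a loose one lets the background through.

```cpp
// Per-pixel logic behind cv::inRange, written without OpenCV.
// Returns 255 if every channel of the (h, s, v) pixel lies inside its
// [min, max] band, and 0 otherwise - the same rule inRange applies to
// every pixel of the image to build the binary mask.
unsigned char inRangePixel(int h, int s, int v,
                           int hMin, int sMin, int vMin,
                           int hMax, int sMax, int vMax)
{
    bool inside = h >= hMin && h <= hMax &&
                  s >= sMin && s <= sMax &&
                  v >= vMin && v <= vMax;
    return inside ? 255 : 0;
}
```

For example, with the green-glove range quoted in an earlier comment (H 44-71, S 54-255, V 63-255), a pixel with HSV (50, 100, 200) lands inside the mask, while (10, 100, 200) does not, because its hue falls outside 44-71.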