2020-08-30 04:40:55 -0600 | received badge | ● Notable Question (source) |
2018-11-15 09:03:03 -0600 | received badge | ● Notable Question (source) |
2018-10-31 07:43:15 -0600 | received badge | ● Popular Question (source) |
2016-12-13 17:14:36 -0600 | received badge | ● Popular Question (source) |
2016-02-10 04:13:05 -0600 | commented answer | What is the best solution for rotation invariant detector? Hi Lizris, I know this is kind of an old question. Did you find a workable answer to this? |
2015-06-24 04:14:18 -0600 | commented question | What method to use to get a fitted curve Some of the points on the curve are very close together and a few are far apart. So, on certain frames this would work. But in many frames the curve will not be smooth, since I will have to reject a lot of the points in between. What seems like a simple thing for the human brain (taking a pencil and smoothing out the curve) is tough to implement. :) At least for me |
2015-06-23 10:40:51 -0600 | commented question | What method to use to get a fitted curve Hi, thanks for your inputs, as always. I am applying a color filter to get the PB, then removing noise by deleting regions with small area. I have not used convexHull, though; I will try it and see if I get something different with that. I have already used the technique with the angle between points, though with a slightly different approach: I check that the angle between points is not above or below a threshold and that the slope of the line is continuous. It is tough to do background extraction, as the content on the PB changes with the moving camera. I THINK (I could be way off, though) all I need is a neat method to fill in the spikes and make it a smooth curve. That is the implementation I am looking for. Thanks |
2015-06-23 05:05:04 -0600 | commented question | What method to use to get a fitted curve I can extract the bottom border as well, but that's not straight either, with the players and other obstacles in between. So, my goal was to run through this step twice: once for the top and again for the bottom. Do you think I should follow a different approach? |
2015-06-23 04:26:19 -0600 | commented question | What method to use to get a fitted curve |
2015-06-23 02:02:24 -0600 | asked a question | What method to use to get a fitted curve Hi, I am looking at a video of, say, an ice hockey game. My goal is to detect the external boundary of the perimeter board, but of course there are players blocking the view. I was able to set a threshold and generate contours of the visible perimeter board. I can detect the contour reasonably well, but the contours are not smooth: when I draw them, the edges are jagged. What method can I use to approximate and draw a smooth fitted curve? The curve need not pass through all the points, per se. I have tried the approxPolyDP function, but it does not provide a good output for my requirement. |
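One way to get a fitted curve that need not pass through every contour point, as asked above, is a low-degree polynomial fit with outlier rejection. This is a minimal NumPy sketch, assuming the board's top edge can be treated as y = f(x); the data below is synthetic, not from the actual video:

```python
import numpy as np

# Hypothetical jagged edge: y = f(x) samples of a board's top edge
# with spikes from occluding players (values are made up).
x = np.arange(0, 100, dtype=float)
y = 0.002 * (x - 50) ** 2 + 10.0   # smooth underlying curve
y[30] += 15.0                      # spike from an occlusion
y[70] -= 12.0                      # another spike

# Fit a low-degree polynomial; unlike approxPolyDP, the fitted curve
# need not pass through every point, so isolated spikes get averaged out.
coeffs = np.polyfit(x, y, deg=2)
y_smooth = np.polyval(coeffs, x)

# Iterate once: drop points far from the fit and refit (crude robust fit).
residuals = np.abs(y - y_smooth)
keep = residuals < 3.0 * residuals.std()
coeffs = np.polyfit(x[keep], y[keep], deg=2)
y_smooth = np.polyval(coeffs, x)
```

For contours that are not a function of x, the same idea can be applied to a parametric fit (x(t), y(t)) or to a spline; the degree and the 3-sigma cutoff here are illustrative choices, not tuned values.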
2015-02-10 01:25:09 -0600 | received badge | ● Enthusiast |
2015-02-06 04:20:01 -0600 | asked a question | I am confused on what could be a simple thing Folks, for the last couple of days I have been working with images, detection, and contours, and that has left me confused. So a pointer, or a correction, would be of great help. I have two images: 1) say, a logo, and 2) contours of interest in another Matrix. Now, I want the resultant image such that the contour is overlaid on the logo. Essentially, in all the pixels where the contour has a value, the resultant image should have the value of the contour and not that of the logo. Help please! Karthik |
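The overlay asked about here (contour pixels win, logo pixels elsewhere) can be sketched as a mask copy. A minimal NumPy example with made-up 4x4 single-channel images of the same size:

```python
import numpy as np

# Stand-in images; in the real case both come from the pipeline.
logo = np.full((4, 4), 100, dtype=np.uint8)    # "logo" pixels
contour_img = np.zeros((4, 4), dtype=np.uint8)
contour_img[1, 1:3] = 255                      # "contour" pixels

# Wherever the contour has a value, take it; elsewhere keep the logo.
result = np.where(contour_img > 0, contour_img, logo)
```

In OpenCV terms this is the same as `contour_img.copyTo(logo, mask)` in C++ with the nonzero pixels as the mask, or `logo[contour_img > 0] = contour_img[contour_img > 0]` in Python.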
2015-02-03 12:38:19 -0600 | commented question | Method to detect the perimeter board in an Ice Hockey game image Hi thdrksdfthmn, thanks for the tip. But wouldn't that just get me the yellow region? How would I get to the end result? Karthik |
2015-02-03 07:31:44 -0600 | asked a question | Method to detect the perimeter board in an Ice Hockey game image Hi all, I need to detect the boundary of the ad board in an ice hockey arena. The perimeter board always has a yellow line along the bottom. Of course, there is a lot of noise due to the players, the hockey sticks, the goal post, the referee, and the brands on the display. The step-by-step approach I followed: 1) Using the HSV image, apply a threshold to mark the yellow region of the image. Challenge: due to the players, the contour is not always continuous. How do I merge the contours? 2) Apply another threshold on the original image so that I can segregate the spectators (from the top of the perimeter board upward). Challenge: I am still left with a lot of players in the arena; please see point 4 for my approach to this. 3) Combine the filtered images from steps 1 and 2. 4) Use HOG human detection to mark the players, in the hope that I can draw contours for each detected player, mask them, and remove them from the original picture (this part is yet to be done). I will upload my initial image and the output of my effort so far. Is there a more intelligent approach to solve this? Original Image With Yellow Threshold With Threshold for top edge of the Perimeter board What I actually want finally (Tall ask!?) |
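Step 1 above (marking the yellow region in HSV) is usually done with an in-range mask. A minimal sketch, using NumPy to stand in for `cv2.inRange`; the bounds are illustrative assumptions, not tuned values for this arena:

```python
import numpy as np

# Tiny stand-in HSV image (OpenCV convention: H in 0-179, S and V in 0-255).
hsv = np.zeros((2, 2, 3), dtype=np.uint8)
hsv[0, 0] = (30, 200, 200)    # a "yellow" pixel (H ~ 30)
hsv[1, 1] = (120, 200, 200)   # a "blue" pixel, should be rejected

# Illustrative yellow range; real thresholds need tuning on the footage.
lower = np.array([20, 100, 100], dtype=np.uint8)
upper = np.array([40, 255, 255], dtype=np.uint8)

# Equivalent to mask = cv2.inRange(hsv, lower, upper):
mask = np.all((hsv >= lower) & (hsv <= upper), axis=2).astype(np.uint8) * 255
```

For the "merge the broken contours" challenge, a common follow-up is to dilate (or morphologically close) this mask so fragments separated by occluding players join into one blob before finding contours.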
2014-12-01 03:59:34 -0600 | received badge | ● Supporter (source) |
2014-11-19 02:27:42 -0600 | commented answer | Overlay in video after detection Hi petititi, Thanks for your suggestion. I have not used KalmanFilter. I will try it out and will let you know . Thanks |
2014-11-19 02:24:46 -0600 | commented question | Overlay in video after detection Hi Steve, thanks for your suggestion. After posting this question, I started buffering the video for x frames (say, amounting to 10 secs). I then find the average ROI over those x frames and overlay on that. This reduces the jitteriness within those x frames. But when I then buffer frames x+1 to 2x and overlay, the transition from the first clip to the second shows an obvious JUMP, as the ROI of the (x+1)th frame is larger than the average of the first x frames. So I am trying to see how best to smooth that out. In the source video, the camera zooms in on my ROI, so the matching rectangle size keeps increasing. In general, with OpenCV, if I overlay an image that zooms/grows every frame, is this behavior expected? Would using OpenGL to render the overlay image be any better? |
2014-11-19 02:09:50 -0600 | received badge | ● Scholar (source) |
2014-10-21 03:34:13 -0600 | asked a question | Overlay in video after detection Hi, I am using a cascade classifier (LBP) to identify a logo in a video. The video focuses on a brand at different zoom levels. I want to replace this logo with a different image. Depending on the success of the logo match, I am able to achieve this by copying the image onto the matched region as detected by the cascade classifier. The problem is that since I am replacing the ROI with my image on every frame, the modified video has a very jittery look (the matches have slightly varying bounding boxes). Is there a standard approach to address this? I am sure there are hundreds out there who have faced this issue, so I am trying to get some ideas here. Thanks |
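A common way to reduce this kind of jitter is to smooth the detected bounding box across frames instead of drawing each raw detection; an exponential moving average is the simplest version (a Kalman filter is the heavier-weight form of the same idea, and handles the zooming ROI better since it models velocity). A minimal sketch with hypothetical detections:

```python
def smooth_box(prev, detected, alpha=0.3):
    """Blend the new detection into the running estimate.

    prev / detected are (x, y, w, h) tuples; alpha controls how fast
    the overlay follows the detector (smaller = smoother but laggier).
    """
    if prev is None:
        return detected
    return tuple(alpha * d + (1.0 - alpha) * p for p, d in zip(prev, detected))

# Hypothetical per-frame detections jittering around (100, 50, 80, 30):
detections = [(100, 50, 80, 30), (104, 48, 83, 29), (98, 52, 78, 31)]
box = None
for det in detections:
    box = smooth_box(box, det)
    # ...draw the replacement image into `box` instead of `det`...
```

This removes the per-frame jump without the block-transition JUMP of windowed averaging, since the estimate updates continuously; alpha = 0.3 is an illustrative value.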
2014-09-02 06:00:08 -0600 | commented answer | Need tips for a localized cascade detection Thanks for your response, thdrksdfthmn. I will try this out and will let you know. |
2014-09-02 02:32:34 -0600 | commented answer | Need tips for a localized cascade detection Thanks for your response, thdrksdfthmn. I understand that I need to feed my ROI as the input to the detectMultiScale function. My concern is how I would programmatically identify the region that lies above the thresholded sequence of near-white cells. Please see the image attached in Dropbox: https://www.dropbox.com/s/qi5ldvgr8v4wela/usefordilation-2.jpg?dl=0 I have manually drawn a red line in the image. My ROI is the space above the red line (a lot of consecutive white spaces). How would I programmatically achieve this? Should I check the average pixel value of each row and check whether it is close to 255? Or is there a better, more intelligent way? |
2014-09-01 06:53:17 -0600 | asked a question | Need tips for a localized cascade detection Hi, I use a cascade classifier to detect patterns of interest in a video. I need to limit my detection to certain areas of the image. The coordinates change based on the view from which the photo was taken. Nevertheless, by applying a few filters / erosions, I am able to arrive at a pattern where I get a thick white patch (close to pixel value 255) across the image that separates my region of interest from the region I am not interested in. How should I programmatically choose the region "above" this white patch and ignore the rest of the image? Any ideas? A more practical example: imagine a picture taken in a football/soccer stadium. The image has a lot of spectators, a view of the playing field, and the sky above. I apply certain filters to mark the area of the spectators as a white region, but I need my cascade classifier to detect only the region above this patch (the sky) and ignore any patterns below it. Any ideas? Please let me know if I need to provide any additional information. |
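One way to programmatically pick the region above the white patch is the row-mean idea floated in the comments: compute each row's average intensity, find the first near-white row, and crop everything above it. A minimal sketch on a synthetic image; the 230 threshold is an assumption to tune:

```python
import numpy as np

# Synthetic stand-in for the filtered image described above.
img = np.zeros((10, 20), dtype=np.uint8)
img[0:4] = 50     # "sky" region we want to keep
img[4:6] = 250    # the thick near-white separator patch
img[6:] = 120     # spectators / field, to be ignored

row_means = img.mean(axis=1)
white_rows = np.where(row_means > 230)[0]   # threshold is an assumption

if white_rows.size:
    roi = img[: white_rows[0]]   # everything above the patch
else:
    roi = img                    # no patch found: fall back to full image
```

For robustness against a few bright pixels elsewhere, one could require a run of several consecutive rows over the threshold before accepting the patch; the resulting `roi` is then what gets passed to detectMultiScale.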
2014-08-25 05:48:41 -0600 | commented answer | Weird results from cascade training Dear cmyr, How did your results fare? Can you please share your lessons learnt? ( I know this is about a year old). But, whatever you can recollect, it might be helpful for me. Thanks samjakar |
2014-08-20 06:13:27 -0600 | commented question | Train cascade detecting image in various angles Anybody out there who can answer this!? |
2014-08-18 02:06:55 -0600 | commented question | Train cascade detecting image in various angles Can any Cascade training experts offer their suggestion, please? If you'd like for the question to be broken down, please let me know. Thanks |
2014-08-13 05:03:43 -0600 | asked a question | Train cascade detecting image in various angles Using train_cascade (LBP), I am trying to detect a brand name at an outdoor event, captured on video. The brand name can appear in the following orientations: 1) Straight, upright on a vertical board. 2) Painted on the ground, so when an upright camera focuses on it, the viewing angle is slightly different and the brand name can appear larger and a little stretched. 3) On the ground with the logo/brand name rotated by, say, 70-80 degrees. If the brand name is "MYBRANDNAME", imagine it on the ground with the M starting at the top-left corner and the E ending in the middle of the screen. Another variation could run from the center of the screen to the top-right corner. 4) Also, when the brand name is shot from a chopper or from the top of a high-rise, there are numerous zoom possibilities. Those are the scenarios; my questions are: 1) I can handle #1 in most cases, so no issues there. 2) Will the training for #1 be able to detect the cases in #2? 3) I assume cascade training is not rotation/scale invariant, so the same cascade may not work for #3 and #4. What is the best way to approach this training, and how many samples might I need to get a good cascade? 4) Finally, if I extract images from a video of this event, should I use consecutive frames as positives? They will have slowly changing size/angle; to a human eye the first and last frames could look different, but the in-between frames will be almost identical. Is it worthwhile to use all those frames, or since consecutive frames are so similar, is it best to disregard some of them? Thanks in advance for your responses. samjakar |
2014-07-15 01:58:20 -0600 | commented answer | template matching angle and scale invariant Hi grobian, Did you have any luck with this? Please let me know when you get a chance. |
2014-06-09 04:51:11 -0600 | commented answer | Tiff images, DPI and compression Hi EK, I know I am very late in responding to the question. Let me know if you still needed some response here or if you have found out what you wanted to know. A lot of developments at work and life has been hectic. :). I apologize. Samjakar |
2013-10-28 04:39:09 -0600 | answered a question | Very simple TrainCascade not working Hi Jean, this may very well be resolved by now, but I am responding since I was looking for answers to my own questions :). Is there a typo in the question, or is it just a bad command? opencv_traincascade -data resultat -vec pos.vec -featureType LBP -bg negative.txt -numPos 1 -numNeg 1519 The numPos is just "1". I think that could be your problem, unless it was a typo when you posted on this forum. Karthik |
2013-10-28 02:53:40 -0600 | received badge | ● Editor (source) |
2013-10-28 01:59:27 -0600 | commented answer | Minimize False Positives using opencv_traincascade Steve, very insightful. 1) The orientation I am interested in is only about a 60-70 degree rotation, with the camera shooting from different positions; not a full 180-degree turn. 2) You have mentioned that I should use "samples that have actual a meaning". Say, for example, I am looking for billboards on the street: a valid bg image would be roads, buildings, sky, etc., but without the logo of interest, is that right? 3) "Using a single image and transform it to find the logo, only works in very controlled testing situations". Can I use this method if I know the background where I'd be detecting, like detecting brands on billboards/hoardings on the streets? Is it possible for me to combine the results of createsamples with one image and with 1000 other images? Thanks Karthik |
2013-10-28 01:47:20 -0600 | commented answer | Minimize False Positives using opencv_traincascade Hi Steve, I had not subscribed to this question. So, I had not noticed you'd answered. I'll review your comments and will revert back. Thanks Karthik |
2013-10-28 01:43:24 -0600 | asked a question | OpenCV createsamples dependencies Hi, I have been through various forums and have not come across a definitive answer to this question. I am in the process of recognizing static text/images against various backgrounds. The only kinds of variation these will undergo are rotation, scaling, and blurriness, so it is not as complex as detecting a face or some other generic object. But here comes my dilemma. 1) HOW TO CREATE SAMPLES (partly already answered by Steve)? When training for such images, should I use one good image and let createsamples generate multiple images for me (with automatic distortions and scalings)? Or should I physically take hundreds of such images and train on them? If I choose the images manually, should I then also choose images at various scales manually, since createsamples will not perform automatic distortion when I choose the "-i" parameter? Part of this question was already answered by Steve on another question of mine. The only part I am not clear on: if I use images with SOME rotations (say, 1-15 degree angles) and create an XML with this data, will the process be able to detect images that have been rotated by, say, 40 degrees? 2) DEPENDENCY ON THE QUALITY OF THE IMAGE USED FOR TRAINING So far I have been using large images with the image of interest in them. I mark the ROI in all the image files into a text file (using object_marker) and then use it in createsamples. I create about 200 such positive samples using the command opencv_createsamples -info walker.txt -vec walkerw50h15_1.vec -w 50 -h 15 -num 230 I then use the VEC file for my training. My question is: would it matter whether I choose an image from an HD sample video, or would just any BMP file do? Does that matter at all? 3) DEPENDENCY ON THE QUALITY OF THE INPUT VIDEO Say I train using traincascade with my own sample images (without applying any OpenCV distortions); will the resultant XML file be good for ANY quality of video, HD (16:9) or SD (4:3)? Is there a dependency at all? The reason I ask these questions is that I have trained on numerous images and the analysis varies from video to video. I am not getting consistency, so I am trying to understand the variables. |
2013-10-17 04:42:36 -0600 | asked a question | Minimize False Positives using opencv_traincascade I am using OpenCV to detect a brand logo in different possible orientations. 1) Since the logo is a constant image, I used opencv_createsamples with a single logo image, with the default value for the -num parameter (which amounts to 1000). My syntax is: opencv_createsamples -img ../brandname.jpg -vec brandnamew100h30.vec -w 100 -h 30 I created a VEC file with width 100 and height 30; the actual image dimensions are bigger than the size used here. Note that I am NOT using background images for the sample creation: the samples contain just the brand, not backgrounds. 2) I then use opencv_traincascade to train on this VEC file: opencv_traincascade -data "brandxml" -vec brandnamew100h30.vec -bg ../tobmp/neglist.txt -numPos 600 -numNeg 300 -w 100 -h 30 -featureType LBP The bg file here contains random background images, not relevant to where the brand might appear (in the test video below). When I try to match images/videos using the resultant XML, I get a lot of false positives. Any suggestions on how to decrease the false positives? Karthik |
2013-08-06 06:45:20 -0600 | received badge | ● Student (source) |
2013-08-06 06:40:37 -0600 | asked a question | Tiff images, DPI and compression Hi, I am working with a combination of OpenCV 2.4.0 and Leptonica, processing TIFF images. At the end of the image processing, I notice that the DPI of the resultant image (created using "imwrite") has come down from 150 to 96, and that the image has been compressed with the LZW method even though the original image was uncompressed. Also, the bit depth has come down from 32 (as on the original image) to 8. Are there ways to fix these three? |
2013-07-01 07:05:11 -0600 | asked a question | Video Analyzing on a Multiprocessor server/Rack Hi, for a very long time I have been in a world far from systems programming, dealing just with databases and related applications. I am now getting into image processing and systems programming with my team's help, so pardon my ignorance if my question below is trivial. I need to analyze near-live videos on a server (for faster analysis). I know OpenCV supports multicore processors, but if I have multiple multicore processors, will OpenCV still make use of them? How would I go about it? Any response is greatly appreciated. |