2017-07-28 06:21:28 -0600 | received badge | ● Famous Question (source) |
2016-05-27 15:29:43 -0600 | received badge | ● Nice Question (source) |
2016-04-26 15:04:44 -0600 | received badge | ● Notable Question (source) |
2015-10-22 03:20:57 -0600 | received badge | ● Popular Question (source) |
2014-11-12 05:55:07 -0600 | asked a question | Creating Python interface for new OpenCV code (gPb image segmentation) Last year someone wrote code that performs the Berkeley gPb image segmentation algorithm in OpenCV (https://github.com/HiDiYANG/gPb-GSoC). As far as I know it hasn't made it into the OpenCV main codebase yet, but I'd like to use it in my OpenCV Python code.
|
2014-09-18 03:40:56 -0600 | asked a question | Locate cropped area: feature detection with scale but not rotation invariance I have a picture and a square crop taken from it (examples below). The crop has been uniformly scaled to a constant size (130x130 px). I wish to locate the area of the crop within the original picture. I can't use the OpenCV matchTemplate() function, as the cropped area has been scaled. I've got something working using SIFT to locate and match points between the two images, but this fails in some cases when it can't find enough feature points, such as the examples below. This seems like it should be an easier problem than SURF/SIFT/BRISK etc. normally deal with, since the search object hasn't been rotated, the scaling is uniform in all directions, and there are no obscured areas of the source or target images. Also, this doesn't have to happen in real time, so I am not worried about speed. Are there any feature detectors (or algorithms) that would work better than SIFT for this use case? I thought perhaps there are some detectors which are (uniform) scale invariant but not rotation invariant?
Thanks for any pointers. Yan |
2014-04-02 11:40:56 -0600 | asked a question | Pixel inaccuracies causing problems for HoughLinesP How can I use HoughLinesP (or some equivalent) to detect the very obvious long, almost-horizontal, broken line running from the far right to the far left of the edge-detected image below? If I try something like
then I get nothing. If I make the minLineLength shorter, then I get 2 parallel lines to the left and right. |
2014-03-25 08:06:26 -0600 | received badge | ● Nice Answer (source) |
2014-03-24 12:27:14 -0600 | received badge | ● Teacher (source) |
2014-03-24 05:30:00 -0600 | received badge | ● Self-Learner (source) |
2014-03-24 04:57:41 -0600 | answered a question | Detect and remove borders from framed photographs In case anyone needs to do anything similar, I ended up taking a different approach, using iterative flood filling from the edges and detecting the lines of the resulting mask, to be able to deal with images like this: My rough python code for this is (more)
2014-03-19 06:08:20 -0600 | commented answer | Detect and remove borders from framed photographs Thanks, that's a very nice approach. The only issue is that it may find borders where there are none, for example, in the picture below. I guess the way to get around that is to reject maximum peak values below a certain level. I could also restrict the search to a border of (say) 10% around each edge. Finally, I guess it might be a good idea to do this on the 3 colour channels separately. Presumably there's no need to add the vertical and horizontal Sobel results together either: I could just look for horizontal lines using Sobel(...,0,2, ksize) and vertical in Sobel(...,2,0,ksize) |
2014-03-18 02:09:33 -0600 | received badge | ● Student (source) |
2014-03-17 08:29:51 -0600 | asked a question | Detect and remove borders from framed photographs Any ideas how to detect, and therefore remove, (approximately) rectangular borders or frames around images? Due to shading effects etc., the borders may not be of uniform colour, and may include or be partially interrupted by text (see examples below). I've tried and failed on some of the examples below when thresholding on intensity and looking at contours, or trying to detect edges using the Canny detector. I also can't guarantee that the images will actually have borders in the first place (in which case nothing needs removing).
|
2014-03-12 04:24:11 -0600 | commented answer | Identifying dominant (background) colour in still images using mean-shift Thanks. It's the speckled background that really causes the problems. I've ended up doing 3 rounds of bilateral filtering then a mean shift (using pyrMeanShiftFiltering). To get around the problem that the corners may not represent the true background, I've then taken a mean over the largest flood-filled area from a selection of starting points inside the image. Instead of flood fill, I guess once pyrMeanShiftFiltering has segmented the image, I could pick the largest area of a uniform colour, but I don't know how to do that. |
2014-03-12 04:17:56 -0600 | received badge | ● Scholar (source) |
2014-03-12 04:17:51 -0600 | received badge | ● Supporter (source) |
2014-03-10 16:34:31 -0600 | asked a question | Identifying dominant (background) colour in still images using mean-shift Are there any functions in OpenCV which perform the mean-shift algorithm in colour space only? It seems like the meanShift() function is aimed only at motion tracking. I want to ignore spatial information and simply find the dominant (modal) colour(s). The context is that I'm trying to identify the background in images of pinned butterfly specimens. I don't want to use k-means clustering, as the other colours in the images can be very variable (see examples below). The backgrounds can vary in brightness across the image, but are of (approximately) uniform hue, or are sometimes speckled (which I get rid of by using cv2.bilateralFilter()). However, I can't use a single hue channel, because the background is often white, grey, or dark, and jpeg compression can lead to rather variable hue values even within a similar-looking background. Instead, I'm thinking of looking for the dominant region in c1c2c3 colour space. Once I have at least part of the background, the idea is to mask out the rest using GrabCut(), identify the contours in the mask, and perform logistic regression on the Hu moments to identify which masked area is most butterfly-like. Thanks Yan
|
2014-02-24 13:10:58 -0600 | received badge | ● Editor (source) |
2014-02-24 13:10:39 -0600 | answered a question | RGB to c1c2c3 color space conversion I don't use Java, but c1c2c3 conversion is a simple two-liner in Python OpenCV. This is what I use for a 24-bit image stored in img, obtained for example by img = cv2.imread("image.jpg")
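The code itself is cut off in the feed, but the standard c1c2c3 definition (each channel divided by the max of the other two, passed through arctan) can be written like this; this follows the published definition and may differ in detail from the answer's exact two-liner. Note OpenCV loads images in BGR order.

```python
import numpy as np

def bgr_to_c1c2c3(img):
    """c1c2c3 colour space: ci = arctan(channel / max of the other two).
    Takes an OpenCV BGR uint8 image; the small epsilon avoids division
    by zero for pure-black pixels."""
    b, g, r = img.astype(np.float64).transpose(2, 0, 1) + 1e-9
    return np.arctan(np.dstack((r / np.maximum(g, b),
                                g / np.maximum(r, b),
                                b / np.maximum(r, g))))
```

Because each component depends only on ratios between channels, the result is largely invariant to uniform brightness changes, which is why it suits backgrounds that vary in brightness but not hue.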