
Yan's profile - activity

2017-07-28 06:21:28 -0600 received badge  Famous Question (source)
2016-05-27 15:29:43 -0600 received badge  Nice Question (source)
2016-04-26 15:04:44 -0600 received badge  Notable Question (source)
2015-10-22 03:20:57 -0600 received badge  Popular Question (source)
2014-11-12 05:55:07 -0600 asked a question Creating Python interface for new OpenCV code (gPb image segmentation)

Last year someone wrote code that performs the Berkeley gPb image segmentation algorithm in OpenCV (https://github.com/HiDiYANG/gPb-GSoC). As far as I know it hasn't made it into the OpenCV main codebase yet, but I'd like to use it in my OpenCV Python code.

  1. I've read http://docs.opencv.org/trunk/doc/py_tutorials/py_bindings/py_bindings_basics/py_bindings_basics.html but I can't work out whether I need to alter CMakeLists.txt in opencv-2.4.8.2/modules/python/, simply run opencv-2.4.8.2/modules/python/src2/gen2.py on my modified C++ header files (which seems to do nothing), or something else entirely. Is there a list somewhere of steps to follow to produce OpenCV Python bindings for new C++ code? (A stopgap via ctypes is sketched after this list.)
  2. Are there plans to implement the gPb algorithm in upcoming releases? There seem to be no follow-ups to http://code.opencv.org/issues/2728, even though much of the work appears to have been done by Di Yang in the GitHub repository above.
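
(A sketch, not from the original question: while official Python bindings are missing, one stopgap is to compile the C++ into a shared library behind a small C entry point and call it with ctypes. The library name libgpb.so and the function run_gpb below are hypothetical.)

import ctypes
import cv2
import numpy as np

# Hypothetical C API compiled from the gPb-GSoC code:
#   void run_gpb(unsigned char* bgr, int w, int h, float* out);
lib = ctypes.CDLL("./libgpb.so")
lib.run_gpb.argtypes = [ctypes.POINTER(ctypes.c_ubyte), ctypes.c_int,
                        ctypes.c_int, ctypes.POINTER(ctypes.c_float)]
lib.run_gpb.restype = None

img = np.ascontiguousarray(cv2.imread("input.jpg"), dtype=np.uint8)
out = np.zeros(img.shape[:2], dtype=np.float32)  # per-pixel gPb edge strength
lib.run_gpb(img.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
            img.shape[1], img.shape[0],
            out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)))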
2014-09-18 03:40:56 -0600 asked a question Locate cropped area: feature detection with scale but not rotation invariance

I have a picture and a square crop taken from it (examples below). The crop has been uniformly scaled to a constant size (130x130 px). I wish to locate the area of the crop within the original picture. I can't use the OpenCV matchTemplate() function directly, as the cropped area has been scaled.

I've got something working using SIFT to locate and match points between the two images, but this fails in some cases when it can't find enough feature points, such as the examples below.

This seems like it should be an easier problem than SURF/SIFT/BRISK etc. normally deal with, since the search object hasn't been rotated, the scaling is uniform in all directions, and there are no obscured areas in the source or target images. Also, this doesn't have to happen in real time, so I'm not worried about speed.

Are there any feature detectors (or algorithms) that would work better than SIFT for this use case? I thought perhaps there are some detectors which are (uniform) scale invariant but not rotation invariant?

[images: original picture and square crop]

Thanks for any pointers.

Yan
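
(A sketch, not from the original post: since only a uniform scale is unknown, one brute-force alternative to feature matching is to run matchTemplate over a range of candidate scales and keep the best response. The scale range and step below are guesses.)

import cv2
import numpy as np

def locate_scaled_crop(picture, crop, scales=np.linspace(0.5, 3.0, 26)):
    best_score, best_loc, best_scale = -1.0, None, None
    for s in scales:
        # Resize the 130x130 crop back towards its unknown original size
        templ = cv2.resize(crop, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
        if templ.shape[0] > picture.shape[0] or templ.shape[1] > picture.shape[1]:
            continue  # template must fit inside the search image
        res = cv2.matchTemplate(picture, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best_score:
            best_score, best_loc, best_scale = max_val, max_loc, s
    return best_loc, best_scale, best_score

picture = cv2.imread("original.jpg")  # hypothetical filenames
crop = cv2.imread("crop_130x130.jpg")
top_left, scale, score = locate_scaled_crop(picture, crop)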

2014-04-02 11:40:56 -0600 asked a question Pixel inaccuracies causing problems for HoughLinesP

How can I use HoughLinesP (or some equivalent) to detect the very obvious long, almost-horizontal, broken line running from the far right to the far left of the edge-detected image below?

[image: edge-detected picture with a long, broken, almost-horizontal line]

If I try something like

cv2.HoughLinesP(edge, rho = 1, theta = np.pi/2, threshold = 400, minLineLength = 600, maxLineGap = 100)

then I get nothing. If I shorten minLineLength, I get two parallel lines to the left and right instead.
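
(A sketch, not from the original question: two things that often help with broken, slightly wobbly lines are dilating the edge image so small gaps close, and using a finer theta resolution. All parameter values here are guesses.)

import cv2
import numpy as np

edge = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
# Thicken the edges so broken segments merge before the Hough transform
fat = cv2.dilate(edge, np.ones((3, 3), np.uint8), iterations=1)
lines = cv2.HoughLinesP(fat, rho=1, theta=np.pi/180, threshold=400,
                        minLineLength=600, maxLineGap=150)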

2014-03-25 08:06:26 -0600 received badge  Nice Answer (source)
2014-03-24 12:27:14 -0600 received badge  Teacher (source)
2014-03-24 05:30:00 -0600 received badge  Self-Learner (source)
2014-03-24 04:57:41 -0600 answered a question Detect and remove borders from framed photographs

In case anyone needs to do anything similar, I ended up taking a different approach: iterative flood filling from the edges, then detecting the lines of the resulting mask, to be able to deal with images like this:

[image: doubly framed photo]

My rough Python code for this is

from __future__ import division
import cv2
import numpy as np

def crop_border(src, edge_fraction=0.25, min_edge_pix_frac=0.7, max_gap_frac=0.025, max_grad = 1/40):
    '''Detect if picture is in a frame, by iterative flood filling from each edge, 
    then using HoughLinesP to identify long horizontal or vertical lines in the resulting mask.
    We only choose lines that lie within a certain fraction (e.g. 25%) of the edge of the picture.
    Lines need to be composed of a certain (usually large, e.g. 70%) fraction of edge pixels, and
    can only have small gaps (e.g. 2.5% of the height or width of the image).
    Horizontal lines are defined as -max_grad < GRAD < max_grad, vertical lines as -max_grad < 1/GRAD < max_grad
    We only crop the frame if we have detected left, right, top AND bottom lines.'''

    kern = cv2.getStructuringElement(cv2.MORPH_RECT,(2,2))
    sides = {'left':0, 'top':1, 'right':2, 'bottom':3}     # rectangles are described by corners [x1, y1, x2, y2]
    src_rect = np.array([0, 0, src.shape[1], src.shape[0]])
    crop_rect= np.array([0, 0, -1, -1])  #coords for image crop: assume right & bottom always negative
    axis2coords = {'vertical': np.array([True, False, True, False]), 'horizontal': np.array([False, True, False, True])}
    axis_type = {'left': 'vertical',   'right':  'vertical',
                 'top':  'horizontal', 'bottom': 'horizontal'}
    flood_points = {'left': [0,0.5], 'right':[1,0.5],'top': [0.5, 0],'bottom': [0.5, 1]} #Starting points for the floodfill for each side
    #given a crop rectangle, provide slice coords for the full image, cut down to the right size depending on the fill edge
    width_lims =  {'left':   lambda crop, x_max: (crop[0], crop[0]+x_max),
                   'right':  lambda crop, x_max: (crop[2]-x_max, crop[2]),
                   'top':    lambda crop, x_max: (crop[0], crop[2]),
                   'bottom': lambda crop, x_max: (crop[0], crop[2])}
    height_lims = {'left':   lambda crop, y_max: (crop[1], crop[3]),
                   'right':  lambda crop, y_max: (crop[1], crop[3]),
                   'top':    lambda crop, y_max: (crop[1], crop[1]+y_max),
                   'bottom': lambda crop, y_max: (crop[3]-y_max,crop[3])}

    cropped = True
    while cropped:
        cropped = False
        for crop in [{'top':0,'bottom':0},{'left':0,'right':0}]:
            for side in crop: #check both edges before cropping
                x_border_max = int(edge_fraction * (src_rect[2]-src_rect[0] + crop_rect[2]-crop_rect[0]))
                y_border_max = int(edge_fraction * (src_rect[3]-src_rect[1] + crop_rect[3]-crop_rect[1]))
                x_lim = width_lims[side](crop_rect,x_border_max)
                y_lim = height_lims[side](crop_rect,y_border_max)
                flood_region = src[slice(*y_lim), slice(*x_lim), ...]
                h, w = flood_region.shape[:2]
                region_rect = np.array([0,0,w,h])
                flood_point = np.rint((region_rect[2:4] - 1) * flood_points[side]).astype(np.uint32)
                target_axes = axis2coords[axis_type[side]]
                long_dim = np.diff(region_rect[~target_axes])
                minLineLen = int((1.0 - edge_fraction * 2) * long_dim)
                maxLineGap = int(max_gap_frac * long_dim)
                thresh = int(minLineLen * min_edge_pix_frac)

                for flood_param in range(20):
                    mask = np.zeros((h+2,w+2 ...
(more)
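
(The code above is truncated. Assuming the missing remainder returns the cropped image — a guess, since it isn't shown — usage would be something like:)

img = cv2.imread("framed_photo.jpg")  # hypothetical filename
result = crop_border(img)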
2014-03-19 06:08:20 -0600 commented answer Detect and remove borders from framed photographs

Thanks, that's a very nice approach. The only issue is that it may find borders where there are none, as in the picture below. I guess the way around that is to reject maximum peak values below a certain level. I could also restrict the search to a border of (say) 10% around each edge.

Finally, I guess it might be a good idea to do this on the three colour channels separately. Presumably there's no need to add the vertical and horizontal Sobel results together either: I could just look for horizontal lines using Sobel(...,0,2, ksize) and vertical ones with Sobel(...,2,0, ksize).

[image: problem example]
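
(A sketch of that last idea, assuming a grayscale image gray: a second derivative in one direction responds to lines running perpendicular to it.)

import cv2

gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
# Second y-derivative picks out horizontal lines; second x-derivative, vertical ones
sob_h = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=5)
sob_v = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=5)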

2014-03-18 02:09:33 -0600 received badge  Student (source)
2014-03-17 08:29:51 -0600 asked a question Detect and remove borders from framed photographs

Any ideas how to detect, and therefore remove, (approximately) rectangular borders or frames around images? Due to shading effects etc., the borders may not be of uniform colour, and may include or be partially interrupted by text (see examples below). I've tried thresholding on intensity and looking at contours, and edge detection with the Canny detector, but both fail on some of the examples below. I also can't guarantee that the images will actually have borders in the first place (in which case nothing needs removing).

[images: white border with coloured image; black irregular border; white border with white image]
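
(Not part of the original question: the approach that eventually worked, shown earlier in this feed, starts by flood filling from the image edges. A minimal version of that seed step, with guessed tolerances, is:)

import cv2
import numpy as np

img = cv2.imread("framed.jpg")  # hypothetical filename
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a 2px-padded mask
# Fill from the top-left corner; loDiff/upDiff absorb gradual shading in the frame
cv2.floodFill(img.copy(), mask, (0, 0), (255, 255, 255),
              loDiff=(10, 10, 10), upDiff=(10, 10, 10))
frame_pixels = mask[1:-1, 1:-1]  # 1 wherever the fill reached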

2014-03-12 04:24:11 -0600 commented answer Identifying dominant (background) colour in still images using mean-shift

Thanks. It's the speckled background that really causes the problems. I've ended up doing three rounds of bilateral filtering, then a mean shift (using pyrMeanShiftFiltering). To get around the problem that the corners may not represent the true background, I've then taken a mean over the largest flood-filled area from a selection of starting points inside the image.

Instead of flood fill, I guess that once pyrMeanShiftFiltering has segmented the image, I could pick the largest area of uniform colour, but I don't know how to do that.
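
(A sketch of that missing step, not from the thread: after pyrMeanShiftFiltering, the modal colour can stand in for the "largest area of uniform colour". The sp/sr parameters are guesses.)

import cv2
import numpy as np

img = cv2.imread("butterfly.jpg")  # hypothetical filename
shifted = cv2.pyrMeanShiftFiltering(img, sp=21, sr=30)
colours, counts = np.unique(shifted.reshape(-1, 3), axis=0, return_counts=True)
modal = colours[counts.argmax()]           # most common colour after smoothing
mask = cv2.inRange(shifted, modal, modal)  # 255 wherever exactly that colour occurs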

2014-03-12 04:17:56 -0600 received badge  Scholar (source)
2014-03-12 04:17:51 -0600 received badge  Supporter (source)
2014-03-10 16:34:31 -0600 asked a question Identifying dominant (background) colour in still images using mean-shift

Are there any functions in OpenCV which perform the mean-shift algorithm in colour space only? The meanShift() function seems to be aimed only at motion tracking. I want to ignore spatial information and simply find the dominant (modal) colour(s).

The context is that I'm trying to identify the background in images of pinned butterfly specimens. I don't want to use k-means clustering, as the other colours in the images can be very variable (see examples below). The backgrounds can vary in brightness across the image, but are of (approximately) uniform hue, or are sometimes speckled (which I get rid of by using cv2.bilateralFilter()). However, I can't use a single hue channel, because the background is often white, grey, or dark, and jpeg compression can lead to rather variable hue values even within a similar looking background. Instead, I'm thinking of looking for the dominant region in c1c2c3 colour space.

Once I have at least part of the background, the idea is to mask out the rest using GrabCut(), identify the contours in the mask, and perform logistic regression on the Hu moments to identify which masked area is most butterfly-like.

Thanks

Yan

[images: blue background; large butterfly; two further example specimens]
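
(A sketch, not from the original question: a cheap approximation to a colour-space mode is the peak bin of a coarse 3-D histogram; the bin count of 32 per channel is a guess.)

import cv2
import numpy as np

img = cv2.imread("specimen.jpg")  # hypothetical filename
hist = cv2.calcHist([img], [0, 1, 2], None, [32, 32, 32],
                    [0, 256, 0, 256, 0, 256])
b, g, r = np.unravel_index(int(hist.argmax()), hist.shape)
dominant_bgr = (np.array([b, g, r]) + 0.5) * 256 / 32  # bin centre, back on 0-255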

2014-02-24 13:10:58 -0600 received badge  Editor (source)
2014-02-24 13:10:39 -0600 answered a question RGB to c1c2c3 color space conversion

I don't use Java, but c1c2c3 conversion is essentially a two-liner in Python OpenCV. This is what I use for a 24-bit image stored in img, obtained, for example, by img = cv2.imread("image.jpg"):

import cv2
import numpy as np

im = img.astype(np.float32) + 0.001  # small offset to avoid division by zero
c1c2c3 = np.arctan(im / np.dstack((cv2.max(im[..., 1], im[..., 2]),
                                   cv2.max(im[..., 0], im[..., 2]),
                                   cv2.max(im[..., 0], im[..., 1]))))
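
Each output channel is then the arctangent of that input channel divided by the maximum of the other two; cv2.split(c1c2c3) will give the three planes separately if needed.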