
Detect and remove borders from framed photographs

asked 2014-03-17 08:29:51 -0600 by Yan

updated 2020-11-30 03:28:02 -0600

Any ideas how to detect, and therefore remove, (approximately) rectangular borders or frames around images? Due to shading effects etc., the borders may not be of uniform colour, and may include or be partially interrupted by text (see examples below). I've tried and failed on some of the examples below when thresholding on intensity and looking at contours, or when trying to detect edges with the Canny detector. I also can't guarantee that the images will actually have borders in the first place (in which case nothing needs removing).

[Examples: white border, coloured image · black irregular border · white border, white image]


3 answers


answered 2014-03-18 02:07:36 -0600 by Mostafa Sataki

updated 2014-03-18 02:08:55 -0600

1 - compute the Laplacian of the image (here via second-order Sobel derivatives).

2 - compute horizontal & vertical projections.

3 - evaluate the changes in both directions.

4 - find the maximum peak in each half of each projection of the gradient image.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

Rect deleteBorder(InputArray _src, int size)
{
    Mat src = _src.getMat();
    Mat sbl_x, sbl_y;
    int ksize = 2 * size + 1;
    // Second-order derivatives in x and y (the components of a Laplacian)
    Sobel(src, sbl_x, CV_32FC1, 2, 0, ksize);
    Sobel(src, sbl_y, CV_32FC1, 0, 2, ksize);
    Mat sum_img = sbl_x + sbl_y;

    Mat gray;
    normalize(sum_img, gray, 0, 255, NORM_MINMAX, CV_8UC1);

    // Average each row / column into 1-D projection profiles,
    // then take the second derivative of each profile
    Mat row_proj, col_proj;
    reduce(gray, row_proj, 1, REDUCE_AVG, CV_8UC1);
    reduce(gray, col_proj, 0, REDUCE_AVG, CV_8UC1);
    Sobel(row_proj, row_proj, CV_8UC1, 0, 2, 3);
    Sobel(col_proj, col_proj, CV_8UC1, 2, 0, 3);

    // The strongest peak in each half of each profile marks a border line
    Point peak_pos;
    int half_pos = (int)row_proj.total() / 2;
    Rect result;

    minMaxLoc(row_proj(Range(0, half_pos), Range(0, 1)), 0, 0, 0, &peak_pos);
    result.y = peak_pos.y;
    minMaxLoc(row_proj(Range(half_pos, (int)row_proj.total()), Range(0, 1)), 0, 0, 0, &peak_pos);
    result.height = peak_pos.y + half_pos - result.y;

    half_pos = (int)col_proj.total() / 2;
    minMaxLoc(col_proj(Range(0, 1), Range(0, half_pos)), 0, 0, 0, &peak_pos);
    result.x = peak_pos.x;
    minMaxLoc(col_proj(Range(0, 1), Range(half_pos, (int)col_proj.total())), 0, 0, 0, &peak_pos);
    result.width = peak_pos.x + half_pos - result.x;

    return result;
}

int main(int argc, char* argv[])
{
    Mat img = imread("d:/12.jpg", IMREAD_COLOR);
    Mat gray_img;
    cvtColor(img, gray_img, COLOR_BGR2GRAY);
    Rect r = deleteBorder(gray_img, 2);

    rectangle(img, r, Scalar(0, 255, 0), 2);
    imshow("result", img);
    waitKey(0);

    return 0;
}
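For readers working in Python, the same projection-peak idea can be sketched with plain NumPy (the function name and the synthetic test image are my own illustration, not from the answer; `cv2.Sobel` and `cv2.reduce` would serve equally well):

```python
import numpy as np

def find_border_rect(gray):
    """Locate a rectangular frame by projecting the absolute second
    derivative of the image onto each axis and taking the strongest
    peak in each half of each profile."""
    g = gray.astype(np.float64)
    d2y = np.abs(np.diff(g, n=2, axis=0))   # second derivative down rows
    d2x = np.abs(np.diff(g, n=2, axis=1))   # second derivative across columns
    row_proj = d2y.mean(axis=1)             # one value per row
    col_proj = d2x.mean(axis=0)             # one value per column
    h2, w2 = len(row_proj) // 2, len(col_proj) // 2
    top    = int(np.argmax(row_proj[:h2]))
    bottom = int(np.argmax(row_proj[h2:])) + h2
    left   = int(np.argmax(col_proj[:w2]))
    right  = int(np.argmax(col_proj[w2:])) + w2
    return left, top, right - left, bottom - top  # x, y, w, h

# Synthetic check: a black image with a bright frame inset by 10 px
img = np.zeros((100, 120), np.uint8)
img[10:90, 10:110] = 255        # frame
img[13:87, 13:107] = 40         # darker interior
x, y, w, h = find_border_rect(img)
```

The returned corner sits within a pixel or two of the true frame edge because of the finite-difference offset; in practice one would add a small margin before cropping.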



Comments

Thanks, that's a very nice approach. The only issue is that it may find borders where there are none, for example in the picture below. I guess the way around that is to reject maximum peak values below a certain level. I could also restrict the search to a band of (say) 10% around each edge.

Finally, I guess it might be a good idea to do this on the 3 colour channels separately. Presumably there's also no need to add the vertical and horizontal Sobel results together: I could just look for horizontal lines using Sobel(..., 0, 2, ksize) and vertical ones using Sobel(..., 2, 0, ksize). [problem image]
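Both guards suggested here (a minimum peak strength and a restricted search band) can be sketched like this; the threshold values and the helper name are illustrative, not from the thread:

```python
import numpy as np

def edge_peak(proj, edge_fraction=0.1, min_peak=25.0):
    """Search for a border peak only in the first `edge_fraction` of a
    projection profile; reject it when the response is too weak to be a
    plausible frame edge. Returns the index, or None for "no border"."""
    n = max(1, int(len(proj) * edge_fraction))
    i = int(np.argmax(proj[:n]))
    return i if proj[i] >= min_peak else None

flat = np.full(100, 3.0)   # borderless image: profile has no strong peak
edged = flat.copy()
edged[5] = 80.0            # strong second-derivative response at a frame line
```

`edge_peak(flat)` then reports no border, while `edge_peak(edged)` finds the line at index 5; the mirrored band near the far edge would be handled the same way on the reversed profile.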

Yan ( 2014-03-19 06:08:20 -0600 )

@Mostafa Sataki Thanks for cool answer!

Daniil Osokin ( 2014-03-24 05:31:07 -0600 )

Hi, what can we do if the image is of a similar colour to the border? Then we can't use a gradient-based method? Thank you.

kaiwen ( 2016-07-06 05:12:17 -0600 )
answered 2014-03-24 04:57:41 -0600 by Yan

updated 2014-03-24 04:58:45 -0600

In case anyone needs to do anything similar: I ended up taking a different approach, using iterative flood filling from the edges and detecting lines in the resulting mask, to be able to deal with images like this:

[doubly framed photo]

My rough python code for this is

from __future__ import division
import cv2
import numpy as np

def crop_border(src, edge_fraction=0.25, min_edge_pix_frac=0.7, max_gap_frac=0.025, max_grad = 1/40):
    '''Detect if a picture is in a frame, by iteratively flood filling from each edge,
    then using HoughLinesP to identify long horizontal or vertical lines in the resulting mask.
    We only choose lines that lie within a certain fraction (e.g. 25%) of the edge of the picture.
    Lines need to be composed of a certain (usually large, e.g. 70%) fraction of edge pixels, and
    can only have small gaps (e.g. 2.5% of the height or width of the image).
    Horizontal lines are defined as -max_grad < grad < max_grad, vertical lines as -max_grad < 1/grad < max_grad.
    We only crop the frame if we have detected left, right, top AND bottom lines.'''

    kern = cv2.getStructuringElement(cv2.MORPH_RECT,(2,2))
    sides = {'left':0, 'top':1, 'right':2, 'bottom':3}     # rectangles are described by corners [x1, y1, x2, y2]
    src_rect = np.array([0, 0, src.shape[1], src.shape[0]])
    crop_rect= np.array([0, 0, -1, -1])  #coords for image crop: assume right & bottom always negative
    axis2coords = {'vertical': np.array([True, False, True, False]), 'horizontal': np.array([False, True, False, True])}
    axis_type = {'left': 'vertical',   'right':  'vertical',
                 'top':  'horizontal', 'bottom': 'horizontal'}
    flood_points = {'left': [0,0.5], 'right':[1,0.5],'top': [0.5, 0],'bottom': [0.5, 1]} #Starting points for the floodfill for each side
    #given a crop rectangle, provide slice coords for the full image, cut down to the right size depending on the fill edge
    width_lims =  {'left':   lambda crop, x_max: (crop[0], crop[0]+x_max),
                   'right':  lambda crop, x_max: (crop[2]-x_max, crop[2]),
                   'top':    lambda crop, x_max: (crop[0], crop[2]),
                   'bottom': lambda crop, x_max: (crop[0], crop[2])}
    height_lims = {'left':   lambda crop, y_max: (crop[1], crop[3]),
                   'right':  lambda crop, y_max: (crop[1], crop[3]),
                   'top':    lambda crop, y_max: (crop[1], crop[1]+y_max),
                   'bottom': lambda crop, y_max: (crop[3]-y_max,crop[3])}

    cropped = True
    while(cropped):
        cropped = False
        for crop in [{'top':0,'bottom':0},{'left':0,'right':0}]:
            for side in crop: #check both edges before cropping
                x_border_max = int(edge_fraction * (src_rect[2]-src_rect[0] + crop_rect[2]-crop_rect[0]))
                y_border_max = int(edge_fraction * (src_rect[3]-src_rect[1] + crop_rect[3]-crop_rect[1]))
                x_lim = width_lims[side](crop_rect,x_border_max)
                y_lim = height_lims[side](crop_rect,y_border_max)
                flood_region = src[slice(*y_lim), slice(*x_lim), ...]
                h, w = flood_region.shape[:2]
                region_rect = np.array([0,0,w,h])
                flood_point = np.rint((region_rect[2:4] - 1) * flood_points[side]).astype(np.uint32)
                target_axes = axis2coords[axis_type[side]]
                long_dim = np.diff(region_rect[~target_axes])
                minLineLen = int((1.0 - edge_fraction * 2) * long_dim)
                maxLineGap = int(max_gap_frac * long_dim)
                thresh = int(minLineLen * min_edge_pix_frac)

                for flood_param in range(20):
                    mask = np.zeros((h+2,w+2 ...
(more)
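The fill-from-edge step that this answer builds on can be illustrated without OpenCV at all; `cv2.floodFill` is what the answer actually uses, but here is a minimal dependency-free BFS flood fill (function name, tolerance, and the synthetic frame image are my own illustration):

```python
from collections import deque
import numpy as np

def edge_flood_mask(gray, seed, tol=10):
    """Flood fill from one edge pixel: mark every 4-connected pixel whose
    value stays within `tol` of the seed value, as cv2.floodFill would."""
    h, w = gray.shape
    seed_val = int(gray[seed])
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(gray[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# A bright frame around a dark interior; fill from the left-edge midpoint
img = np.full((40, 60), 200, np.uint8)   # frame value
img[5:35, 5:55] = 30                     # interior
mask = edge_flood_mask(img, (20, 0))
```

The resulting mask covers exactly the frame and none of the interior, which is what makes the subsequent HoughLinesP pass on the mask find the frame's inner edges.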

Comments

How can I call your function? I tried to write:

iput = cv2.imread("border.png")
ima = crop_border(iput, edge_fraction=0.25, min_edge_pix_frac=0.7, max_gap_frac=0.025, max_grad=1/40)
cv2.imshow("imgCanny", ima)
cv2.waitKey()  # hold windows open until user presses a key
return

but it does not work. Can anyone help?

swan ( 2017-10-17 08:24:15 -0600 )

answered 2018-01-11 04:30:01 -0600

Mostafa Sataki's solution was really helpful. However, I could not find any Java implementation of it, so I decided to implement the same logic in Java. This answer is for anyone looking for the same solution in Java.

The logical order of processing remains the same:

1 - compute the Laplacian of the image.

2 - compute horizontal & vertical projections.

3 - evaluate the changes in both directions.

4 - find the maximum peak in each half of each projection of the gradient image.

I can gladly explain what I did here if anyone wants to know more :)

public static Mat borderSubtracter(String file_path) {
    try {
        Mat rawImage = Imgcodecs.imread(file_path);
        Mat originalImage = rawImage.clone();

        Imgproc.cvtColor(rawImage, rawImage, Imgproc.COLOR_BGR2GRAY);
        // Sobel second derivatives in the x and y planes
        // (delta must come before borderType in this overload)
        Mat sobelX = new Mat();
        Mat sobelY = new Mat();
        Imgproc.Sobel(rawImage, sobelX, CvType.CV_32FC1, 2, 0, 5, 0.1, 0, Core.BORDER_DEFAULT);
        Imgproc.Sobel(rawImage, sobelY, CvType.CV_32FC1, 0, 2, 5, 0.1, 0, Core.BORDER_DEFAULT);

        Mat summedImage = new Mat();
        Core.add(sobelX, sobelY, summedImage);
        sobelX.release();
        sobelY.release();

        Mat normalized = new Mat();
        Core.normalize(summedImage, normalized, 0, 255, Core.NORM_MINMAX, CvType.CV_8UC1);
        summedImage.release();

        // 1-D projection profiles: average over columns and over rows
        Mat reducedX = new Mat();
        Core.reduce(normalized, reducedX, 0, Core.REDUCE_AVG, CvType.CV_8UC1);
        Mat reducedY = new Mat();
        Core.reduce(normalized, reducedY, 1, Core.REDUCE_AVG, CvType.CV_8UC1);
        normalized.release();

        Mat reducedSobelX = new Mat();
        Imgproc.Sobel(reducedX, reducedSobelX, reducedX.depth(), 2, 0);
        reducedX.release();

        Mat reducedSobelY = new Mat();
        Imgproc.Sobel(reducedY, reducedSobelY, reducedY.depth(), 0, 2); // not reducedX.depth(): reducedX is already released
        reducedY.release();

        // Find the maximum peak in each half of each profile
        Point peak_point;
        int half_pos = (int) reducedSobelX.total() / 2;
        Rect result = new Rect();

        MinMaxLocResult mmr = Core.minMaxLoc(reducedSobelX.colRange(new Range(0, half_pos)));
        peak_point = mmr.maxLoc;
        result.x = (int) peak_point.x;
        mmr = Core.minMaxLoc(reducedSobelX.colRange(new Range(half_pos, (int) reducedSobelX.total())));
        peak_point = mmr.maxLoc;
        result.width = (int) (peak_point.x + half_pos - result.x);

        half_pos = (int) reducedSobelY.total() / 2;
        mmr = Core.minMaxLoc(reducedSobelY.rowRange(new Range(0, half_pos)));
        peak_point = mmr.maxLoc;
        result.y = (int) peak_point.y;
        mmr = Core.minMaxLoc(reducedSobelY.rowRange(new Range(half_pos, (int) reducedSobelY.total())));
        peak_point = mmr.maxLoc;
        result.height = (int) (peak_point.y + half_pos - result.y);

        Imgproc.rectangle(originalImage, result.tl(), result.br(), new Scalar(0, 255, 0), 5);
        return originalImage;
    } catch (Exception e) {
        System.out.println(e.toString());
        return null; // signal failure to the caller
    }
}

Question Tools

2 followers

Stats

Asked: 2014-03-17 08:29:51 -0600

Seen: 17,496 times

Last updated: Mar 24 '14