Technique to introduce normalisation/consistency to std dev comparison?

asked 2018-03-24 03:52:50 -0600

sazr

I am implementing a very simple segmentation algorithm for single channel images. The algorithm works like so:

For a single channel image:

  1. Calculate the standard deviation, i.e., measure how much the luminosity varies across the image.
  2. If the stddev > 15 (the threshold):

    • Divide the image into 4 cells/sub-images
    • For each cell:
      • Repeat steps 1 and 2 (recurse)

    Else:

    • Draw a rectangle on the source image to signify that a segment lies within these bounds.

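The question includes no code, so here is a minimal NumPy sketch of the steps above (the function name, the `min_size` stopping guard, and the rectangle representation are my own assumptions, not from the original post):

```python
import numpy as np

def segment(img, rects, x=0, y=0, threshold=15.0, min_size=8):
    """Recursively quad-split a 2-D single-channel image while its pixel
    standard deviation exceeds `threshold`; homogeneous regions are
    recorded in `rects` as (x, y, w, h) tuples in source coordinates."""
    h, w = img.shape
    if img.std() > threshold and h >= 2 * min_size and w >= 2 * min_size:
        hh, hw = h // 2, w // 2
        segment(img[:hh, :hw], rects, x,      y,      threshold, min_size)  # top-left
        segment(img[:hh, hw:], rects, x + hw, y,      threshold, min_size)  # top-right
        segment(img[hh:, :hw], rects, x,      y + hh, threshold, min_size)  # bottom-left
        segment(img[hh:, hw:], rects, x + hw, y + hh, threshold, min_size)  # bottom-right
    else:
        rects.append((x, y, w, h))

# Example: a half-black, half-white image splits once into 4 uniform cells.
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 255
rects = []
segment(img, rects)
print(rects)
```

The `min_size` guard is needed in any case: without it the recursion can descend to 1-pixel cells, where a standard deviation is meaningless.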
My problem occurs because my threshold is constant: once I recurse, 15 is no longer a good indicator of whether a cell is homogeneous. How can I introduce consistency/normalisation into my homogeneity check?

Should I resize each image to the same size (100x100)? Should my threshold be a formula, say 15 / (img.rows * img.cols) or 15 / MAX_HISTOGRAM_PEAK?
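One thing worth checking about the proposed formula before adopting it (illustrative arithmetic only, not a recommendation): with `threshold = 15 / (rows * cols)`, each recursion level halves both dimensions, so the pixel count drops 4x and the threshold grows 4x per level, changing by orders of magnitude within a few splits.

```python
# Hypothetical formula from the question: threshold = 15 / (rows * cols).
def formula_threshold(rows, cols):
    return 15 / (rows * cols)

# Each recursion level halves both sides, so the threshold grows 4x per level.
for depth in range(4):
    side = 512 >> depth  # 512, 256, 128, 64
    print(depth, side, formula_threshold(side, side))
```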


Comments

Maybe the method itself is not a good fit: read "GrabCut" — Interactive Foreground Extraction using Iterated Graph Cuts.

15 is measured over a window of N pixels, so it is really 15 ± s/sqrt(N): the accuracy of the threshold estimate is s/sqrt(number of pixels).
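A small illustration of the point in this comment (synthetic data, not from the original post): the sample standard deviation s of N pixels is itself only accurate to about s/sqrt(N), so in small cells the measured std dev is a noisy estimate and a fixed cutoff of 15 is less reliable.

```python
import numpy as np

def std_and_error(patch):
    """Sample std dev of a patch and its approximate standard error s/sqrt(N)."""
    s = patch.std(ddof=1)
    return s, s / np.sqrt(patch.size)

# Synthetic patches with true std dev 15: the estimate's uncertainty
# grows sharply as the cell shrinks.
rng = np.random.default_rng(0)
for n in (10000, 100, 16):
    patch = rng.normal(loc=128, scale=15, size=n)
    s, err = std_and_error(patch)
    print(f"N={n:5d}  s={s:6.2f}  standard error={err:5.2f}")
```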

LBerger ( 2018-03-24 04:46:37 -0600 )