
Object detection in non-uniform illumination

asked 2015-07-13 00:42:58 -0600

Siyad

updated 2016-02-13 13:14:16 -0600

I am performing feature detection on a video/live stream/image using OpenCV in C++. The lighting varies across different parts of the video, so some regions are lost when converting the RGB frames to binary images. The lighting in a given region also changes over the course of the video. I tried histogram equalization, but it didn't help. I found a working MATLAB solution at the following link

But I'm not familiar with MATLAB functions, so I can't use this approach in OpenCV.

Can you suggest an OpenCV C++ alternative to this MATLAB code?

Thanks in Advance...




Apart from the adaptiveThreshold above, have a look at CLAHE and the retina module (!!!)

berak ( 2015-07-13 01:12:26 -0600 )

2 answers


answered 2015-12-11 06:39:19 -0600

pklab

updated 2015-12-16 13:50:54 -0600

Morphology OPEN can detect bright structures larger than a given size. If you consider large structures as background, you can use OPEN to detect the background, then subtract it from the original image. This is the same as MORPH_TOPHAT. Below is a simple function to do this.

This is the result on a simple image (source and result images attached).

A test on a complex image is linked here; this is the code:

[EDIT] corrected a small error

/** @brief Remove non-uniform illumination using morphology.
Morphology OPEN can detect bright structures larger than a given size.
If you consider large structures as background you can use OPEN
to detect the background, then remove it from the original image.
This is the same as MORPH_TOPHAT.
@param [in] src input image: GRAY, BGR or BGRA.
With a BGR(A) image this function uses the brightness (V) plane from HSV.
@param [out] dst destination image. If an alpha channel is present in src it will be cloned in dst.
@param minThickess size used by the morphology operation to estimate the background. Use a small size to
enhance details while flattening larger structures.
@c minThickess should be just larger than the maximum thickness of the objects you want to keep.
- Take the thickest object; suppose it is a circle of 100 x 100 px.
- Measure its maximum thickness, let's say 20 px: in this case @c minThickess could be 20+5.
- If the circle is filled then thickness = diameter, so @c minThickess should be 100+5 px.
@param restoreMean if true, the mean of the input brightness is restored in the destination image;
if false, the destination brightness will be close to the darker regions of the input image.
@param [out] background if not NULL the removed background is returned here.
This will be Mat(src.size(), CV_8UC1).
*/
void NonUniformIlluminationMorph(const cv::Mat &src, cv::Mat &dst, int minThickess = 5, bool restoreMean = true, cv::Mat *background = NULL)
{
    CV_Assert(minThickess >= 0);
    CV_Assert((src.type() == CV_8UC1) || (src.type() == CV_8UC3) || (src.type() == CV_8UC4));
    cv::Mat brightness, src_hsv;
    std::vector<cv::Mat> hsv_planes;

    // GET THE BRIGHTNESS PLANE
    if (src.type() == CV_8UC1)
        brightness = src.clone();
    else if (src.type() == CV_8UC3)
    {
        cv::cvtColor(src, src_hsv, cv::COLOR_BGR2HSV);
        cv::split(src_hsv, hsv_planes);
        brightness = hsv_planes[2];
    }
    else if (src.type() == CV_8UC4)
    {
        cv::cvtColor(src, src_hsv, cv::COLOR_BGRA2BGR);
        cv::cvtColor(src_hsv, src_hsv, cv::COLOR_BGR2HSV);
        cv::split(src_hsv, hsv_planes);
        brightness = hsv_planes[2];
    }

    // to restore the previous brightness we need its current mean
    cv::Scalar m;
    if (restoreMean)
        m = cv::mean(brightness);

    int size = minThickess / 2;
    cv::Point anchor(size, size);
    cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(2 * size + 1, 2 * size + 1), anchor);
    if (background != NULL) // to keep the background we need to use MORPH_OPEN
    {
        // get the background
        cv::Mat bkg(brightness.size(), CV_8UC1);
        cv::morphologyEx(brightness, bkg, cv::MORPH_OPEN, element, anchor);
        // save the background
        (*background) = bkg;
        // remove the background
        brightness = brightness - bkg;
    }
    else // tophat(I) <=> I - open(I)
    {
        // remove the background
        cv::morphologyEx(brightness, brightness, cv::MORPH_TOPHAT, element, anchor);
    }

    if (restoreMean)
        brightness += m[0];

    // BUILD THE DESTINATION
    if (src.type() == CV_8UC1)
        dst = brightness;
    else if ((src.type() == CV_8UC3) || (src.type() == CV_8UC4))
    {
        hsv_planes[2] = brightness;
        cv::merge(hsv_planes, dst);
        cv::cvtColor(dst, dst, cv::COLOR_HSV2BGR);
        // restore alpha channel from src if needed ...
    }
}


This is pure C pointer dereferencing! Search the net for "Pointer Basics with C".

pklab ( 2015-12-15 06:09:31 -0600 )

@pklab hello again, the problem occurs in the "// BUILD THE DESTINATION" part of the code. My source type is CV_8UC4, so in my case it never enters those conditions. What can I do?
Also, this line: brightness += m(0); I changed to Core.Add(mat, scalar, mat); is this ok? Thank you!

VeTaLio ( 2015-12-16 08:28:15 -0600 )

@pklab the result is a black image. Am I doing something wrong?

VeTaLio ( 2015-12-17 02:43:08 -0600 )

@VeTaLio check the code: in the "// BUILD THE DESTINATION" part it should be else if (src.type()...

pklab ( 2015-12-17 06:01:00 -0600 )

@pklab yes, I already made this change (tried it yesterday) and the result is still a fully black image.

VeTaLio ( 2015-12-17 06:03:15 -0600 )

answered 2015-12-16 15:45:29 -0600

LBerger

I like a quadric fit:

int main( int argc, char** argv )
{
    // least-squares fit of a quadric surface a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    // to the image intensities: the fitted surface is the simulated background
    Mat z = imread("1449862093156643.jpg", CV_LOAD_IMAGE_GRAYSCALE);

    Mat M = Mat_<double>(z.rows*z.cols, 6);
    Mat I = Mat_<double>(z.rows*z.cols, 1);
    for (int i = 0; i < z.rows; i++)
        for (int j = 0; j < z.cols; j++)
        {
            double x = (j - z.cols / 2) / double(z.cols), y = (i - z.rows / 2) / double(z.rows);
            M.at<double>(i*z.cols + j, 0) = x*x;
            M.at<double>(i*z.cols + j, 1) = y*y;
            M.at<double>(i*z.cols + j, 2) = x*y;
            M.at<double>(i*z.cols + j, 3) = x;
            M.at<double>(i*z.cols + j, 4) = y;
            M.at<double>(i*z.cols + j, 5) = 1;
            I.at<double>(i*z.cols + j, 0) = z.at<uchar>(i, j);
        }
    // solve M*q = I in the least-squares sense
    SVD s(M);
    Mat q;
    s.backSubst(I, q);
    Mat background(z.rows, z.cols, CV_8UC1);
    for (int i = 0; i < z.rows; i++)
        for (int j = 0; j < z.cols; j++)
        {
            double x = (j - z.cols / 2) / double(z.cols), y = (i - z.rows / 2) / double(z.rows);
            double quad = q.at<double>(0)*x*x + q.at<double>(1)*y*y + q.at<double>(2)*x*y +
                          q.at<double>(3)*x + q.at<double>(4)*y + q.at<double>(5);
            background.at<uchar>(i, j) = saturate_cast<uchar>(quad);
        }
    imshow("Simulated background", background);
    Mat diff;
    absdiff(z, background, diff);
    double mind, maxd;
    minMaxLoc(diff, &mind, &maxd);
    imshow("Difference", diff * (255.0 / maxd));
    waitKey();
    return 0;
}

Results: the simulated background, and the difference between the background and the original, are shown in the attached images.

From the last image you can hypothesize that the bias is due to camera gain...



The quadratic fit is great because it doesn't depend on object size, even if it's up to 4x slower.

Background estimation can be done in many different ways. See here and here for some examples. My implementation answers the user who asked for OpenCV code using morphological filtering :)

pklab ( 2015-12-17 05:00:03 -0600 )

Yes, your answer is good; one problem, many answers! About speed: to reduce runtime, only 1% of the pixels can be used, and I don't think the coefficients would be wrong. You can find other methods here

LBerger ( 2015-12-17 10:01:54 -0600 )

really nice answers from both @LBerger and @pklab, I came upon them just now and I am quite excited about the new stuff I learned :-p...

theodore ( 2016-02-11 03:35:50 -0600 )

I have to say that the performance of the top-hat method decreases fast as the kernel size increases (minThickess in my function). Bigger objects in the foreground require a bigger kernel, hence slower performance. This doesn't occur with the quadratic method because its performance depends only on the image size. In addition, top-hat doesn't work when objects have holes or concavities. Thank you @LBerger for the quadric code!

pklab ( 2016-02-11 04:14:15 -0600 )

@LBerger and @pklab I would be interested in extracting the illumination map of a non-uniform indoor scene lit by multiple light sources, since you might have come across something similar in the past. Do you have something in mind to propose?

theodore ( 2016-02-12 06:18:51 -0600 )

Thanks for your reply. Can the quadric method successfully replace the morphologyEx function? Since morphologyEx is not supported on the GPU (CUDA) in OpenCV 3.0 and above, I need an exact alternative to the function. Also, what is its alternative among the CUDA filters that can be used for background subtraction under non-uniform illumination?

Siyad ( 2016-02-12 12:10:04 -0600 )

@theodore In the past I have processed images with one light (a microscope light) but with variable spatial transmission.

LBerger ( 2016-02-12 12:18:09 -0600 )

@Siyad have a look here

LBerger ( 2016-02-12 13:05:43 -0600 )

@LBerger I came upon these two papers, here and here, which at first view seem quite interesting. I guess you don't have any experience with such cases, right?

theodore ( 2016-02-12 14:08:08 -0600 )

No direct experience, exactly. You can try this; maybe it will give you an idea.

LBerger ( 2016-02-13 12:10:16 -0600 )

Seen: 11,997 times

Last updated: Dec 16 '15