LED Blinking Frequency

asked 2015-07-04 03:15:58 -0600

Prem

updated 2015-07-04 05:25:48 -0600

I decided not to update my old post because it already has so many comments. Here is the program I wrote to detect blinking LEDs. It works somewhat when the surroundings are a bit dark and does not work at all when it is bright. I have been given some suggestions to improve efficiency, such as pre-allocating memory, but I think I need to work on the logic as well. Kindly guide me on how I can detect the position of a blinking LED. The camera frame rate is 90 fps, the blinking frequency is 45 Hz, and there is more than one LED in the frame. Attached are two frames captured in bright light (frame11.jpg and frame12.jpg). Here is the logic:

1. Set up the camera parameters for 90 fps capture.
2. Quickly capture 30 frames, compute the difference between consecutive frames, and threshold the difference images.
3. Find the contour centers in the thresholded images.
4. Organize the contour centers in an R*-tree and count how often centers recur within a user-defined neighborhood.
5. If the count falls within the expected frequency and tolerance range, predict the point to be an LED.

Kindly guide me on how to modify this code so that it works in bright light conditions and the LED detection success rate is high.

As suggested, the question was too long, so here is a summary: I compute the difference between consecutive frames, threshold the difference, find contours, and then check the recurrence frequency of the contour centers to detect the light. The following function accepts N images and does exactly that. I need this to work in all lighting conditions, but it currently works only in low light. Kindly guide me on how to modify the code so that it works in any scenario.

    const static int SENSITIVITY_VALUE = 50;
    const static int BLUR_SIZE = 6;

    // Computes frame-to-frame difference images and their thresholded versions.
    void getThresholdImage(vector<Mat> &framesToProcess, vector<Mat> &thresholdImages, vector<Mat> &differenceImages)
    {
        vector<Mat> grayImage;

        for (size_t i = 0; i < framesToProcess.size(); i++)
        {
            Mat tempMatImage, tempGrayImage;

            resize(framesToProcess[i], tempMatImage, Size(600, 800));
            cvtColor(tempMatImage, tempGrayImage, COLOR_BGR2GRAY);
            grayImage.push_back(tempGrayImage);

            if (i > 0)
            {
                Mat tempDifferenceImage, tempThresholdImage;

                // Absolute difference between consecutive gray frames.
                absdiff(grayImage[i - 1], grayImage[i], tempDifferenceImage);
                imshow("difference Image", tempDifferenceImage);
                //erode(tempDifferenceImage, tempDifferenceImage, Mat(), Point(-1, -1), 2, BORDER_CONSTANT);
                differenceImages.push_back(tempDifferenceImage);

                // Binarize the difference, then blur to merge nearby blobs.
                threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
                imshow("before blur", tempThresholdImage);
                blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
                imshow("After BlurThreshold Image", tempThresholdImage);
                thresholdImages.push_back(tempThresholdImage);
            }
        }
    }

Comments

Happy to see that things are moving along. Maybe your post is too long, but I am not a moderator.

Now I think you should forget image processing and think only about signal processing. Instead of circle(cameraFeed, LEDPoints[i], 20, Scalar(0, 255, 0), 2); you can print the value framesToProcess[i].at&lt;Vec3b&gt;(LEDPoints[0]) for each of the 30 images. Assuming nothing is moving, the printed values should be near (255, 255, 255) when the LED is flashing and smaller otherwise.

PS: why are you working in color (CV_8UC3)?

I think you are wasting time allocating memory here: tempImage = currentFrame.clone(); asking the system for memory takes time.

It seems that your capture is in RGB format. Yes, OpenCV uses BGR, but is it important to make a copy just for this? (You want the frequency, not an image with the right colors.)

LBerger ( 2015-07-04 04:02:42 -0600 )

@LBerger I want to quickly capture N frames, process them, and display the coordinates back in the live feed. The circle marks the identified LED spot. The function getThresholdImage(framesToProcess, thresholdImages, differenceImages); is not working as I want: a faint image appears in the difference image, and it gets suppressed when I compute the threshold and blur. Improvements noted:

  1. Pre-allocate memory wherever possible.
  2. Capture and process gray images directly.

If I don't store the captured images, there will be a delay in processing each frame, which in turn will delay capturing the next image. Hence I capture 30 frames first and then do the rest.

I don't want to draw anything on the thirty frames of framesToProcess; I want to mark the LED spot in the live feed once it has been identified in framesToProcess.

Prem ( 2015-07-04 04:30:11 -0600 )

@LBerger can you please suggest how to capture a gray image directly from the camera in my context? Basically I would like to optimize the following code:

    Image rgbImage;
    Mat tempImage;
    rawImage.Convert(FlyCapture2::PIXEL_FORMAT_BGR, &rgbImage);

    // Wrap the FlyCapture2 buffer in an OpenCV Mat, then copy it.
    unsigned int rowBytes = rgbImage.GetReceivedDataSize() / rgbImage.GetRows();
    currentFrame = Mat(rgbImage.GetRows(), rgbImage.GetCols(), CV_8UC3, rgbImage.GetData(), rowBytes);
    tempImage = currentFrame.clone();
    framesToProcess.push_back(tempImage);
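One possible direction, sketched only and not tested here: if the camera supports it, the FlyCapture2 SDK's PIXEL_FORMAT_MONO8 can convert to 8-bit gray directly, so the BGR conversion and the later cvtColor step are both skipped. The variable names are illustrative and this depends on a connected camera.

```cpp
// Sketch only -- requires the FlyCapture2 SDK and a connected camera.
FlyCapture2::Image monoImage;
rawImage.Convert(FlyCapture2::PIXEL_FORMAT_MONO8, &monoImage);

unsigned int rowBytes = monoImage.GetReceivedDataSize() / monoImage.GetRows();
// Wrap the mono buffer as a single-channel Mat, then clone so the pixels
// outlive the FlyCapture2 buffer.
Mat gray(monoImage.GetRows(), monoImage.GetCols(), CV_8UC1,
         monoImage.GetData(), rowBytes);
framesToProcess.push_back(gray.clone());
```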
Prem ( 2015-07-07 02:18:22 -0600 )