
Prem's profile - activity

2019-08-20 13:21:24 -0600 received badge  Notable Question (source)
2018-05-22 09:32:13 -0600 received badge  Popular Question (source)
2018-05-02 07:09:12 -0600 received badge  Popular Question (source)
2015-07-27 20:50:33 -0600 commented answer Infrared Led tracking with frequency

Hi, I am working on a similar project. Did you figure out a solution?

2015-07-27 20:35:48 -0600 asked a question exclude moving objects in camera frame

I have some LEDs blinking at half the camera's frame rate, so ideally the LED is on in one frame and off in the next. I need to find the positions of the LEDs in the camera feed. I capture N frames in grayscale, compute the absolute difference between consecutive frames, threshold the resulting image, and look for contours. This algorithm works fine when nothing is moving in the camera feed, but it fails as soon as some object moves. Can anyone suggest what I can do to exclude moving objects from this algorithm? If there is another (completely different and better) way to detect the positions of the LED lights, please suggest that as well. Here is my code to detect the LED positions in the camera frame (framesToProcess is the N quickly captured frames, and contourCenter receives indexed Cartesian coordinates that are candidates for LED points):

void getContourCenters(vector<Mat>& framesToProcess, vector<pointI>& contourCenter)
{
    size_t j = 0;
    for (size_t i = 1; i < framesToProcess.size(); i++)
    {
        Mat tempDifferenceImage, tempThresholdImage;
        vector< vector<Point> > contours;
        vector<Vec4i> hierarchy;
        Rect objectBoundingRectangle = Rect(0, 0, 0, 0);

        absdiff(framesToProcess[i - 1], framesToProcess[i], tempDifferenceImage);
        threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
        blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
        findContours(tempThresholdImage, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        for (size_t k = 0; k < contours.size(); ++k)
        {
            objectBoundingRectangle = boundingRect(contours[k]);
            int xpos = objectBoundingRectangle.x + objectBoundingRectangle.width / 2;
            int ypos = objectBoundingRectangle.y + objectBoundingRectangle.height / 2;
            contourCenter.push_back(mp(xpos, ypos, j++));
        }
    }
}
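
For reference, one idea I am considering (a minimal, untested sketch; getStableBlinkMask is a hypothetical helper and the 80% hit ratio is an assumed starting value): a stationary blinking LED produces an above-threshold difference at the same pixel in almost every consecutive frame pair, while a moving object lights up any given pixel only once or twice as it passes. Counting per-pixel "hits" across all frame pairs and keeping only pixels that fire nearly every time should therefore suppress motion:

    // Sketch: reject moving objects by requiring per-pixel persistence.
    // Assumes the same N grayscale frames and constants as above.
    void getStableBlinkMask(vector<Mat>& framesToProcess, Mat& ledMask)
    {
        Mat hitCount = Mat::zeros(framesToProcess[0].size(), CV_32F);
        for (size_t i = 1; i < framesToProcess.size(); i++)
        {
            Mat diff, binary;
            absdiff(framesToProcess[i - 1], framesToProcess[i], diff);
            threshold(diff, binary, SENSITIVITY_VALUE, 1, THRESH_BINARY);
            accumulate(binary, hitCount);   // add 0/1 hits per pixel
        }
        // keep pixels that exceeded the threshold in at least ~80% of pairs
        double minHits = 0.8 * (framesToProcess.size() - 1);
        threshold(hitCount, ledMask, minHits, 255, THRESH_BINARY);
        ledMask.convertTo(ledMask, CV_8U);  // findContours needs 8-bit input
    }

The contour search from above could then run on ledMask instead of on each per-pair threshold image.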

2015-07-20 11:39:06 -0600 commented question difference between blur and dilate

Actually, I am using the blur operation to smooth the image. Strangely, gpu::blur is taking more time than cv::blur, so I was planning to replace it with dilate, but I couldn't quite figure out their different effects on the same image.

2015-07-20 11:25:20 -0600 commented question difference between blur and dilate

@sturkmen yes. Can you please explain what they do to an image differently, in layman's terms?

2015-07-20 08:49:26 -0600 asked a question difference between blur and dilate

Can anyone please explain the difference between blur and dilate? A mathematical as well as a layman explanation would be really nice.
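
For anyone comparing them: blur() replaces each pixel with the mean of its neighborhood (a box average, which dims small bright spots), while dilate() replaces each pixel with the maximum of its neighborhood (which grows bright regions). A tiny sketch to see both on the same image (the file name is just an example):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    using namespace cv;

    int main()
    {
        Mat src = imread("input.jpg", 0);   // load as grayscale
        Mat blurred, dilated;
        blur(src, blurred, Size(5, 5));     // each output pixel = local MEAN
        dilate(src, dilated,
               getStructuringElement(MORPH_RECT, Size(5, 5)));  // local MAX
        imshow("blur (mean)", blurred);
        imshow("dilate (max)", dilated);
        waitKey(0);
        return 0;
    }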

2015-07-20 04:27:24 -0600 asked a question blur taking significant time on GPU

Here is the CPU version of the function:

void getContourCenters(vector<Mat>& framesToProcess, vector<pointI>& contourCenter)
{
    size_t j = 0;
    for (size_t i = 1; i < framesToProcess.size(); i++)
    {
        Mat tempDifferenceImage, tempThresholdImage;
        vector< vector<Point> > contours;
        vector<Vec4i> hierarchy;
        Rect objectBoundingRectangle = Rect(0, 0, 0, 0);

        absdiff(framesToProcess[i - 1], framesToProcess[i], tempDifferenceImage);
        threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
        blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
        findContours(tempThresholdImage, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        cout << "Time to findContours: " << t1.elapsed() << endl;  // t1 is a timer declared elsewhere
        t1.restart();

        for (size_t k = 0; k < contours.size(); ++k)
        {
            objectBoundingRectangle = boundingRect(contours[k]);
            int xpos = objectBoundingRectangle.x + objectBoundingRectangle.width / 2;
            int ypos = objectBoundingRectangle.y + objectBoundingRectangle.height / 2;
            contourCenter.push_back(mp(xpos, ypos, j++));
        }
    }
}

This function takes about 1.5 seconds to execute for 30 grayscale images. I then ported it to the GPU:

void getContourCenters(vector<gpu::GpuMat>& framesToProcess, vector<pointI>& contourCenter)
{
    size_t j = 0;
    for (size_t i = 1; i < framesToProcess.size(); i++)
    {
        gpu::GpuMat tempDifferenceImage, tempThresholdImage, tempBlurredImage;
        vector< vector<Point> > contours;
        vector<Vec4i> hierarchy;
        Rect objectBoundingRectangle = Rect(0, 0, 0, 0);

        gpu::absdiff(framesToProcess[i - 1], framesToProcess[i], tempDifferenceImage);
        gpu::threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
        // If I comment out the following line, the function works fine and
        // executes in 0.5 seconds; with it, it takes more than 30 seconds.
        gpu::blur(tempThresholdImage, tempBlurredImage, Size(BLUR_SIZE, BLUR_SIZE));

        Mat contourImage(tempBlurredImage);   // download result to host memory
        findContours(contourImage, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        for (size_t k = 0; k < contours.size(); ++k)
        {
            objectBoundingRectangle = boundingRect(contours[k]);
            int xpos = objectBoundingRectangle.x + objectBoundingRectangle.width / 2;
            int ypos = objectBoundingRectangle.y + objectBoundingRectangle.height / 2;
            contourCenter.push_back(mp(xpos, ypos, j++));
        }
    }
}

The GPU version takes about 0.5 seconds to execute when the gpu::blur call is commented out, but with it the function takes 30 seconds or more (I don't have that much patience, so I kill the process). Can anyone point out the problem with this code? Thank you in advance.
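
One thing worth ruling out (a hedged sketch based on my reading of the 2.4 gpu module docs, untested): the very first call into a CUDA kernel pays one-time context and kernel initialization, and gpu::blur also constructs its filter engine on every call. Timing a warm-up call first and creating the box filter once outside the loop would show whether the 30 seconds is really the blur itself:

    // Warm up the GPU so one-time CUDA initialization is not billed to blur.
    gpu::GpuMat warmSrc(480, 640, CV_8UC1), warmDst;
    gpu::blur(warmSrc, warmDst, Size(BLUR_SIZE, BLUR_SIZE));

    // Create the box filter once, reuse it every iteration.
    Ptr<gpu::FilterEngine_GPU> boxFilter =
        gpu::createBoxFilter_GPU(CV_8UC1, CV_8UC1, Size(BLUR_SIZE, BLUR_SIZE));
    for (size_t i = 1; i < framesToProcess.size(); i++)
    {
        // ... gpu::absdiff / gpu::threshold as before ...
        boxFilter->apply(tempThresholdImage, tempBlurredImage);
    }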

2015-07-19 09:54:08 -0600 commented question Opencv android warpprespective

Here is the code that reads repImage from a drawable, written inside onCameraViewStarted:

try {
    repImage = Utils.loadResource(this, R.drawable.ring, Highgui.CV_LOAD_IMAGE_COLOR);
} catch (IOException e) {
    e.printStackTrace();
}
Imgproc.cvtColor(repImage, repImage, Imgproc.COLOR_BGR2RGBA);

2015-07-19 08:18:13 -0600 asked a question Opencv android warpprespective

Here is my C++ OpenCV code, which overlays an image at a specified position on cameraFeed:

Mat transmix = getPerspectiveTransform(imagePoints, newLEDPoints);
warpPerspective(repImage, cameraFeed, transmix, cameraFeed.size(), cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);

Here imagePoints and newLEDPoints are two vectors containing exactly four Cartesian points in proper order. I am trying to achieve a similar effect in Android OpenCV; here is the code:

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Log.i(TAG, "called onCameraFrame");
    cameraFeed = inputFrame.rgba();
    Point center = new Point(cameraFeed.width()/2, cameraFeed.height()/2);
    Point topLeft = new Point( center.x - 100, center.y - 100 );
    Point topRight = new Point( center.x + 100, center.y - 100);
    Point bottomRight = new Point( center.x + 100, center.y + 100 );
    Point bottomLeft = new  Point( center.x - 100, center.y + 100 );
    List<Point> LEDPoints = new ArrayList<Point>();
    LEDPoints.add( topLeft );
    LEDPoints.add( topRight );
    LEDPoints.add( bottomRight );
    LEDPoints.add( bottomLeft );
    Log.i(TAG, "Before LEDPoint Conversion");
    Mat LEDPointss = Converters.vector_Point2f_to_Mat( LEDPoints );

    List<Point> imagePoints = new ArrayList<Point>();
    imagePoints.add( new Point( 0, 0));
    imagePoints.add( new Point( repImage.width(), 0 ));
    imagePoints.add( new Point( repImage.width(), repImage.height()));
    imagePoints.add( new Point( 0, repImage.height()));
    Log.i(TAG, "Before imagePoint Conversion");
    Mat imagePointss = Converters.vector_Point2f_to_Mat( imagePoints );

    Log.i(TAG, "Before transmix");
    Mat transmix = Imgproc.getPerspectiveTransform(imagePointss, LEDPointss);

    Log.i(TAG, "Before warp");

    if ( !repImage.empty())
    {
        Imgproc.warpPerspective(repImage, 
            cameraFeed,
            transmix,
            cameraFeed.size(), 
            Imgproc.INTER_LINEAR);
    }
    else
    {
        Log.i(TAG, "repImage is empty");
    }
    //Scalar color = new Scalar( 0, 255, 0 );
    //Core.circle(rgba, center, 10, color, 2);

    return cameraFeed;
}

When I run this on my Android tablet I just get repImage on screen. I actually want repImage to appear on cameraFeed at the stipulated coordinates (as my C++ code does). Kindly guide me: what am I missing, or what am I doing wrong?
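
For comparison, my working C++ call passes cv::BORDER_TRANSPARENT and the Android call above does not. Without it, warpPerspective writes every destination pixel, filling everything outside the warped quad with the (black) border value, which would erase the camera frame underneath. In C++ the difference looks like this (the Java binding appears to expose the same borderMode parameter on its longer overload, if I read the bindings correctly):

    warpPerspective(repImage, cameraFeed, transmix, cameraFeed.size(),
                    cv::INTER_LINEAR);                          // overwrites cameraFeed
    warpPerspective(repImage, cameraFeed, transmix, cameraFeed.size(),
                    cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);  // overlays onto it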

2015-07-15 22:40:40 -0600 received badge  Student (source)
2015-07-15 06:53:01 -0600 commented question play video within video

@Lorena GdL this is part of an AR solution which needs to do a perspective transformation. I identify the coordinates of markers in the live camera feed (newLEDPoints in the code) and need to display additional information (the image replace1.jpg) at the position defined by the markers. This is working fine. Now I would like to play a video within the live camera frame at the position defined by the vertices of the rectangle stored in newLEDPoints. I have added an image to the question for clarity. As you can see, four points have been identified in the live camera feed and an image has been displayed after the perspective transformation. If I would like to play an .avi file instead of displaying a simple image on top of the live camera feed, how can I do so?

2015-07-15 03:22:39 -0600 asked a question play video within video

Here is my code, which overlays an image (replace1.jpg) on the live camera feed. How can I play a video (say, an .avi file) in the specified area? And what if I would like to play a sound?

Mat repImage = imread("replace1.jpg");
vector<Point2f> imagePoints = { Point2f(0, 0), Point2f(repImage.cols, 0),
                                Point2f(repImage.cols, repImage.rows), Point2f(0, repImage.rows) };

Mat transmix = getPerspectiveTransform(imagePoints, newLEDPoints);
warpPerspective(repImage, cameraFeed, transmix, cameraFeed.size(), cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);

If I would like to play replace1.avi instead of displaying replace1.jpg, how can I do that? (See the attached screenshot.)
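
One approach I am considering (a minimal, untested sketch): a video is just a sequence of images, so open replace1.avi once with VideoCapture and warp the current video frame instead of the static repImage on every camera iteration. Sound is outside OpenCV's scope and would need a separate audio library.

    VideoCapture clip("replace1.avi");
    Mat videoFrame;

    // inside the per-camera-frame loop:
    clip >> videoFrame;
    if (videoFrame.empty())                   // restart the clip when it ends
    {
        clip.set(CV_CAP_PROP_POS_FRAMES, 0);
        clip >> videoFrame;
    }
    // note: imagePoints must now be built from videoFrame.cols / videoFrame.rows
    Mat transmix = getPerspectiveTransform(imagePoints, newLEDPoints);
    warpPerspective(videoFrame, cameraFeed, transmix, cameraFeed.size(),
                    cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);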

2015-07-11 09:13:12 -0600 commented question Find Point Distance

So you know the coordinates of the red spot? Is that line parallel to the y axis (the red line passing through the red spot, which represents the distance)? Or do you know the angle at which the red line intersects the black line? This is not an OpenCV or programming question; it is an algebra question where you need some logic to find the intersection point of the red and black lines, after which you can simply compute the distance between the points (and hence the lines).

2015-07-08 07:16:00 -0600 received badge  Supporter (source)
2015-07-08 05:39:07 -0600 commented question getprespectivetransform or alternatives which one is better

I need exactly what happens at the matrix level; so far I couldn't find it on Google.

2015-07-08 04:46:03 -0600 commented question getprespectivetransform or alternatives which one is better

@thdrksdfthmn thanks for the explanation. Can you please explain how transmix (in my code) is computed, in matrix form? Or provide some URL which explains the calculation in simple form with an example.

2015-07-08 03:53:32 -0600 asked a question getprespectivetransform or alternatives which one is better

I have identified a rectangular area in my image space in a live camera feed (programmatically, I have four points in a vector representing the vertices of a rectangle or quadrilateral). The shape is unknown in advance, but it is known to be a polygon with 4 vertices. I would like to display an image within that area. Here is my code:

vector<Point2f> knownLiveFeedPoints; // this vector contains four Point2f coordinates
Mat repImage = imread("replace1.jpg");
vector<Point2f> imagePoints = { Point2f(0, 0), Point2f(repImage.cols, 0),
                                Point2f(repImage.cols, repImage.rows), Point2f(0, repImage.rows) };

Mat transmix = getPerspectiveTransform(imagePoints, knownLiveFeedPoints);
warpPerspective(repImage, cameraFeed, transmix, cameraFeed.size(), cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);

This is working fine. I would like to know what is being done by getPerspectiveTransform (a sample matrix calculation would be really appreciated) and by warpPerspective. I went through the OpenCV documentation many times but got lost in the many alternatives and the generic explanation. Can I achieve exactly the same functionality using findHomography()? What would be the difference? Thanks in advance.
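
What I have pieced together so far (a hedged summary plus a small sketch to verify it): getPerspectiveTransform solves an 8-unknown linear system for the 3x3 homography H (with h33 fixed to 1) such that each of the four source corners maps exactly onto its destination corner; warpPerspective then maps every destination pixel back through H and interpolates. findHomography computes the same kind of matrix but is meant for more than four, possibly noisy, correspondences (least-squares or RANSAC); with exactly four points the results should essentially coincide. Printing H and pushing one corner through it by hand makes the matrix concrete:

    // For a point (x, y), warpPerspective uses
    //   x' = (h11*x + h12*y + h13) / (h31*x + h32*y + h33)
    //   y' = (h21*x + h22*y + h23) / (h31*x + h32*y + h33)
    // so the first image corner should land on the first live-feed point.
    Mat H = getPerspectiveTransform(imagePoints, knownLiveFeedPoints);
    cout << "H =\n" << H << endl;          // 3x3, CV_64F, h33 == 1

    double x = imagePoints[0].x, y = imagePoints[0].y;
    double w  = H.at<double>(2,0)*x + H.at<double>(2,1)*y + H.at<double>(2,2);
    double xp = (H.at<double>(0,0)*x + H.at<double>(0,1)*y + H.at<double>(0,2)) / w;
    double yp = (H.at<double>(1,0)*x + H.at<double>(1,1)*y + H.at<double>(1,2)) / w;
    cout << "(" << x << "," << y << ") -> (" << xp << "," << yp << ")" << endl;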

2015-07-08 03:40:06 -0600 commented question waitkey alternate

@berak thanks for the info. Can you please post it as an answer so I can mark it?

2015-07-07 08:58:32 -0600 asked a question waitkey alternate

My code is already slow, and processing takes some time once a frame is captured from the camera. When I use imshow("name", processedFrame), I don't want to wait even 1 ms before continuing to the next loop iteration. But if I don't include waitKey(someValue) after imshow, the window stays gray. Is there any solution for this, or must I use waitKey to display the feed correctly?
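
From what I understand (hedged), waitKey() is where HighGUI processes the window's event queue, which is why the window stays gray without it; waitKey(1) is the usual minimal answer. On some builds (the GTK backend, for instance) startWindowThread() services those events from a background thread instead. A sketch:

    #include <opencv2/highgui/highgui.hpp>
    using namespace cv;

    int main()
    {
        VideoCapture cap(0);
        Mat processedFrame;
        namedWindow("name");
        startWindowThread();   // may allow updates without waitKey on some backends
        while (cap.read(processedFrame))
        {
            // ... processing goes here ...
            imshow("name", processedFrame);
            // fall back to waitKey(1) if the window stays gray on this platform
        }
        return 0;
    }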

2015-07-07 02:18:22 -0600 commented question LED Blinking Frequency

@LBerger can you please suggest how to capture a gray image directly from the camera in my context? Basically I would like to optimize the following code:

Image rgbImage;
Mat tempImage;
rawImage.Convert(FlyCapture2::PIXEL_FORMAT_BGR, &rgbImage);

// convert to OpenCV Mat
unsigned int rowBytes = (double)rgbImage.GetReceivedDataSize() / (double)rgbImage.GetRows();
currentFrame = Mat(rgbImage.GetRows(), rgbImage.GetCols(), CV_8UC3, rgbImage.GetData(), rowBytes);
tempImage = currentFrame.clone();
framesToProcess.push_back(tempImage);

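
One possibility, if the FlyCapture2 API behaves as I expect (an untested sketch): convert the raw image straight to MONO8, which skips the BGR conversion and the later cvtColor, and wrap the 8-bit buffer directly. clone() still matters because the frame is kept beyond the next RetrieveBuffer call.

    Image monoImage;
    rawImage.Convert(FlyCapture2::PIXEL_FORMAT_MONO8, &monoImage);

    // wrap the single-channel buffer as an OpenCV Mat
    unsigned int rowBytes =
        (double)monoImage.GetReceivedDataSize() / (double)monoImage.GetRows();
    Mat gray(monoImage.GetRows(), monoImage.GetCols(), CV_8UC1,
             monoImage.GetData(), rowBytes);
    framesToProcess.push_back(gray.clone());   // deep copy owns its data
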
2015-07-06 04:55:33 -0600 asked a question identify cluster

I have the following vector:

{ Point(100, 200), Point(101, 202), Point(200, 200), Point(201, 202), Point(203, 204), Point(100, 400), Point(102, 402), Point(200, 400), Point(202, 401), Point(205, 405) }

The vector contains the vertices of a rectangle and some neighboring points of those vertices. I need to extract the rectangle vertices from these points. That means for Point(100, 200) and Point(101, 202) I just need one of them. Then for Point(200, 200), Point(201, 202), and Point(203, 204) I just need one point (perhaps the average or center of these neighbors), and so forth. It may also be a triangle with a similar distribution, or just a line with two groups, or a single point with one group. Kindly guide me how I can achieve this. Should I use k-means, and if yes, how? If not, is there any other clustering algorithm to solve this?
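
k-means would need the number of groups K fixed in advance, which is exactly what varies here (rectangle, triangle, line, single point). One alternative sketch (untested; closeEnough and the 10-pixel radius are assumptions to tune) uses cv::partition, which splits a vector into equivalence classes under a predicate without knowing K, and then averages each class:

    #include <opencv2/core/core.hpp>
    #include <vector>
    using namespace cv;
    using namespace std;

    // true when two points belong to the same vertex cluster
    static bool closeEnough(const Point& a, const Point& b)
    {
        const int RADIUS = 10;   // assumed neighborhood size in pixels
        Point d = a - b;
        return d.x * d.x + d.y * d.y <= RADIUS * RADIUS;
    }

    vector<Point> clusterCenters(const vector<Point>& pts)
    {
        vector<int> labels;
        int nClasses = partition(pts, labels, closeEnough);

        vector<Point> sums(nClasses, Point(0, 0));
        vector<int> counts(nClasses, 0);
        for (size_t i = 0; i < pts.size(); i++)
        {
            sums[labels[i]] += pts[i];
            counts[labels[i]]++;
        }
        for (int c = 0; c < nClasses; c++)
            sums[c] = Point(sums[c].x / counts[c], sums[c].y / counts[c]);
        return sums;             // one averaged point per cluster
    }

The number of returned centers then directly tells whether the shape was a quadrilateral, a triangle, a line, or a point.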

2015-07-04 04:30:11 -0600 commented question LED Blinking Frequency

@LBerger I want to quickly capture N frames, process them, and display the coordinates back in the live feed. The circle is the identified spot of the LED. The function getThresholdImage(framesToProcess, thresholdImages, differenceImages); is not working as I want: a faint image appears in the difference image, which is suppressed when I compute the threshold and blur. Improvements noted:

  1. Pre-allocate memory wherever possible.
  2. Capture and process gray images directly.

If I don't store the captured images there will be a delay in processing each frame, which in turn will delay capturing images. Hence I tried capturing 30 frames first and then doing the rest.

I don't want to write anything to all thirty frames of framesToProcess; I want to spot the LED light in the live feed once it is identified in framesToProcess.

2015-07-04 03:15:58 -0600 asked a question LED Blinking Frequency

I decided not to update my old post because it already has so many comments. Here is the program I wrote to detect blinking LEDs. It works so-so when the surroundings are a bit dark and doesn't work at all when it's bright. I have been given some suggestions to improve efficiency, like pre-allocating, but I think I need to work on the logic as well. Kindly guide me: how can I detect the position of a blinking LED? The camera frame rate is 90 fps, the blinking frequency is 45 Hz, and there is more than one LED in the frame. Attached are two frames (frame11.jpg, frame12.jpg) taken in a bright-light condition. Here is the logic:

1. Set up the camera parameters for 90 fps.
2. Quickly capture 30 frames and compute the difference and the threshold of the difference of the frames.
3. Find contour centers in the threshold image.
4. Organize contours in an R*-tree and check the frequency of contour centers in a user-defined neighborhood.
5. If the count falls within the frequency and tolerance range, predict the point to be an LED light.

As suggested, the original question was too long. I am trying to get the difference between two frames, threshold the difference, check for contours, and then check the frequency of each contour center to detect the light. The following function accepts N images and does as explained. I need this to work in all lighting scenarios; it is working in low-light environments only. Kindly guide me how I can modify the code to make it work in any scenario.

const static int SENSITIVITY_VALUE = 50;
const static int BLUR_SIZE = 6;

void getThresholdImage(vector<Mat>& framesToProcess, vector<Mat>& thresholdImages, vector<Mat>& differenceImages)
{
    vector<Mat> grayImage;

    for (size_t i = 0; i < framesToProcess.size(); i++)
    {
        Mat tempMatImage, tempGrayImage;

        resize(framesToProcess[i], tempMatImage, Size(600, 800));
        cvtColor(tempMatImage, tempGrayImage, COLOR_BGR2GRAY);
        grayImage.push_back(tempGrayImage);

        if (i > 0)
        {
            Mat tempDifferenceImage, tempThresholdImage;
            absdiff(grayImage[i - 1], grayImage[i], tempDifferenceImage);
            imshow("difference Image", tempDifferenceImage);
            //erode(tempDifferenceImage, tempDifferenceImage, Mat(), Point(-1, -1), 2, BORDER_CONSTANT);
            differenceImages.push_back(tempDifferenceImage);
            threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
            imshow("before blur", tempThresholdImage);
            blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
            imshow("After BlurThreshold Image", tempThresholdImage);
            thresholdImages.push_back(tempThresholdImage);
        }
    }
}
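
One direction I want to try for bright scenes (a hedged sketch; the factor 4.0 is an assumed starting point, not a tuned value): the fixed SENSITIVITY_VALUE of 50 is implicitly tuned for dark rooms, so deriving the threshold from each difference image's own statistics lets it rise with ambient noise. Reducing the camera's exposure/shutter time so that only bright emitters register would be the usual hardware-side complement.

    // replace the fixed-threshold line inside the loop with:
    Scalar mu, sigma;
    meanStdDev(tempDifferenceImage, mu, sigma);
    double dynamicThresh = mu[0] + 4.0 * sigma[0];   // assumed multiplier
    threshold(tempDifferenceImage, tempThresholdImage, dynamicThresh, 255, THRESH_BINARY);
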
2015-07-02 10:46:01 -0600 received badge  Scholar (source)
2015-07-02 09:23:46 -0600 commented question Quickly capture N frames and continue with live feed

Thanks a lot, guys. @LBerger your clone() solution worked perfectly. Would you please add it as an answer to this question? If needed I can post a new question.

2015-07-02 06:35:14 -0600 commented question Quickly capture N frames and continue with live feed

@LBerger thanks for saving me; clone() is working. But I am hitting a real performance issue now. Any tips on optimizing this code? @pklab thanks for the tips.

2015-07-02 04:49:01 -0600 commented question Quickly capture N frames and continue with live feed

currentFrame is added to the vector; does the vector then only contain a reference as well? This works perfectly fine if I include my code to process each frame within the for loop, but that delays the image capture. What could be a workaround for this?

2015-07-02 04:34:46 -0600 commented question Quickly capture N frames and continue with live feed

Here is the runtime error I get:

First-chance exception at 0x00007FFF1EB42262 (opencv_core2410d.dll) in LearnCPP11.exe: 0xC0000005: Access violation reading location 0x00000028F52A2840.
Unhandled exception at 0x00007FFF1EB42262 (opencv_core2410d.dll) in LearnCPP11.exe: 0xC0000005: Access violation reading location 0x00000028F52A2840.

2015-07-02 04:30:51 -0600 asked a question Quickly capture N frames and continue with live feed

I would like to capture N images, process them, and then do something with the live feed from the camera. I start the camera, capture 30 frames, and store them in a vector of Mat. When I then try to access or process the vector I get a runtime error. I am using a Point Grey camera. I think I am missing something really basic; kindly guide me as to what could be wrong.

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencvlibpath.h>
#include <FlyCapture2.h>
#include <vector>

using namespace cv;
using namespace FlyCapture2;
using namespace std;

const static int NUMBER_OF_FRAME_CAPTURE = 30;
const static int SENSITIVITY_VALUE = 60;
const static int BLUR_SIZE = 50;

// I would like to execute this function after quickly capturing 30 frames
void getThresholdImage(vector<Mat>& framesToProcess, vector<Mat>& thresholdImages)
{
    vector<Mat> grayImage;

    for (size_t i = 0; i < framesToProcess.size(); i++)
    {
        Mat tempMatImage, tempGrayImage;

        resize(framesToProcess[i], tempMatImage, Size(600, 800));
        cvtColor(tempMatImage, tempGrayImage, COLOR_BGR2GRAY);
        grayImage.push_back(tempGrayImage);

        if (i > 0)
        {
            Mat tempDifferenceImage, tempThresholdImage;
            absdiff(grayImage[i - 1], grayImage[i], tempDifferenceImage);
            threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
            blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
            thresholdImages.push_back(tempThresholdImage);
        }
    }
}

int main()
{
    Mat cameraFeed;

    vector<Image> rawImages;
    vector<Mat> framesToProcess;
    vector<Mat> thresholdImages;
    //vector<point> contourCenters;
    vector<Point> LEDPoints;

    Camera camera;
    Error error;

    error = camera.Connect(0);
    if (error != PGRERROR_OK)
    {
        cout << "Failed to connect to camera" << endl;
        getchar();
        exit(1);
    }
    error = camera.StartCapture();
    if (error == PGRERROR_ISOCH_BANDWIDTH_EXCEEDED)
    {
        cout << "Bandwidth exceeded" << endl;
        getchar();
        exit(1);
    }
    else if (error != PGRERROR_OK)
    {
        cout << "Failed to start image capture" << endl;
        getchar();
        exit(1);
    }

    while (1)
    {
        framesToProcess.clear();
        thresholdImages.clear();
        rawImages.clear();

        // quickly capture 30 images
        for (int i = 0; i < NUMBER_OF_FRAME_CAPTURE; i++)
        {
            Mat currentFrame;
            Image rawImage;
            Error error = camera.RetrieveBuffer(&rawImage);
            if (error != PGRERROR_OK)
            {
                cout << "capture error" << endl;
            }

            // convert to rgb
            Image rgbImage;
            rawImage.Convert(FlyCapture2::PIXEL_FORMAT_BGR, &rgbImage);

            // convert to OpenCV Mat
            unsigned int rowBytes = (double)rgbImage.GetReceivedDataSize() / (double)rgbImage.GetRows();
            currentFrame = Mat(rgbImage.GetRows(), rgbImage.GetCols(), CV_8UC3, rgbImage.GetData(), rowBytes);
            framesToProcess.push_back(currentFrame);
            //cvtColor(currentFrame, currentFrame, COLOR_BGR2GRAY);
            //imshow("GRAY", currentFrame);
        }

        // this line returns 30
        cout << "Frames to process" << framesToProcess.size() << endl;

        // I get a runtime error while calling this function
        getThresholdImage(framesToProcess, thresholdImages);

        // Then I just tried displaying an image from the captured Mat vector above with
        //imshow("frame", framesToProcess[0]);
        // Still I get a runtime error, which means the vector is populated with images
        // but they are not accessible.

        // The following section tries to continue with the live feed.
        // Already tried commenting this section out, but it doesn't have any effect.
        Image rawImage;
        Error error = camera.RetrieveBuffer(&rawImage);
        if (error != PGRERROR_OK)
        {
            cout << "capture error" << endl;
        }

        // convert to rgb
        Image rgbImage;
        rawImage.Convert(FlyCapture2::PIXEL_FORMAT_BGR, &rgbImage);

        // convert to OpenCV Mat
        unsigned int rowBytes = (double)rgbImage.GetReceivedDataSize() / (double)rgbImage.GetRows();
        cameraFeed = Mat(rgbImage.GetRows(), rgbImage.GetCols(), CV_8UC3, rgbImage.GetData(), rowBytes);
        resize(cameraFeed, cameraFeed, Size(800, 600));

        imshow("camera feed", cameraFeed);
        waitKey(10);
    }
}
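
The resolution that came out of the comments (@LBerger's suggestion): the Mat constructor used above wraps rgbImage's internal buffer without copying, so all 30 stored Mats alias memory that FlyCapture2 reuses or frees after the next RetrieveBuffer, hence the access violation when the vector is touched later. Taking a deep copy before storing fixes it:

    currentFrame = Mat(rgbImage.GetRows(), rgbImage.GetCols(), CV_8UC3,
                       rgbImage.GetData(), rowBytes);
    framesToProcess.push_back(currentFrame.clone());   // deep copy owns its data
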
2015-06-11 05:22:48 -0600 commented question Detect Multiple LEDs and their flashing frequency

@LBerger you mean X axis = time and Y axis = contour center, for all the frames? Pardon my ignorance: does "gravity center" mean the geometric center (centroid), or something else?

2015-06-11 04:22:45 -0600 commented question Detect Multiple LEDs and their flashing frequency

@LBerger, can some filter be used to identify the frequency, like a Gabor filter or a Kalman filter? I have only read introductory notes about these filters, so I don't have much of an idea yet.

2015-06-11 02:03:33 -0600 edited question Detect Multiple LEDs and their flashing frequency

Hi All,

I am new to OpenCV. I am trying to detect the position and frequency of multiple LEDs using OpenCV. Kindly guide me how I can achieve this. I couldn't use the HSV conversion method because there may be other lights brighter than the LEDs as well. Here is the basic logic:

1. The LEDs are flashing at predefined rates. My camera has been set to 90 fps and the LEDs have frequencies of 90 Hz, 45 Hz, 30 Hz, and 15 Hz (these frequencies and the camera frame rate are known parameters).
2. Now I need to find the location of these lights within the camera frame in any lighting condition, be it night, where the light is the brightest in the room, or sunlight, where it may not be the brightest object in the scene.

I would appreciate the help.

2015-06-11 01:49:18 -0600 commented question Detect Multiple LEDs and their flashing frequency

@LBerger I wrote the attached code just to detect the contours in each threshold image. I decided to work on static images first; then I will work with video. Up to this point, the contours from all the frames (as you suggested, 270 frames) are stored in a vector. Now kindly guide me: how can I check the value of the pixel at each contour? Then I can select the contours of interest from this list.

2015-06-11 01:49:18 -0600 received badge  Enthusiast
2015-06-08 08:06:11 -0600 commented question Detect Multiple LEDs and their flashing frequency

Yes, it may be coming from the window, or the entire setup may be outside, where there are many illuminations with or without a frequency. I will go through the sample you mentioned and will ask for help in case of confusion. :) This sample is too complicated for me to understand on my own. Is there any URL that explains it in detail?