2019-08-20 13:21:24 -0600 | received badge | ● Notable Question (source) |
2018-05-22 09:32:13 -0600 | received badge | ● Popular Question (source) |
2018-05-02 07:09:12 -0600 | received badge | ● Popular Question (source) |
2015-07-27 20:50:33 -0600 | commented answer | Infrared Led tracking with frequency Hi, I am working on a similar project. Did you figure out a solution? |
2015-07-27 20:35:48 -0600 | asked a question | exclude moving objects in camera frame I have some LEDs which blink at half the frame rate of the camera, which ideally means the LED is on in one frame and off in the next. I need to find the positions of the LED lights in the camera feed. I capture N frames in grayscale mode, compute the absolute difference between the frames, threshold the resulting image, and check for contours. This algorithm works fine when no object is moving in the camera feed; if some object is moving, it fails. Can anyone suggest how I can exclude moving objects in this algorithm? If there is another (completely different and better) way to detect the position of the LED lights, please suggest that as well. Here is my code to detect the LED positions in the camera frame (where framesToProcess is the N frames captured quickly and contourCenter holds the indexed Cartesian coordinates that are candidates for LED points). |
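One possible way to separate blinking LEDs from moving objects (a sketch of my own, not code from the question, written in NumPy rather than the poster's OpenCV C++) is to test each pixel for strict frame-to-frame alternation: an LED blinking at half the frame rate flips direction on every frame, while a moving object changes a given pixel only while its edge passes over it. The function name `led_mask` and the amplitude threshold are illustrative assumptions.

```python
import numpy as np

def led_mask(frames, min_amplitude=50):
    """Boolean mask of pixels whose intensity strictly alternates from
    frame to frame -- the signature of an LED blinking at half the
    camera frame rate.  A moving object changes a given pixel only
    while its edge passes over it, so it fails this test."""
    stack = np.asarray(frames, dtype=np.int32)   # shape (N, H, W)
    diffs = np.diff(stack, axis=0)               # frame-to-frame changes
    # Consecutive differences must have opposite signs (strict
    # alternation) and a sizeable amplitude at every step.
    alternating = (diffs[:-1] * diffs[1:]) < 0
    strong = np.abs(diffs[:-1]) > min_amplitude
    return np.logical_and(alternating, strong).all(axis=0)
```

A pixel that is bright for a few consecutive frames (a passing object) has a zero or same-sign difference somewhere in the sequence, so it is rejected even though plain absolute differencing would flag it.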
2015-07-20 11:39:06 -0600 | commented question | difference between blur and dilate Actually I am using the blur operation to smooth the image. Strangely, gpu::blur is taking more time than cv::blur, hence I was planning to replace it with dilate, but I couldn't exactly figure out their effect on the same image. |
2015-07-20 11:25:20 -0600 | commented question | difference between blur and dilate @sturkmen yes. Can you please explain what they do differently to an image? In layman's terms. |
2015-07-20 08:49:26 -0600 | asked a question | difference between blur and dilate Can anyone please explain the difference between blur and dilate? A mathematical as well as a layman's explanation would be really nice. |
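To make the difference concrete, here is a small NumPy sketch (my own illustration, not from the thread) of the simplest forms of the two operations: a box blur replaces each pixel with the MEAN of its neighbourhood (a linear smoothing filter), while dilation replaces it with the MAX (a non-linear morphological operation that grows bright regions). OpenCV's `blur` and `dilate` generalise these to arbitrary kernel sizes and shapes.

```python
import numpy as np

def box_blur3(img):
    """3x3 box blur: each pixel becomes the MEAN of its 3x3 neighbourhood."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def dilate3(img):
    """3x3 dilation: each pixel becomes the MAX of its 3x3 neighbourhood."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            np.maximum(out, padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w], out)
    return out
```

On a single bright pixel the contrast is stark: blur spreads its energy thinly (the peak drops to the neighbourhood average), whereas dilate copies the full peak value onto every neighbour, which is why dilate is no substitute for blur when the goal is smoothing.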
2015-07-20 04:27:24 -0600 | asked a question | blur taking significant time on GPU Here is the CPU version of a function. It takes about 1.5 seconds to execute for 30 grayscale images. I then optimized this code for the GPU. The GPU version takes about 0.5 seconds to execute when the call to gpu::blur is commented out, but if that line is uncommented it takes more than 30 seconds, or sometimes longer (I don't have that much patience, so I kill the process). Can anyone point out what the problem with this code is? Thank you in advance. |
2015-07-19 09:54:08 -0600 | commented question | OpenCV Android warpPerspective Here is the code that reads repImage from drawable, written inside onCameraViewStarted. |
2015-07-19 08:18:13 -0600 | asked a question | OpenCV Android warpPerspective Here is my C++ OpenCV code, which overlays an image at a specified position on cameraFeed. Here imagePoints and newLEDPoints are two vectors containing exactly four Cartesian points in the proper order. I am trying to achieve a similar effect in Android with OpenCV; here is the code. I am just getting repImage on screen when I run this on my Android tablet. I actually want repImage to appear on cameraFeed at the stipulated coordinates (as my C++ code does). Kindly guide me: what am I missing, or what am I doing wrong? |
2015-07-15 22:40:40 -0600 | received badge | ● Student (source) |
2015-07-15 06:53:01 -0600 | commented question | play video within video @Lorena GdL this is part of an AR solution which needs to do a perspective transformation. I identify the coordinates of markers in the live camera feed (newLEDPoints in the code) and need to display additional information (the image replace1.jpg) at the position defined by the markers. This is working fine. Now I would like to play a video within the live camera frame at the position defined by the vertices of the rectangle stored in newLEDPoints. I have added an image to the question for clarity. As you can see, four points have been identified in the live camera feed and an image has been displayed after the perspective transformation. If I would like to play an .avi file instead of displaying a simple image on top of the live camera feed, how can I do so? |
2015-07-15 03:22:39 -0600 | asked a question | play video within video Here is my code, which overlays an image (replace1.jpg) on the live camera feed. How can I play a video file (say .avi) in the specified area? Or what if I would like to play a sound? If I would like to play replace1.avi instead of displaying replace1.jpg, how would that be possible? |
2015-07-11 09:13:12 -0600 | commented question | Find Point Distance So you know the coordinates of the red spot? Is the red line passing through the red spot (which represents the distance) parallel to the y axis? Or do you know the angle at which the red line intersects the black line? This is not an OpenCV or programming question; it is an algebra question where you need some logic to find the intersection point of the red and black lines, and then you can simply compute the distance between the points (and in turn the length of the line). |
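The algebra hinted at in the comment above can be sketched as follows (my own illustration, not code from the thread): find the intersection of the two lines, each given by two points on it, then take the Euclidean distance to the point of interest. Points are assumed to be (x, y) tuples.

```python
def intersect(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through
    p3-p4, computed with the standard determinant formula.
    Returns None if the lines are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None                       # parallel or coincident lines
    a = x1 * y2 - y1 * x2                 # cross products of each point pair
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (x, y)

def distance(p, q):
    """Euclidean distance between two points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
```

With the intersection in hand, the distance asked about in the question is just `distance(red_spot, intersect(...))`.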
2015-07-08 07:16:00 -0600 | received badge | ● Supporter (source) |
2015-07-08 05:39:07 -0600 | commented question | getPerspectiveTransform or alternatives: which one is better I need to know exactly what happens at the matrix level; so far I couldn't find it on Google. |
2015-07-08 04:46:03 -0600 | commented question | getPerspectiveTransform or alternatives: which one is better @thdrksdfthmn thanks for the explanation. Can you please explain how transmix (in my code) is computed in matrix form? Or provide a URL which explains the calculation in a simple form with an example. |
2015-07-08 03:53:32 -0600 | asked a question | getPerspectiveTransform or alternatives: which one is better I have identified a rectangular area in my image space (programmatically, I have four points in a vector representing the vertices of a rectangle or quadrilateral) in the live camera feed. The shape is unknown in advance, but it is known to be a polygon with 4 vertices. I would like to display an image within that area. Here is my code; it is working fine. I would like to know what is being done by getPerspectiveTransform (I would really appreciate a sample matrix calculation) and warpPerspective. I went through the OpenCV documentation many times but got lost in the many alternatives and the generic explanation. Can I achieve exactly the same functionality using findHomography()? What would be the difference? Thanks in advance. |
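To answer the "sample matrix calculation" part in code form: getPerspectiveTransform solves an 8x8 linear system for the homography coefficients, with the bottom-right entry h33 fixed to 1. Below is a NumPy sketch of that computation (my reconstruction of the standard derivation, not OpenCV's actual source). findHomography with exactly four correspondences yields the same matrix, but it also accepts more than four points and robust estimators such as RANSAC.

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H (with h33 = 1) mapping the four
    src points onto the four dst points.  Each correspondence
    (x, y) -> (u, v) contributes two rows to an 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply homography H to a point: homogeneous multiply, then divide
    by the projective coordinate w."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

The division by `w` in `apply_h` is what warpPerspective performs per pixel, and it is the reason a perspective warp can make parallel lines converge, which an affine transform cannot.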
2015-07-08 03:40:06 -0600 | commented question | waitKey alternative @berak thanks for the info. Can you please post it as an answer? |
2015-07-07 08:58:32 -0600 | asked a question | waitKey alternative My code is already slow, and processing takes some time once a frame is captured from the camera. When I use imshow("name", processedframe), I don't want to wait even 1 ms before continuing to the next loop iteration, but if I don't include waitKey(someValue) after imshow, the window stays gray. Is there any solution for this, or must I use waitKey to display the feed correctly? |
2015-07-07 02:18:22 -0600 | commented question | LED Blinking Frequency @LBerger can you please suggest how to capture a gray image directly from the camera in my context? Basically I would like to optimize the following code. |
2015-07-06 04:55:33 -0600 | asked a question | identify cluster I have the following vector. The vector contains the vertices of a rectangle and some neighboring points of those vertices. I need to extract the rectangle vertices from these points: for points (100, 200) and (101, 102) I just need one of them; then for points (200, 200), (201, 202), (203, 204) I just need one point (perhaps the average or center of these neighbors), and so forth. It may be a triangle with a similar distribution, or just a line with two groups, or a point with a single group. Kindly guide me how I can achieve this. Should I use k-means, and if yes, how? If not, is there any other clustering algorithm to solve this issue? |
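For a handful of points forming well-separated groups, full k-means is arguably overkill (and requires knowing k in advance, which the question says varies between a quadrilateral, a triangle, a line, and a point). A simple distance-threshold clustering such as the sketch below (my own, with an assumed `radius` parameter) returns one centroid per group; in OpenCV, `cv::partition` with a distance predicate does essentially the same job.

```python
def cluster_points(points, radius=10.0):
    """Greedy single-pass clustering: each point joins the first cluster
    whose current centroid lies within `radius`, otherwise it starts a
    new cluster.  Returns one centroid per cluster."""
    clusters = []  # each cluster is [sum_x, sum_y, count]
    for x, y in points:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                c[0] += x; c[1] += y; c[2] += 1   # absorb into this cluster
                break
        else:
            clusters.append([x, y, 1])            # start a new cluster
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]
```

The number of returned centroids is discovered from the data rather than supplied up front, which matches the "may be 4, 3, 2 or 1 groups" requirement; the only tuning knob is the neighbourhood radius.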
2015-07-04 04:30:11 -0600 | commented question | LED Blinking Frequency @LBerger I want to quickly capture N frames, process them, and display the coordinates back in the live feed. The circle is the identified spot of the LED. The function getThresholdImage(framesToProcess, thresholdImages, differenceImages); is not working as I want: a faint image appears in the difference image, which is suppressed when I compute the threshold and blur. If I don't store the captured images, there will be a delay in processing each frame, which in turn delays capturing. Hence I capture 30 frames first and then do the rest of the processing. I don't want to write anything to the thirty frames of framesToProcess; I want to mark the LED light in the live feed once it has been identified in framesToProcess. |
2015-07-04 03:15:58 -0600 | asked a question | LED Blinking Frequency I decided not to update my old post because it already has so many comments. Here is the program I wrote to detect blinking LEDs (two frames captured in a bright-light condition, frame11.jpg and frame12.jpg, are attached). It works only so-so when the surroundings are a bit dark, and doesn't work at all when it is bright. I have been given some suggestions to improve the efficiency, like pre-allocating, but I think I need to work on the logic as well. Kindly guide me how I can detect the position of the blinking LEDs. The camera frame rate is 90 fps, the blinking frequency is 45 Hz, and there is more than one LED in the frame. Here is the logic: 1. Set up the camera parameters for 90 fps. 2. Quickly capture 30 frames and compute the difference between frames and the threshold of that difference. 3. Find the contour centers in the threshold image. 4. Organize the contours in an R*-tree and count the occurrences of contour centers within a user-defined neighborhood. 5. If the count falls within the frequency and tolerance range, predict the point to be an LED light. As suggested, the question was too long, so in short: I take the difference between two frames, threshold the difference, check for contours, and then check the frequency of the contour centers to detect the light. The function accepts N images and does as explained. I need this to work in every lighting scenario; it currently works in low-light environments only. Kindly guide me how I can modify the code to make it work in any scenario. |
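Step 5 of the pipeline ("if the count falls within the frequency and tolerance range") can be made concrete with a small helper. This is my reading of the described logic, not the poster's code, and the names and tolerance value are illustrative: a 50%-duty-cycle LED at led_hz sampled at fps should appear "on" in roughly n_frames * led_hz / fps of the captured frames, so a contour centre is accepted as an LED when its hit count is within a tolerance of that expectation.

```python
def expected_hits(n_frames, fps, led_hz):
    """Frames in which a 50%-duty-cycle LED blinking at led_hz should
    appear 'on' when a camera running at fps captures n_frames frames."""
    return round(n_frames * led_hz / fps)

def is_led(hit_count, n_frames, fps, led_hz, tolerance=3):
    """Accept a candidate point when its on-frame count is close enough
    to the expectation for this blink frequency (step 5 of the logic)."""
    return abs(hit_count - expected_hits(n_frames, fps, led_hz)) <= tolerance
```

For the question's parameters (30 frames at 90 fps, LED at 45 Hz) the expectation is 15 on-frames, so a point detected in, say, 14 of the 30 frames passes, while one detected in only 5 frames, such as a briefly passing reflection, is rejected.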
2015-07-02 10:46:01 -0600 | received badge | ● Scholar (source) |
2015-07-02 09:23:46 -0600 | commented question | Quickly capture N frames and continue with live feed Thanks a lot, guys. @LBerger, your solution of clone worked perfectly. Would you please add it as an answer to this question? If needed I will post a new question. |
2015-07-02 06:35:14 -0600 | commented question | Quickly capture N frames and continue with live feed |
2015-07-02 04:49:01 -0600 | commented question | Quickly capture N frames and continue with live feed currentFrame is added to the vector. Does the vector then only contain references as well? This works perfectly fine if I include my code to process each frame within the for loop, but that delays the image capture. What can be the workaround for this? |
2015-07-02 04:34:46 -0600 | commented question | Quickly capture N frames and continue with live feed Here is the runtime error I get First-chance exception at 0x00007FFF1EB42262 (opencv_core2410d.dll) in LearnCPP11.exe: 0xC0000005: Access violation reading location 0x00000028F52A2840. Unhandled exception at 0x00007FFF1EB42262 (opencv_core2410d.dll) in LearnCPP11.exe: 0xC0000005: Access violation reading location 0x00000028F52A2840. |
2015-07-02 04:30:51 -0600 | asked a question | Quickly capture N frames and continue with live feed I would like to capture N images, process them, and do something with the live feed from the camera. I start the camera, capture 30 frames, and store them in a vector of Mat. Now when I try to access or process the vector, I get a runtime error. I am using a Point Grey camera. I think I am missing something really basic. Kindly guide me: what could be wrong? |
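The likely culprit (and, judging by the later comment in this log, what clone() fixed) is that cv::Mat copies are shallow: pushing the same capture buffer into the vector N times leaves N headers all pointing at one block of pixels that the driver keeps overwriting or releasing. The NumPy sketch below is my illustration of the same aliasing problem and the clone()-style fix; the variable names are invented for the example.

```python
import numpy as np

# The camera driver reuses one buffer for every grab (stand-in below).
buffer = np.zeros((4, 4), dtype=np.uint8)

frames_aliased, frames_cloned = [], []
for i in range(3):
    buffer[:] = i                        # driver overwrites the buffer in place
    frames_aliased.append(buffer)        # stores a reference: all entries alias it
    frames_cloned.append(buffer.copy())  # deep copy, like cv::Mat::clone()

# frames_aliased now holds three views of the LAST frame only;
# frames_cloned holds three independent frames, as intended.
```

In the C++ original, the fix is `framesToProcess.push_back(currentFrame.clone());` so each stored Mat owns its own pixel data, and accessing the vector after the driver releases its buffer no longer reads freed memory.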
2015-06-11 05:22:48 -0600 | commented question | Detect Multiple LEDs and their flashing frequency @LBerger you mean X axis time and Y axis the contour center, for all the frames? Pardon my ignorance, but does gravity center mean the geometric center, or something else? |
2015-06-11 04:22:45 -0600 | commented question | Detect Multiple LEDs and their flashing frequency @LBerger, can some filter be used to identify the frequency, like a Gabor filter or a Kalman filter? I have only read introductory notes about these filters, so I don't have much of an idea yet. |
2015-06-11 02:03:33 -0600 | edited question | Detect Multiple LEDs and their flashing frequency Hi all, I am new to OpenCV. I am trying to detect the position and frequency of multiple LEDs using OpenCV (code attached as code.png). Kindly guide me how I can achieve this. I couldn't use the HSV-conversion method because there may be other lights brighter than the LEDs as well. Here is the basic logic: 1. The LEDs flash at predefined rates. My camera has been set to 90 fps, and the LEDs have frequencies of 90 Hz, 45 Hz, 30 Hz, and 15 Hz (these frequencies and the camera frame rate are known parameters). 2. Now I need to find the location of these lights within the camera frame in any lighting condition, be it night, where the light is the brightest in the room, or sunlight, where it may not be the brightest object in the scene. I would appreciate the help. |
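One standard way to exploit the known frequencies (a sketch of the general technique, my own rather than anything from the thread) is to record each candidate pixel's intensity over a run of frames and take the dominant bin of its Fourier spectrum: a sunlit wall has its energy at 0 Hz, while a 15 Hz LED peaks at 15 Hz regardless of how bright the scene is. Two caveats follow from sampling theory: a 90 Hz LED cannot be distinguished by a 90 fps camera (it aliases to DC), and 45 Hz sits exactly at the Nyquist limit.

```python
import numpy as np

def dominant_hz(signal, fps):
    """Dominant frequency (Hz) of an intensity time series sampled at
    fps frames per second, ignoring overall brightness by subtracting
    the mean before transforming."""
    centred = np.asarray(signal, dtype=float) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(centred))
    freqs = np.fft.rfftfreq(len(centred), d=1.0 / fps)
    return freqs[int(np.argmax(spectrum))]
```

Because the mean is removed first, a constant bright light (sunlight, a lamp) yields a flat spectrum and a 0 Hz result, while a blinking LED stands out at its blink rate even when it is not the brightest object in the frame.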
2015-06-11 01:49:18 -0600 | commented question | Detect Multiple LEDs and their flashing frequency @LBerger I wrote the attached code just to detect the contours in each threshold image. I decided to work on static images first; then I will work with video. Up to this point, the contours from all the frames (as you suggested, 270 frames) are stored in a vector. Now kindly guide me how I can check the value of the pixel at each contour, and then select the contours of interest from this list. |
2015-06-11 01:49:18 -0600 | received badge | ● Enthusiast |
2015-06-08 08:06:11 -0600 | commented question | Detect Multiple LEDs and their flashing frequency Yes, it may be coming from the window... or the entire setup may be outside, where there are many light sources with or without a frequency. I will go through the sample you mentioned and will ask for help in case of confusion. :) This sample is too complicated for me to understand on my own. Is there any URL that explains it in detail? |