This forum is disabled, please visit https://forum.opencv.org

2020-06-29 18:25:42 -0600 | received badge | ● Popular Question (source) |

2019-02-25 07:29:44 -0600 | received badge | ● Popular Question (source) |

2013-07-29 19:38:12 -0600 | asked a question | How to find the average intensity of OpenCV contours in real time I have an image containing roughly 50 to 100 small contours, and I want to compute the average color of each contour in real time (about 25 frames per second at 960 x 480 pixels). The approaches I could think of were:
- Draw each contour with the FILLED option, use the result as a mask over the original image, and compute the masked average. At first glance this won't be real-time, since it scans the whole image once per contour.
- Study OpenCV's implementation of drawContours with the FILLED option and adapt it. But the code is complex and not readily understandable.
- Compute the minimum-area rectangle of each contour, find all the points inside the rectangle using a transformation, and average those that are non-zero. Again, a complex approach.
Is there an easier, more efficient way to do this? |

2013-06-21 13:12:57 -0600 | asked a question | polylines function in Python throws an error I'm trying to draw an arbitrary quadrilateral over an image using the polylines function in OpenCV, but the call throws an error. points is a numpy array (the image size is 1280x960) and img is an ordinary image that I can imshow without problems. For now I work around it by drawing the lines between the points myself, but I'm looking for a more elegant solution. How should I fix this error? |

2013-06-21 02:49:34 -0600 | received badge | ● Student (source) |

2013-06-20 19:33:57 -0600 | asked a question | Inverse Perspective Mapping -> When to undistort? BACKGROUND: I have a camera mounted on a car, facing forward, and I want to find the road markings. To do this I transform the image into a bird's-eye view, as seen from a virtual camera placed 15 m in front of the real camera and 20 m above the ground. I implemented a prototype using OpenCV's warpPerspective function. The perspective transformation matrix is obtained by defining a region of interest on the road, computing where the four corners of the ROI project in both the front-facing and the bird's-eye cameras, and passing these two sets of four points to getPerspectiveTransform. This successfully transforms the image into a top view. QUESTION: When should I undistort the front-facing camera image? Should I undistort first and then transform, or transform first and then undistort? If the former, which camera matrix should I use to project the points onto the bird's-eye camera? Currently I use the same raw camera matrix for both projections. Please ask for more details if my description is confusing! |

2013-06-20 19:23:45 -0600 | received badge | ● Supporter (source) |

Copyright OpenCV foundation, 2012-2018. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.