2016-11-03 17:02:31 -0600 | answered a question | Video Stitching of overlapped videos Here is an example using frames rather than time; hope this helps. |
2016-11-03 07:53:09 -0600 | commented question | Video Stitching of overlapped videos I think you have lost me on when they need to swap between videos, but you can use multiple captures, e.g. cap1 = cv2.VideoCapture("video1.mpeg"). Are you trying to detect when they are "similar" and then perform the swap? That would depend on how similar they are and what features you can use. |
2016-11-03 05:11:43 -0600 | commented question | Rotation of word and cropping Do you know the size of the characters you are interested in? From my experience, the best results come from detecting something with strong features that the text sits next to, and then applying Tesseract at a known offset; that does not mean it will work for you. Alternatively, detecting the text directly might work better; it depends on what type of artifacts you expect. |
2016-11-03 05:04:36 -0600 | commented question | error python Can you gradually take out all the lines of code that differ from a standard example, until it is something like this and working: http://docs.opencv.org/3.0-beta/doc/p.... Then we can see which part it does not like. |
2016-11-03 04:56:43 -0600 | commented question | Video Stitching of overlapped videos Okay, I see now: the x axis is time. So will the end of video1 have the same frames as the start of video2? And what you want to do is swap from video1 to video2 and then video3 seamlessly? |
2016-11-03 03:40:42 -0600 | commented question | how to identify noise in an image It's a bit vague: do you want to count how many pixels are noise? Are you trying to remove noise? Or do you want to classify images into bands, e.g. this image has a low, medium, or high amount of noise? |
2016-11-03 03:35:01 -0600 | commented question | Video Stitching of overlapped videos There is a lot of content above; are you trying to create a panorama? |
2016-11-02 19:44:19 -0600 | commented question | Gstreamer missing plugin Thanks for that, I have not used docker before but I will learn to do so. |
2016-11-02 07:12:28 -0600 | commented question | Gstreamer missing plugin Did you end up solving this? I am going to be doing something similar and giving this a go. Do you mind going into more detail on what you are trying to achieve? |
2015-01-14 17:48:40 -0600 | commented question | how can i read all frames? I just ran your code on a video with 2224 total frames, but only 2221 were processed. I tried a few things to reproduce or fix the error rate, but I always got the same result. As an alternative, to make sure you are getting the right frame, instead of cap >> frame use cap.set(CV_CAP_PROP_POS_FRAMES, k); followed by cap.retrieve(frame); this way you are telling the video to go to position k and retrieve that frame. |
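The seek-then-retrieve pattern above can be sketched without a real video file by using a stand-in capture class to show the call order; the FakeCapture class below is hypothetical and exists only for illustration, but with a real cv2.VideoCapture the same set-then-retrieve pair applies.

```python
# Hypothetical stand-in for cv2.VideoCapture, used only to show the
# seek-then-retrieve call order; with a real capture, pass
# CV_CAP_PROP_POS_FRAMES (value 1 in the old C API) as the property id.
class FakeCapture:
    def __init__(self, frames):
        self.frames = frames
        self.pos = 0

    def set(self, prop_id, value):
        # Mirrors cap.set(CV_CAP_PROP_POS_FRAMES, k): seek to frame k.
        self.pos = int(value)
        return True

    def retrieve(self):
        # Return (success, frame) for the frame at the seeked position.
        if 0 <= self.pos < len(self.frames):
            return True, self.frames[self.pos]
        return False, None

cap = FakeCapture(["frame%d" % i for i in range(5)])
cap.set(1, 3)               # seek to frame k = 3
ok, frame = cap.retrieve()  # fetch exactly that frame
```

The point of the design is that seeking by position removes any drift between the frame counter and the frames actually decoded by the >> operator.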
2015-01-07 16:31:57 -0600 | commented answer | How to make a hue distance histogram ? As long as the function computeHueDistance does what is required, that is fine; what I wrote was only an example. |
2015-01-05 22:25:47 -0600 | commented answer | How to make a hue distance histogram ? I only know how to use the pointers off the top of my head, and unfortunately not any other way. It was my quick attempt at implementing the formula: the change in the hue channel over the x and y axes. Below is a brief example of what the array notation is equivalent to in matrix notation. To get the pixel above position (x,y), i.e. the value at y-1: For a 3 channel image [((y-1)*img_hsv.cols*3)+(x*3)+0] => [x][y-1][0] // x position, y position, channel. For a 1 channel image [((y-1)*img_hsv.cols)+x] => [x][y-1] // x position, y position. If there is a better notation you prefer, use it and I am happy to help convert it into something you can understand. |
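The index arithmetic above can be wrapped in a small helper; a minimal sketch (the function name flat_index is my own), assuming row-major storage with interleaved channels, which is the layout a continuous 8-bit cv::Mat uses.

```python
def flat_index(x, y, channel, cols, channels=3):
    # Row-major with interleaved channels: each image row occupies
    # cols * channels consecutive values in the flat data array.
    return (y * cols + x) * channels + channel

# The pixel above (x, y) = (5, 10) in a 3-channel image 640 columns wide:
idx3 = flat_index(5, 10 - 1, 0, cols=640)              # [x][y-1][0]
# The same position in a 1-channel image:
idx1 = flat_index(5, 10 - 1, 0, cols=640, channels=1)  # [x][y-1]
```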
2015-01-05 01:18:56 -0600 | received badge | ● Teacher (source) |
2015-01-02 22:05:49 -0600 | answered a question | How to make a hue distance histogram ? The structure should be as follows: Load image -> convert to HSV -> implement the stated algorithm on the H channel -> calculate the histogram on the new image. I have quickly put some code together to demonstrate how I would do this; I hope it is what you are after. I have only implemented the basic structure, which you can go through and add to/tweak as needed. This is only a guide; if you need anything explained in more detail feel free to ask. |
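A pure-Python sketch of the middle step of that pipeline (the hue-distance computation on the H channel), using nested lists in place of a cv::Mat; treating the distance as a central-difference gradient magnitude in x and y is my reading of the stated algorithm, not a confirmed detail.

```python
def hue_distance(h):
    # h: 2D list of hue values; output has the same shape, borders left 0.
    rows, cols = len(h), len(h[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            dx = abs(h[y][x + 1] - h[y][x - 1])  # change along x
            dy = abs(h[y + 1][x] - h[y - 1][x])  # change along y
            out[y][x] = dx + dy
    return out

# The final histogram step is then a simple count per output value:
dist = hue_distance([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```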
2015-01-01 04:26:32 -0600 | received badge | ● Enthusiast |
2014-12-30 20:55:15 -0600 | answered a question | Canny (OCL, 3-beta) can not detect connected contour of black square on a white background. I just ran it through the sample code in this link and it worked fine for me. They used: Canny(imgGray, imgGrayEdges, lowThreshold, lowThreshold*3, 3); where lowThreshold is a value between 0 and 100. |
2014-12-29 08:06:43 -0600 | commented question | unable to detect face using example code (objectDetection.cpp) give in opencv 2.4.9 Check that capture = cvCaptureFromCAM( -1 ); is the same as in the webcam example you ran. Then directly afterwards grab a frame with frame = cvQueryFrame( capture ); and view it immediately. If this does not work, copy and paste what worked when you loaded from the webcam. |
2014-12-28 08:02:33 -0600 | commented question | How to make a hue distance histogram ? Distance of the hue is the same as an edge detection on the hue channel. If you don't want to use an edge detector, you can apply the subtraction formula you described above to the hue channel and then take the histogram of the result. |
2014-12-28 03:18:38 -0600 | commented question | How to make a hue distance histogram ? I am confused about what the question is exactly, but from what I have gathered you first want to get a histogram of the hue values. To do this, convert the image from BGR to HSV and run calcHist on the first channel; this will be the hue values. |
2014-12-28 03:18:37 -0600 | answered a question | How to find the hsv value of red object in a particular image? The code can be found in this link. For more information on HSV see this link. Once you perform hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), the image is converted from BGR to HSV. |
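As an illustration of what that conversion produces, the HSV value of pure red can be computed with the standard-library colorsys module; the 0-179 hue scaling shown matches what OpenCV uses for 8-bit images (colorsys itself works in 0.0-1.0 ranges).

```python
import colorsys

# Pure red in RGB. Note that cv2 frames are BGR, so reverse the channel
# order before converting by hand like this.
r, g, b = 255, 0, 0
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
opencv_h = int(h * 179)  # OpenCV stores hue as 0-179 in uint8 images
```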
2014-12-28 03:18:37 -0600 | commented question | unable to detect face using example code (objectDetection.cpp) give in opencv 2.4.9 Does the output window show any image, or is it black? If the image does not load, it might be an issue getting the camera to load. |
2014-12-07 18:07:48 -0600 | commented question | Building opencv 3.0-beta on linux. Something to note: make sure you are using 64-bit; I had some issues when running on 32-bit Ubuntu. |
2014-11-26 04:53:05 -0600 | answered a question | Help detecting/tracking circles with findContours
- Find the width and height of the contour with a bounding box, like this link bounding_rects.
- Also find the area (A1) of the contour with this link contour_area.
- Using either width, height, or their average, assume that is the diameter of your circle (D), where the radius r = D/2.
- Calculate the ideal area A2 = pi * r^2; this is the ideal area you are after.
- Compare A1 and A2 with something like: circular = 1 - (abs(A1-A2)/A2)
The circular value calculated should be a percentage of how circular the contour is; if this number is high you can use the width or height as the diameter, if not, it is not a circle. |
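The steps above condense into a small helper; a minimal sketch (the function name circularity is my own), using only the bounding-box size and the measured contour area.

```python
import math

def circularity(contour_area, width, height):
    # Treat the average bounding-box side as the circle's diameter D,
    # compute the ideal area A2 = pi * r^2, and compare it to the
    # measured contour area A1 via 1 - |A1 - A2| / A2.
    diameter = (width + height) / 2.0
    r = diameter / 2.0
    ideal_area = math.pi * r * r
    return 1.0 - abs(contour_area - ideal_area) / ideal_area

# A perfect circle of radius 10 inside a 20x20 bounding box scores 1.0:
score = circularity(math.pi * 10 ** 2, 20, 20)
```

In practice you would feed in cv2.contourArea(contour) and the width/height from cv2.boundingRect(contour), then threshold the score to decide whether the contour is a circle.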
2014-11-25 21:53:41 -0600 | commented question | Merge multiscale images to get a better contour It really depends on what you are trying to improve: do you find you have too many contours and want to cut down, not enough, or a combination? How has the image been filtered to get the contours? There could be many reasons why it's not performing; the idea is to find the variation that is causing the issue, then use merging to improve it. It could be as simple as iterating through all the contours and keeping the ones that appear in all 3 images. I hope this gives you some insight on potentially where to start. |
2014-11-25 16:35:18 -0600 | answered a question | Changing camera resolution in Python ret is the return value from the function set(). I assume ret = 1 if successful and ret = 0 if unsuccessful. The first parameter states what is being set: 3 = width and 4 = height. The second parameter is the value to set it to. |
2014-11-25 16:23:30 -0600 | commented question | How to determine the area covered by vehicles in a road I am a little confused about the goal you are trying to achieve. Is the task to find the distance traveled? |
2014-11-05 01:24:56 -0600 | answered a question | Installation Open CV 2.4.9 on ubuntu 14.04 LTS use sudo as seen below:
|
2014-09-30 20:16:35 -0600 | commented answer | Constant Time Random Access to Video Frames Using your notation it will be O.set(CV_CAP_PROP_POS_FRAMES, n), and you can call it at the start. From there you can either step through the video using n or as you would normally. |
2014-09-29 23:16:40 -0600 | answered a question | Constant Time Random Access to Video Frames Hi, I remember when learning OpenCV I did an example with the old C API that allowed the video to jump to a frame using the track bar. You can see the documentation under VideoCapture::set with this link, and with C++ it should look something like this: |
2014-09-29 23:05:05 -0600 | commented question | Is it normal for "new VideoCapture()" to take AGES? Hi, from my experience with IP cameras they do take a while to connect with OpenCV; from memory it would take about 30 seconds. |
2014-09-20 07:11:16 -0600 | commented answer | Extracting features with unclean canny detection contours is vector<vector<Point>>, contours[i] is vector<Point>, and contours[i][j] is Point; so to access the values you will need to use contours[i][j].x and contours[i][j].y |
2014-09-18 17:47:06 -0600 | commented answer | Extracting features with unclean canny detection There is no easy way to extend the lines that I know of; you will need to go through the points of the contours and perform the operation yourself. Below is something I just googled to do so. Filtering contours can be a messy task because you have to compare points between different contours. My suggestion would be to play around for a better convex hull result.
for (size_t i = 0; i < contours.size(); i++) { // use contours[i] for the current contour
    for (size_t j = 0; j < contours[i].size(); j++) { // use contours[i][j] for the current point
    }
} |
2014-09-18 08:14:16 -0600 | commented answer | Extracting features with unclean canny detection Could you share the code you used? You are going to have to filter the points from the contours, I will see if I can give you some pointers. |
2014-09-18 01:30:23 -0600 | answered a question | Extracting features with unclean canny detection Hi, depending on what the original image is, there may be better filtering to perform first to get a more solid result. Below is a brief answer to your questions.
1. This link is a tutorial on applying contours to an image after a Canny edge detection. For more information on contours see this link.
2. For this you might want to use a convex hull; see this link.
3. This should occur in step two, when you are selecting the contours. |
2014-09-17 22:46:22 -0600 | answered a question | How is the edge detection in this video achieved? I am unfamiliar with this product, but it does look like Canny, also using contours. To get repeatable results an external lighting source (LEDs) should be used to light the paper, or you should at least be in a well-lit room. If the lighting is correct the filtering should only require thresholding, erodes, and dilates. There might also be some calibration to help, like a background subtraction removing any imperfections before starting. |
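The erode step mentioned above can be illustrated in pure Python on a binary grid (nested lists stand in for an image here; in practice cv2.erode with a 3x3 kernel does this in one call).

```python
def erode3x3(img):
    # 3x3 binary erosion: keep a pixel only if its entire 3x3
    # neighbourhood is set; borders are left 0 for simplicity.
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            out[y][x] = min(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

# A 3x3 block of ones in a 5x5 image shrinks to its single centre pixel:
img = [[1 if 1 <= y <= 3 and 1 <= x <= 3 else 0 for x in range(5)]
       for y in range(5)]
eroded = erode3x3(img)
```

Dilation is the mirror image (take max instead of min), which is why an erode followed by a dilate cleans up speckle noise without shrinking the main shape.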
2014-09-14 19:59:16 -0600 | answered a question | CV_CAP_PROP_SETTINGS working on opencvsharp not on opencv?? Hi, I am confused about what you mean by a configuration window; no window should come up that I am aware of. When using capture.set() you are adjusting the camera settings; I grabbed some examples from the link below: http://docs.opencv.org/trunk/doc/user_guide/ug_highgui.html • CV_CAP_PROP_FRAME_WIDTH – Frame width in pixels. • CV_CAP_PROP_FRAME_HEIGHT – Frame height in pixels. • CV_CAP_PROP_FPS – Frame rate in FPS. Hope this helps. |
2014-09-12 00:45:40 -0600 | received badge | ● Supporter (source) |