2017-08-08 00:48:33 -0500 commented question Determining the pattern orientation of a spatiotemporal image
Where is the pattern in the last image? Are those white lines on the right a part of the pattern as well?

2017-08-07 09:05:52 -0500 answered a question Determining the pattern orientation of a spatiotemporal image
Depends on what kind of pattern it is - if it's always similar to the one you've shown here (i.e. multiple thin lines), you can try the Hough transform: detect all the lines and then just average them. In OpenCV, detected lines are described by two points, so just average all the first points describing the lines, and then do the same with the second points. Then it's enough to either visualize the result or calculate the horizontal angle, depending on what kind of results you are expecting.

2017-08-07 05:24:28 -0500 answered a question Extract Curves Contours
You can use the k-means algorithm to segment the curves directly. You just have to experiment a little with the number of clusters to use, because you have to reserve some clusters for the colors of the font, grid, axes and plot backgrounds, and also GUI elements such as the buttons with the magnifying glass and hand. I think 11 clusters should do the trick (1 for each of the 4 curves and 7 for the plot elements listed above). Observe the results and try adding or removing clusters if you don't get the results you want. This solution could be fully automated: I presume that all the pictures you are going to analyze will be identical in terms of the interface and only the curves will change, so the number of pixels constituting each of the other 7 clusters should be very similar in every picture (the number of points constituting the curves would probably also be similar, for that matter), so you should be able to identify your curves just by analyzing the number of points belonging to each cluster.
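That identification step can be sketched in plain Python; the function name, tolerance and reference sizes below are hypothetical choices for illustration, not part of the original suggestion:

```python
def identify_curve_clusters(cluster_sizes, reference_sizes, tolerance=0.1):
    """Split k-means clusters into interface clusters and curve clusters.

    cluster_sizes: {cluster_id: pixel count} for the current picture.
    reference_sizes: pixel counts of the 7 interface clusters, measured
    once on a sample picture (a hypothetical calibration step).
    A cluster whose size is within `tolerance` (relative) of a known
    interface-cluster size is treated as interface; the rest are curves.
    """
    curves = []
    for cid, size in cluster_sizes.items():
        is_interface = any(
            abs(size - ref) <= tolerance * ref for ref in reference_sizes
        )
        if not is_interface:
            curves.append(cid)
    return curves

# 11 clusters: 7 interface-like sizes plus 4 smaller curve clusters
sizes = {0: 50000, 1: 12000, 2: 11800, 3: 900, 4: 880, 5: 910,
         6: 870, 7: 3000, 8: 2950, 9: 1500, 10: 1480}
interface_refs = [50000, 12000, 11800, 3000, 2950, 1500, 1480]
identify_curve_clusters(sizes, interface_refs)  # [3, 4, 5, 6]
```

The reference sizes would come from running k-means once on a sample screenshot and recording the pixel counts of the interface clusters.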
The only problem you may have is when the curves are of the same color as some of the interface elements, e.g. if one of them has the same color as the font, but it's easy to filter out the unwanted points in such a case, because you can analyze the position of the points belonging to the cluster - if they are near the edges of the image, then it's an interface element. These links can get you started, along with the docs I've linked above: http://www.pyimagesearch.com/2014/05/... http://www.pyimagesearch.com/2014/07/... Happy birthday ;)

2017-08-03 15:25:22 -0500 commented answer Minimaze window cv2.imshow()
I'm not really familiar with Python, so I can't answer that. The typical solution in C++ involves integrating with Qt. Qt is also available for Python, so maybe you should check whether it's possible for this language as well.

2017-08-03 09:21:54 -0500 commented question Applying a particular texture in an image ROI
In general there is no single answer.

2017-08-03 08:30:51 -0500 answered a question Minimaze window cv2.imshow()
It seems it's not really possible. As you've observed, there is no window property flag that would allow you to do that. By the way, if you use the built-in OpenCV window functions for anything more than quickly checking the results of your algorithms, you should really use some other GUI facilities (e.g. Qt integration), as the built-in highgui is really limited and not really suitable for any application that's to be shown to a normal user. This is my experience, at least.

2017-08-03 08:23:30 -0500 commented question Applying a particular texture in an image ROI
That's impossible to answer unless you provide more details. How is it going to be pasted into that contour? As a rectangle, or stretched to fill the whole contour? If as a rectangle - where should this rectangle begin? Can it be cropped if it doesn't fit as a whole, or scaled down? What is the forehead contour? It's not shown in the article.
Is the texture going to be scaled, or is it guaranteed that you can just paste it at its original size?

2017-08-03 08:11:37 -0500 commented question Applying a particular texture in an image ROI
Is the texture always rectangular, or may it be of various shapes?

2017-08-03 08:10:45 -0500 commented question Applying a particular texture in an image ROI
Ok, and are the shape and size of the contour and texture the same?

2017-08-03 08:00:10 -0500 commented question Applying a particular texture in an image ROI
It depends. Is the image contour rectangular? Does it have the same shape and proportions as the texture you want to paste?

2017-08-03 07:41:36 -0500 commented answer Drawing HoughLins to a certain point
What is the relation between pt1.y and pt2.y, and between diagonalLine[i-1] and diagonalLine[i]? Is the former always bigger than the latter in the case of both point pairs? Don't you need to check that in each case?

2017-07-27 09:00:09 -0500 received badge ● Nice Answer (source)

2017-07-27 08:34:35 -0500 answered a question Drawing HoughLins to a certain point
I understand that you need two things:
1. Detect if a vertical line intersects a diagonal one (and by diagonal we really mean one that is neither vertical nor horizontal)
2. Calculate the point of intersection

1. Intersection detection
Let v1 and v2 be the points that describe the ends of a vertical line, and d1 and d2 the analogous points of a diagonal one. Also, let's assume v1.y > v2.y and d1.y > d2.y. An intersection takes place if v1.y > d2.y and at the same time v2.y < d1.y. So, for each vertical line you check, determine which point has the greater y (i.e. vertical) coordinate, do the same for the diagonal one, and then do the check as described above.

2. Finding the intersection point
You need the lines' equations, which is trivial since you have two points of the diagonal line, and in the case of the vertical line the equation is just x = v1.x. Put v1.x into the diagonal line's equation to find the intersection point.
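The two steps above can be sketched in plain Python (the function names are mine; points are (x, y) tuples):

```python
def intersects(v1, v2, d1, d2):
    """Check the y-range overlap condition described above for a vertical
    segment (v1, v2) and a diagonal segment (d1, d2)."""
    # Order each pair so the first point has the greater y coordinate
    if v1[1] < v2[1]:
        v1, v2 = v2, v1
    if d1[1] < d2[1]:
        d1, d2 = d2, d1
    return v1[1] > d2[1] and v2[1] < d1[1]

def intersection_point(v1, d1, d2):
    """Intersect the vertical line x = v1.x with the line through d1 and d2.
    Assumes the diagonal is not itself vertical (d1.x != d2.x)."""
    x = v1[0]
    slope = (d2[1] - d1[1]) / (d2[0] - d1[0])
    return (x, d1[1] + slope * (x - d1[0]))

# Vertical segment at x = 5 crossing a diagonal from (0, 8) to (10, 2)
intersects((5, 10), (5, 0), (0, 8), (10, 2))   # True
intersection_point((5, 10), (0, 8), (10, 2))   # (5, 5.0)
```

Note that for finite segments the y-overlap test alone is not sufficient: you should also check that v1.x lies between d1.x and d2.x before computing the point.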
When you have it, just replace the y coordinate of the appropriate end of your vertical line with the y coordinate of the intersection point. Voila - your vertical line now ends at the point of intersection with the diagonal line.

2017-07-25 15:13:33 -0500 commented question Simple 2d Ransac points filtring (iOS)
One of the answers links to StackOverflow. Have you read about fitLine() there? It takes points, not an image. See the docs: http://docs.opencv.org/2.4/modules/im...

2017-07-24 05:29:57 -0500 commented question Simple 2d Ransac points filtring (iOS)
Have you tried this? http://answers.opencv.org/question/46...

2017-07-24 05:29:42 -0500 received badge ● Citizen Patrol (source)

2017-07-24 05:29:33 -0500 received badge ● Organizer (source)

2017-07-18 15:57:47 -0500 answered a question OpenCV Python face detection fails on image with high levels of background light
Try detailEnhance():

2017-07-18 15:55:46 -0500 answered a question OpenCV InputArray to std::vector
There's no way of doing that directly unless it's a vector<Mat> - the interface of InputArray doesn't provide such functionality. If you really need to treat it as a vector, call getMat() or getMatVector() and transform the resulting Mat object into a vector. If it is a vector<Mat>, then you can use _InputArray::getMat(idx) to get the idx-th Mat of the vector, so you can e.g. iterate through the vector's elements that way.

2017-07-17 11:32:48 -0500 commented question Improving OpenCV performance when stitching 360° mosaics
But why can't it just be pasted as a whole onto the leftmost or rightmost part of the panorama? The center of a 360° panorama is arbitrary.

2017-07-17 06:37:44 -0500 commented question GPUMat and Mat
Correct your question, as it's currently not clear what you are asking for. Also, what do you mean by kernel?

2017-07-14 16:37:19 -0500 commented question image pixel scaling in qt
What's the problem, and how is it related to OpenCV?
2017-07-14 16:03:44 -0500 commented question Create stitched image from known calibrated points
I know it would be a waste of resources in theory, but unless you care about performance very much, it might just be easier to implement, and in some cases ease of implementation > performance. It depends on the requirements and constraints - I don't know what yours are.

2017-07-14 16:03:09 -0500 answered a question Create stitched image from known calibrated points
I know it would be a waste of resources in theory, but unless you care about performance very much, it might just be easier to implement, and in some cases ease of implementation > performance. It depends on the requirements and constraints - I don't know what yours are.

2017-07-14 03:34:19 -0500 answered a question How to get around Overlapping Problem?
It depends on your setup, but assuming that the objects will be falling more or less at the same distance/plane with respect to the mirrors/camera sensor (within the stream limited by e.g. the green lines in the picture), you can easily calibrate the camera with a two-sided chessboard (identical on each side) to determine the correspondence of pixels between both views. Then you compare the corresponding areas - if you see only one object in one view but two in the other, you know that you in fact have two overlapping objects. This approach will be harder to implement if the stream of objects is very wide, i.e. the distance to the camera sensor varies greatly with respect to the objects' size (e.g. the blue lines in the picture). If you also detect the individual seeds, you can try to detect overlaps by analyzing the height-to-width ratio of the bounding box: overlapping seeds will typically have a height-to-width ratio much closer to 1. Depending on your setup and the speed and dosing of the falling objects, this may actually be quite accurate by itself, and you may not even need a second view to count accurately. This approach may work regardless of the width of the stream.
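That ratio check can be sketched as follows; the threshold values and the orientation-agnostic min/max ratio are my own assumptions and would need tuning on real data:

```python
def looks_overlapped(bbox_w, bbox_h, single_ratio=0.4, tolerance=0.2):
    """Flag a detected blob as a likely overlap of two seeds.

    A single elongated seed has a short-to-long side ratio far from 1
    (assumed here to be around `single_ratio`); two overlapping seeds
    tend to form a blob whose ratio is much closer to 1. Using min/max
    makes the check independent of the seed's orientation.
    """
    ratio = min(bbox_w, bbox_h) / max(bbox_w, bbox_h)
    return ratio > single_ratio + tolerance

looks_overlapped(40, 16)   # False - ratio 0.4, a typical single seed
looks_overlapped(40, 36)   # True - ratio 0.9, likely two seeds
```

In practice `single_ratio` would be estimated from the distribution of bounding-box ratios of known single seeds.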
2017-07-14 02:41:42 -0500 commented question Head pose estimation fails with specific image sizes
Are you interested only in the rotation angles, without detecting the face direction, or do you also want the side the face is directed towards?

2017-07-14 02:32:56 -0500 commented answer fog removal C++ code
No problem. You may upvote and accept the answer (the 'tick' sign on the left, right under the voting buttons) if you are satisfied with it.

2017-07-13 12:41:12 -0500 commented question Create stitched image from known calibrated points
Have you tried just using the standard stitching algorithms? I understand that you want to explicitly use these calibration points, but I think the usual algorithm will also find them just fine, as they are very characteristic.

2017-07-13 11:12:36 -0500 commented answer Shape alignement and differences computation
You can also use it for point 1 - just align it with respect to the ideal shape, such as the one you show on the left.

2017-07-13 11:11:23 -0500 commented question Improving OpenCV performance when stitching 360° mosaics
I don't understand why this image had to be cut in half and not used as a whole.

2017-07-13 11:04:23 -0500 commented answer image ROIs in python
Right, forgot to write that.

2017-07-13 10:35:06 -0500 answered a question fog removal C++ code
The problem is that m_DarkChannelImage is of type CV_64F, because you assign to it one of the planes that hold the results of splitting input_image, which is CV_64FC3. If you want to access its pixels, you have to call at<double>(), whereas you call at<uchar>(), which is only suitable for CV_8UC1 matrices.

2017-07-13 03:28:31 -0500 commented answer image ROIs in python
I'm not saying the only way to do it is through trial and error - automatic detection is just not described in this chapter.
There are many techniques that let you detect objects, but there is no general rule for how to do it; which method you choose depends on the type of object, picture quality, color model used and many other factors. I think you got downvoted because people think you could've easily found the answer to your question on the web and didn't show the minimum effort to do so. I agree with them regarding the first part of your question - it's easy to google.

2017-07-13 02:55:27 -0500 answered a question image ROIs in python
As for the first question: 280:340, 330:390 means: take the rectangle that begins at the 280th row and 330th column and ends just before the 340th row and 390th column (the upper bound of a slice is exclusive). Another way to put it: it is the rectangle described by the 4 vertices (280,330), (280,390), (340,330), (340,390), with the rows and columns at the upper bounds excluded. In general, 'i:j' means the range from the i-th element up to (but not including) the j-th. As for the ball: if you look closely, the author doesn't care about the exact position of the ball - it's been copied together with the grassy background. How did the author know where the ball is? Well, by looking at the image - there is no automatic detection here, if that is what you are asking about. The tutorial states that the ball was selected (manually), not detected.

2017-07-13 02:43:36 -0500 commented question fog removal C++ code
It always helps both you and us if you write which line causes the exception.

2017-07-13 02:41:36 -0500 answered a question Shape alignement and differences computation
The ICP algorithm might be of use to you: try fitting your scanning results to the template. There is also a 3D version of this algorithm.

2017-07-12 08:26:02 -0500 commented question Shape alignement and differences computation
Are you operating on 2d or 3d data?

2017-07-12 02:48:23 -0500 commented question How a dictionary is loaded in C++ OpenCV Program?
Yeah, me too, but the potentially troublesome part is loading the data from the .mat file in general, not loading it into cv::Mat specifically.
Also, based on the question, I'm not even sure the OP isn't mixing up the .mat file format with cv::Mat, or whether the data he wants makes sense in the OpenCV context.

2017-07-11 10:54:08 -0500 commented question How a dictionary is loaded in C++ OpenCV Program?
Which problem? :D

2017-07-11 10:34:51 -0500 commented question How a dictionary is loaded in C++ OpenCV Program?
You have to adjust the code to suit your needs; there is no generic code. Google opening Matlab files and binary files in C++ - there's plenty of material on the web. And it doesn't have much to do with OpenCV, by the way, so this is the wrong place to ask these questions.