2020-11-14 01:51:21 -0600 | received badge | ● Popular Question (source) |
2018-09-14 18:14:56 -0600 | commented question | Weird result from MorphologyEx I uploaded a sample image and the actual code I used. Please help! |
2018-09-14 18:14:27 -0600 | edited question | Weird result from MorphologyEx Hi guys, I am trying to use MorphologyEx to open the white area in a fully horizontal way. |
2018-09-14 14:09:45 -0600 | asked a question | Weird result from MorphologyEx Hi guys, I am trying to use MorphologyEx to open the white area in a fully horizontal way. |
2017-12-13 07:02:47 -0600 | received badge | ● Notable Question (source) |
2017-10-16 13:27:32 -0600 | asked a question | 0 running time in GPU methods Hi, I am doing a GPU performance test and measuring the processing time of some general methods |
2017-04-03 18:45:51 -0600 | received badge | ● Taxonomist |
2016-08-24 12:01:12 -0600 | received badge | ● Popular Question (source) |
2016-03-23 14:24:54 -0600 | asked a question | Question about efficient mesh warping Hi, I am doing camera de-calibration using barrel distortion. I have two 2D point arrays, beforeGrids and afterGrids. beforeGrids holds the points of the distorted grid and afterGrids holds points on straight lines. Below is my result: It looks good, but I am looking for a faster way to do the mesh warping. Here is my current dewarping code: The problem with this mesh warping method is that it needs to call AffineTransform cols * rows * 2 times, which makes it slow. I have done everything I could to reduce the processing time of AffineTransform, such as setting an ROI and reusing objects, but I am still in trouble. So I am looking for a faster and smarter way to do mesh warping than calling AffineTransform hundreds of times. OpenCV already provides functions like remap() and calibrateCamera(), but I cannot use them because these functions need several camera distortion coefficients and I don't have that data. That's why I used barrel distortion, which only needs one parameter. It would be great if someone could tell me how to use remap() and calibrateCamera() with points from barrel distortion, or give me a better idea for doing faster mesh warping. |
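One way to avoid calling an affine warp per grid cell, sketched below in pure Python, is to precompute a dense remap()-style coordinate map once by bilinearly interpolating the per-grid-point displacements. The function name, the grid layout (a regular lattice of `(x, y)` tuples), and the argument names are assumptions for illustration, not the asker's actual data structures.

```python
def build_map(before, after, width, height):
    """before/after: 2D lists [rows][cols] of (x, y) grid points on a
    regular lattice covering the image. Returns map_x, map_y where
    map_x[y][x], map_y[y][x] give the source coordinate to sample for
    output pixel (x, y) -- the same convention cv::remap uses."""
    rows, cols = len(before), len(before[0])
    cell_w = width / (cols - 1)
    cell_h = height / (rows - 1)
    map_x = [[0.0] * width for _ in range(height)]
    map_y = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # locate the grid cell containing (x, y)
            gx = min(int(x / cell_w), cols - 2)
            gy = min(int(y / cell_h), rows - 2)
            fx = x / cell_w - gx   # fractional position inside the cell
            fy = y / cell_h - gy
            # bilinear blend of the four corner displacements
            sx = sy = 0.0
            for dy, dx, w in ((0, 0, (1 - fx) * (1 - fy)),
                              (0, 1, fx * (1 - fy)),
                              (1, 0, (1 - fx) * fy),
                              (1, 1, fx * fy)):
                bx, by = before[gy + dy][gx + dx]
                ax, ay = after[gy + dy][gx + dx]
                sx += w * (x + (bx - ax))
                sy += w * (y + (by - ay))
            map_x[y][x], map_y[y][x] = sx, sy
    return map_x, map_y
```

Once the two maps are built (converted to CV_32FC1 Mats), a single cv::remap() call warps the whole image, so the per-cell affine calls disappear entirely and the maps can be reused for every frame.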
2016-03-22 11:49:10 -0600 | received badge | ● Enthusiast |
2016-03-21 11:51:53 -0600 | commented answer | Sphere distortion / barrel grid algorithm? Thanks! Much appreciated! |
2016-03-18 23:40:18 -0600 | commented answer | Sphere distortion / barrel grid algorithm? Thanks! What kind of values should I put in k? |
2016-03-18 19:26:29 -0600 | asked a question | Sphere distortion / barrel grid algorithm? The image below is an example of sphere distortion in Photoshop. When I change the parameter from 0 to 50 to 100, the 2D grid changes. Basically this is what I want to achieve. Below is my current code: The idea behind my code is that I draw a virtual semi-sphere over my image and calculate the distance shift amount based on the original distance from the center. Hope you can understand my function. But the result from this method looks like this: Mine does not look like a sphere :( Does anyone know how to draw sphere grids with a strength parameter? |
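A common way to get a Photoshop-like spherize effect with a single strength parameter is a radial distortion model rather than a geometric semi-sphere. The sketch below assumes the one-parameter model r_dst = r_src * (1 + k * r_src^2), with the radius normalized by some reference length; the function and parameter names are illustrative, not from the asker's code.

```python
import math

def distort_point(x, y, cx, cy, k, norm):
    """Radially distort (x, y) around center (cx, cy).
    k > 0 bulges outward (barrel/sphere-like), k < 0 pinches inward
    (pincushion), k = 0 is the identity. norm is a normalizing length,
    e.g. the image half-diagonal, so r stays in a sane range."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy) / norm          # normalized radius
    scale = 1.0 + k * r * r                # one-parameter radial model
    return cx + dx * scale, cy + dy * scale
```

Applying this to every grid vertex (or, inverted, to every destination pixel via a remap table) produces grids that bow smoothly outward as k grows, which matches the 0 → 50 → 100 slider behavior described above better than shifting points along a hemisphere surface.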
2016-03-17 16:04:30 -0600 | received badge | ● Scholar (source) |
2016-03-17 16:04:30 -0600 | received badge | ● Supporter (source) |
2016-03-17 16:04:15 -0600 | commented answer | Max value of TemplateMatching without normalization Thanks! It really helped |
2016-03-16 16:14:57 -0600 | asked a question | Max value of TemplateMatching without normalization Hi, I have been using template matching for my work a lot, and I know the template matching method returns "the best match" over the whole image even when no such shape is present. And if I normalize the map, the peak always reaches the maximum value even when its confidence is very low. So I am looking for a way to calculate the maximum value that the template matching calculation can possibly produce. (This is different from the max value of the map.) Let's say the template image size is 10x10 in gray scale. What is the maximum possible value of the map created by the template matching method? I am using TM_CCOEFF_NORMED but I am open to other methods too, depending on how easy it is to calculate the max value. |
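For the unnormalized cross-correlation method the theoretical ceiling is easy to state: TM_CCORR scores each location as the sum of T(x,y) * I(x+dx, y+dy) over the template, so with 8-bit images it can never exceed 255 times the sum of the template values (the maximizing patch is all-white). The normalized methods such as TM_CCOEFF_NORMED are instead bounded by 1.0 regardless of content. A minimal sketch, using a made-up 10x10 template rather than the asker's data:

```python
def ccorr_upper_bound(template):
    """Theoretical maximum TM_CCORR-style score for an 8-bit template:
    achieved when every image pixel under the template equals 255."""
    return 255 * sum(sum(row) for row in template)

tmpl = [[128] * 10 for _ in range(10)]   # 10x10 mid-gray template
bound = ccorr_upper_bound(tmpl)          # = 255 * 128 * 100
```

Comparing the actual peak of an unnormalized response map against this bound (rather than normalizing the map, which always stretches the peak to the top) gives an absolute confidence figure that stays low when the match is genuinely poor.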
2015-11-02 19:01:12 -0600 | commented question | C# haar cascade can't read xml It seems that old cascade xml files and recent cascades have different formats.. |
2015-11-02 18:51:58 -0600 | commented question | C# haar cascade can't read xml It's not OpenCV's official wrapper but it's in NuGet and widely used. |
2015-11-02 18:20:23 -0600 | asked a question | C# haar cascade can't read xml Hi, I am having a weird problem and struggling with it now. Above is my code, and it is a really basic cascade loading function that can be found in many places. But this code returns an error. I tried other xml files but I still get the same error. Why does this happen and how do I fix it? Please help!! |
2015-11-02 18:14:32 -0600 | commented question | Haar Cascade training outOfMemory error.. help!! Thank you! |
2015-11-02 16:06:51 -0600 | answered a question | Broken understainding - creating a classifier Have you found a solution for this? I am also getting broken text in my output xml files.. |
2015-11-02 11:32:55 -0600 | asked a question | Haar Cascade training outOfMemory error.. help!! Hi, I need to find patterns in a manually set ROI. The patterns are not complicated so I decided to use Haar Cascade Pattern Detection. In order to do that, I needed to train my samples first using "opencv_traincascade.exe", so I grabbed 38 positive images and a few negative images. For the positive images, I tried two different sizes, 100 x 64 and 50 x 32, to train. But every time I try to run opencv_traincascade.exe, it returns an outOfMemory error: "OpenCV Error: Insufficient memory error (failed to allocate 3.8GB) in cv::OutOfMemoryError, file: c:\builds\2_4_PackSlace-win32-vc12-shared\opencv\modules\core\src\alloc.cpp line 52" When I tried two different positive sample sizes, the memory size in the error message was exactly the same, so I don't think it's the sample size. The command I entered in my cmd was: opencv_traincascade -vec positive.txt -bg negative.txt -data output -featureType LBP Why does this happen and how do I fix it? Please help!! |
2013-11-14 16:41:50 -0600 | asked a question | Question about FlannBasedMatcher Hi, I am trying to train a set of patterns and find a match within a test image. That being said, I have many descriptors from the training data set: cv::Mat descriptor1; cv::Mat descriptor2; cv::Mat descriptor3; cv::Mat descriptor4; cv::Mat descriptor5; //put all train set descriptors in a vector std::vector<cv::Mat> descriptors; descriptors.push_back(descriptor1); ... descriptors.push_back(descriptor5); //add and train FlannBasedMatcher matcher; matcher.add(descriptors); matcher.train(); //match cv::Mat descriptorTest; matcher.knnMatch(descriptorTest, m_knnMatches, 2); //ratio test to get good matches std::vector<cv::DMatch> matches = ratioTest(m_knnMatches); // the resulting matches after the ratio test contain many DMatch objects, for example: DMatch {queryIdx: *, trainIdx: *, imgIdx: 1, distance: *.*} DMatch {queryIdx: *, trainIdx: *, imgIdx: 2, distance: *.*} DMatch {queryIdx: *, trainIdx: *, imgIdx: 0, distance: *.*} DMatch {queryIdx: *, trainIdx: *, imgIdx: 1, distance: *.*} DMatch {queryIdx: *, trainIdx: *, imgIdx: 4, distance: *.*} As you can see, the DMatch objects in the vector come from different trained images - different imgIdx values. As the accuracy is still not very good, I want to try homography estimation but I don't know how to do it with this kind of result. The only homography example I have works on 1 train image and 1 test image. Can you give me some advice on implementing homography estimation in this situation? What else can you think of to improve accuracy as a post process? |
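One plausible approach to the multi-image homography question is to first split the ratio-tested matches by the training image they came from, then run RANSAC homography estimation per group against that one image's keypoints. The grouping step is sketched below in pure Python; `DMatchLite` is a stand-in for cv::DMatch (its field names mirror the real struct), not an OpenCV type.

```python
from collections import defaultdict, namedtuple

# Minimal stand-in for cv::DMatch, for illustration only.
DMatchLite = namedtuple("DMatchLite", "queryIdx trainIdx imgIdx distance")

def group_by_image(matches):
    """Bucket matches by the training image index they refer to."""
    groups = defaultdict(list)
    for m in matches:
        groups[m.imgIdx].append(m)
    return groups
```

With OpenCV, each group holding at least 4 matches would then feed cv::findHomography(queryPts, trainPts, RANSAC) using that image's keypoints; the training image whose homography retains the most inliers is a reasonable best-match candidate, and the inlier count itself doubles as the accuracy post-check the question asks about.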
2013-06-21 16:01:47 -0600 | asked a question | tvl1 optical flow not working Hi, I connected a web cam to the computer and I am trying to draw optical flow points in a window. The camera works and frames are being saved well, but DenseOpticalFlow.calc(...) gets stuck in an infinite loop. Here is my code: What is wrong in my code? |
2013-06-13 12:30:48 -0600 | commented answer | Finding area center of rectangle Thanks for the link. I implemented the centroid-of-polygon method from your link and even posted my code on the page. However, it also has a chance of being placed outside of the polygon.. :( |
2013-06-12 16:09:31 -0600 | commented answer | Finding area center of rectangle And I am pretty sure the midpoint will be placed outside of the rectangle for rectangle #6 in the image, and it also takes too long. |
2013-06-12 16:06:57 -0600 | commented answer | Finding area center of rectangle Shouldn't it be cgix=xsum/total_num_nonzero; cgiy=ysum/total_num_nonzero; ? |
2013-06-12 15:55:02 -0600 | commented answer | Finding area center of rectangle That's not going to work. Equally shaped rectangles may have different midpoints depending on their rotation. |
2013-06-12 15:15:30 -0600 | commented answer | Finding area center of rectangle Thanks for the answer. But I don't quite understand what you are trying to say. Can you write some lines of pseudocode please? |
2013-06-12 14:25:34 -0600 | asked a question | Finding area center of rectangle I am trying to find the area center of various types of rectangles. (Center of gravity and the midpoint of the 4 vertices never work, so please think of a different way.) Please see this image: I have to find the position of the red dots. I have to find areaCenter so that (nearly) area(areaCenter, vertices[0], vertices[1]) = area(areaCenter, vertices[1], vertices[2]) = area(areaCenter, vertices[2], vertices[3]) = area(areaCenter, vertices[3], vertices[0]) I tried many different ways to find the midpoint but none of them covered every type of rectangle. Can anyone give me some ideas? |
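The standard starting point for an "area center" is the shoelace-formula centroid of the polygon, sketched below. To be clear about its limits: for convex quadrilaterals it always lies inside, but for strongly non-convex shapes it can fall outside, as the comments in this thread discuss, so it is a candidate rather than a guaranteed interior point.

```python
def polygon_centroid(pts):
    """Area centroid of a simple polygon given as ordered (x, y) tuples,
    via the shoelace formula. Works for any non-self-intersecting
    polygon, regardless of winding direction."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0      # signed twice-area of this edge's triangle
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                            # signed polygon area
    return cx / (6 * a), cy / (6 * a)
```

When the centroid does land outside a concave shape, a common fallback is to test containment (e.g. cv::pointPolygonTest) and, on failure, snap to the nearest interior point or use a pole-of-inaccessibility search instead.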
2013-05-29 16:52:17 -0600 | asked a question | Best way of masking on warpAffine? Hi, just like using a mask when copying an image: I want to apply warpAffine only to a particular region because I only need a small part of the image, for example the mouth. But the existing warpAffine methods do not seem to accept any masks. Therefore I need to find a smart and easy way to apply a mask to warpAffine in order to reduce running time. Has anyone here thought about this before? Please give me some tips! |
2013-05-27 17:10:19 -0600 | asked a question | need a help on MatOfPoint2f I am rewriting WarpAffine-related code for Android, but I can hardly figure out how to use MatOfPoint2f. My previous code looks like this: (using OpenCvSharp) In Android, the code should look like this: How do I set up the position numbers in MatOfPoint2f? |
2013-04-22 16:08:17 -0600 | asked a question | Detect concave using 4 points of a rectangle? Hi, I have the 4 points of a rectangle and I am trying to check if it is concave. CvPoint[] points = new CvPoint[4]; points[0] = new CvPoint(10,10); points[1] = new CvPoint(10,20); points[2] = new CvPoint(13,13); points[3] = new CvPoint(20,10); There are several ways I can think of, but none of them is efficient in terms of speed and memory. Does anyone know the best way to check whether it is concave? |
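The cheap test here is the classic cross-product sign check, sketched below in Python: a polygon with vertices in order is convex exactly when the z-components of the cross products of consecutive edge vectors all share one sign, so a sign flip means some vertex dents inward. It runs in O(n) with a few scalars of state, no allocation.

```python
def is_concave(pts):
    """pts: ordered (x, y) vertices of a simple polygon.
    Returns True if any interior angle is reflex (polygon is concave)."""
    n = len(pts)
    pos = neg = False
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        # z-component of (b - a) x (c - b)
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross > 0:
            pos = True
        if cross < 0:
            neg = True
    return pos and neg   # mixed signs => a vertex caves in
```

For the four example points in the question, (10,10) (10,20) (13,13) (20,10), the signs come out mixed, so the quad is detected as concave: vertex (13,13) dents inward.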
2013-04-22 12:03:50 -0600 | answered a question | warpPerspective gives unexpected result Okay, I got it by myself. WarpPerspective() doesn't actually give the right result when we want all the contents of the image to stay in the same relative positions. WarpAffine is the answer. The only thing I find annoying is that it only transforms triangles. |
2013-04-21 22:04:13 -0600 | commented answer | warpPerspective gives unexpected result Dude, I attached 6 photos. Look at the upper-right photo. Can't you see that the four parts of the photo are discontinuous? And can't you imagine what the correct one should look like? Do you need more explanation on this? Assume a transformation is made from (0,0)(10,0)(0,10)(10,10) to (0,0)(10,0)(0,10)(15,10) on the image. Because only the bottom-right point moves right, the pixels in the original image should stretch towards the right, with more stretch in the lower part. They must not stretch left, down, or up. Tell me if you need more explanation here. However, the transformed image that WarpPerspective() produced looks like the 6th photo in the attachment. Please find the red horizontal grid I drew. The pixels in the original image actually shifted upwards! |
2013-04-19 19:26:18 -0600 | edited question | warpPerspective gives unexpected result Hi guys, I am implementing "Quad Warping" in order to do a fat/skinny face effect. Quad Warping is quite a lot of work, but I don't know if there are any other options for doing the face effect. Anyway, I have mostly finished writing the code, but the result image does not look right and I wonder if it is a bug in OpenCV. Please see the image below: Can you notice that the pixels have been rotated up even though the size of the resulting rectangle is right? Is this a bug, or correct behavior of WarpPerspective()? |