Papercut's profile - activity

2020-11-14 01:51:21 -0600 received badge  Popular Question (source)
2018-09-14 18:14:56 -0600 commented question Weird result from MorphologyEx

I uploaded a sample image and the actual code I used. Please help!

2018-09-14 18:14:27 -0600 edited question Weird result from MorphologyEx

Weird result from MorphologyEx Hi guys, I am trying to use MorphologyEx to open the white area in a fully horizontal way.

2018-09-14 14:09:45 -0600 asked a question Weird result from MorphologyEx

Weird result from MorphologyEx Hi guys, I am trying to use MorphologyEx to open the white area in a fully horizontal way.

2017-12-13 07:02:47 -0600 received badge  Notable Question (source)
2017-10-16 13:27:32 -0600 asked a question 0 running time in GPU methods

0 running time in GPU methods Hi, I am doing a GPU performance test and measuring the processing time of some general methods

2017-04-03 18:45:51 -0600 received badge  Taxonomist
2016-08-24 12:01:12 -0600 received badge  Popular Question (source)
2016-03-23 14:24:54 -0600 asked a question Question about efficient mesh warping

Hi,

I am doing camera de-calibration using barrel distortion.

I have two 2D point arrays, beforeGrids and afterGrids. beforeGrids holds the points of the distorted grid, and afterGrids holds the points on straight lines. Below is my result:

[image: dewarped grid result]

It looks good. But I am looking for a faster way to do the mesh warping. Here is my current dewarping code:

    public static IplImage Dewarp( IplImage image, double distortionParam )
    {
        // create 2D array of points
        CvPoint[][] gridsBefore = GetWarpedGrid( image.Width, image.Height, distortionParam );
        CvPoint[][] gridsAfter = GetStraightGrid( image.Width, image.Height );

        // make lists of point arrays. Each array has 3 points.
        List<CvPoint[]> trianglesBefore = GetTriangles( gridsBefore );
        List<CvPoint[]> trianglesAfter = GetTriangles( gridsAfter );

        // apply affine transform for each triangle (MESH WARPING)
        IplImage result = new IplImage( image.Size, image.Depth, image.NChannels );
        for( int i = 0; i < trianglesAfter.Count; i++ )
        {
            DrawAffineTransformedTriangle( 
                image, ref result, trianglesBefore[ i ], trianglesAfter[ i ] );
        }
        return result;
    }

The problem with this mesh-warping method is that it has to call AffineTransform cols * rows * 2 times, which makes it slow. I have done everything I can to reduce the processing time of each AffineTransform call, such as setting the ROI and reusing objects, but I am still stuck.

So I am looking for a faster and smarter way to do mesh warping than calling AffineTransform hundreds of times.

OpenCV already provides functions like remap() and CalibrateCamera(), but I cannot use them as-is because they need several camera distortion coefficients and I don't have that data. That's why I used barrel distortion, which only needs one parameter.

It would be great if someone could tell me how to use remap() and CalibrateCamera() with points from barrel distortion, or suggest a better way to do fast mesh warping.
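
For reference, this is the kind of thing I am imagining (a rough Python sketch, assuming a simple one-coefficient radial model; the coefficient k is a made-up placeholder, not my GetWarpedGrid math). remap() itself only needs a per-pixel lookup of source coordinates, so the maps can be built once and each frame then costs a single call:

    import cv2
    import numpy as np

    # Build remap lookup tables from a one-parameter barrel model (assumed
    # model, illustrative k). After this, warping is one cv2.remap per image.
    def build_barrel_maps(width, height, k):
        xs, ys = np.meshgrid(np.arange(width, dtype=np.float32),
                             np.arange(height, dtype=np.float32))
        cx, cy = width / 2.0, height / 2.0
        x = (xs - cx) / cx                  # normalize to [-1, 1]
        y = (ys - cy) / cy
        r2 = x * x + y * y
        factor = 1.0 + k * r2               # radial displacement of the sample
        map_x = (x * factor * cx + cx).astype(np.float32)
        map_y = (y * factor * cy + cy).astype(np.float32)
        return map_x, map_y

    # map_x, map_y = build_barrel_maps(w, h, k=0.15)
    # dewarped = cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)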

2016-03-22 11:49:10 -0600 received badge  Enthusiast
2016-03-21 11:51:53 -0600 commented answer Sphere distortion / barrel grid algorithm?

Thanks! Much appreciated!

2016-03-18 23:40:18 -0600 commented answer Sphere distortion / barrel grid algorithm?

Thanks! What kind of values should I put in k?

2016-03-18 19:26:29 -0600 asked a question Sphere distortion / barrel grid algorithm?

The image below is an example of the Sphere distortion in Photoshop.

[image: Photoshop sphere distortion at parameter 0, 50, and 100]

As I change the parameter from 0 to 50 to 100, the 2D grid changes accordingly. This is basically the effect I want to achieve.

Below is my current code:

    private static CvPoint GetShiftedPoint( CvPoint center, CvPoint point, double p = 1.5 )
    {
        // p == 1 draws a perfect sphere; as p grows, the surface looks flatter
        // (the bigger p, the longer the sphere radius)
        double a = Math.Max( center.X, center.Y );
        double r = a * p;
        double distance = center.DistanceTo( point );

        // calculate the height of the sphere cap
        double theta = Math.Asin( a / r );
        double h = Math.Cos( theta ) * r;
        double hTop = r - h;

        // calculate the shift amount
        double y = r - ( hTop * distance / a );
        double rho = Math.Acos( y / r );
        double newDistance = Math.Sin( rho ) * r;

        double dx = point.X - center.X;
        double dy = point.Y - center.Y;
        double radian = Math.Atan2( dy, dx );

        double newDx = Math.Cos( radian ) * newDistance;
        double newDy = Math.Sin( radian ) * newDistance;

        int newX = ( int )Math.Round( center.X + newDx );
        int newY = ( int )Math.Round( center.Y + newDy );

        return new CvPoint( newX, newY );
    }

The idea behind my code is that I place a virtual hemisphere over the image and compute each point's shift based on its original distance from the center. I hope the function makes sense.

But the result of this method looks like this:

[image: grid produced by my code]

Mine does not look like a sphere :(

Does anyone know how to draw a sphere-distorted grid with a strength parameter?
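
For reference, one mapping I am thinking of trying instead (a rough Python sketch, an assumed model rather than a fix of my C# code above): project the plane onto a hemisphere, so a point at normalized radius r moves out to sin(r * pi / 2), and blend that with the identity using a strength s in [0, 1]:

    import numpy as np

    # Spherize-style grid mapping (assumed model): s = 0 leaves the grid
    # untouched, s = 1 pushes every point outward as if it lay on a sphere.
    def spherize_point(center, point, s=0.5):
        cx, cy = center
        dx, dy = point[0] - cx, point[1] - cy
        a = max(cx, cy)                      # radius of the effect circle
        r = np.hypot(dx, dy) / a             # normalized distance from center
        if r == 0 or r >= 1.0:               # center and outside: unchanged
            return point
        r_new = (1 - s) * r + s * np.sin(r * np.pi / 2)
        scale = r_new / r
        return (int(round(cx + dx * scale)), int(round(cy + dy * scale)))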

2016-03-17 16:04:30 -0600 received badge  Scholar (source)
2016-03-17 16:04:30 -0600 received badge  Supporter (source)
2016-03-17 16:04:15 -0600 commented answer Max value of TemplateMatching without normalization

Thanks! It really helped

2016-03-16 16:14:57 -0600 asked a question Max value of TemplateMatching without normalization

Hi,

I have been using template matching a lot for my work, and I know the template-matching method returns "the best match" over the whole image even when no such shape is present. And if I normalize the map, the peak always scales up to the maximum even when its actual confidence is very bad.

So I am looking for a way to calculate the maximum value that the template-matching computation can possibly produce. (That is different from the max value of a particular result map.)

Let's say the template image is 10x10 grayscale. What is the maximum value of the map created by the template-matching method? I am using TM_CCOEFF_NORMED, but I am open to other methods too, depending on how easy it is to calculate the maximum value.

res = cv2.matchTemplate(img, template, method)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# max_val is just the maximum value in res
# I want to know how confident max_val is relative to the highest value possible
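
For reference, here is how I call it in Python (file names are placeholders). If I understand correctly, the *_NORMED methods are already scaled, so a perfect TM_CCOEFF_NORMED match scores exactly 1.0 and max_val itself can be read as an absolute confidence:

    import cv2

    # Sketch with assumed file names: with TM_CCOEFF_NORMED the response is
    # normalized, so 1.0 would be a perfect match and max_val needs no
    # further scaling to be compared across images.
    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    print(max_val)   # thresholds around 0.8 are a common starting point
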
2015-11-02 19:01:12 -0600 commented question C# haar cascade can't read xml

It seems that the old cascade XML files and the recent cascades have different formats..

2015-11-02 18:51:58 -0600 commented question C# haar cascade can't read xml

It's not OpenCV's official wrapper, but it's on NuGet and widely used.

2015-11-02 18:20:23 -0600 asked a question C# haar cascade can't read xml

Hi,

I am having a weird problem and have been struggling with it for a while now..

    using( CvHaarClassifierCascade cascade = CvHaarClassifierCascade.FromFile(
        @"C:\opencv\2.4.11\opencv\sources\data\haarcascades\haarcascade_frontalface_alt2.xml" ) )

That is my code above; it's a really basic cascade-loading call that I found in many places. But it returns an error:

The node does not represent a user object (unknown type?)

I tried other XML files but I still get the same error..

Why is this happening and how do I fix it? Please help!!

2015-11-02 18:14:32 -0600 commented question Haar Cascade training outOfMemory error.. help!!

Thank you!

2015-11-02 16:06:51 -0600 answered a question Broken understainding - creating a classifier

Have you found a solution for this? I am also getting broken text in my output xml files..

2015-11-02 11:32:55 -0600 asked a question Haar Cascade training outOfMemory error.. help!!

Hi,

I need to find patterns inside a manually set ROI. The patterns are not complicated, so I decided to use Haar cascade pattern detection. To do that, I first needed to train my samples with "opencv_traincascade.exe", so I grabbed 38 positive images and a few negative images. For the positive images, I tried two different training sizes, 100 x 64 and 50 x 32.

But every time I try to run opencv_traincascade.exe, it fails with an out-of-memory error: "OpenCV Error: Insufficient memory error (failed to allocate 3.8GB) in cv::OutOfMemoryError, file: c:\builds\2_4_PackSlace-win32-vc12-shared\opencv\modules\core\src\alloc.cpp line 52"

The memory size in the error message was exactly the same for both positive sample sizes, so I don't think the sample size is the cause.

The command I entered at the command prompt was: opencv_traincascade -vec positive.txt -bg negative.txt -data output -featureType LBP

Why does this happen and how do I fix it? Please help!!
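
For reference, this is the kind of invocation I believe is intended (the sample counts and buffer sizes below are guesses on my part): as far as I know, -vec expects the .vec sample file produced by opencv_createsamples rather than a text list, and the -precalcValBufSize / -precalcIdxBufSize flags cap the training buffers that drive the big allocations:

    opencv_traincascade -data output -vec positive.vec -bg negative.txt -numPos 30 -numNeg 60 -w 50 -h 32 -featureType LBP -precalcValBufSize 512 -precalcIdxBufSize 512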

2013-11-14 16:42:57 -0600 asked a question Question about FlannBasedMatcher

Hi,

I am trying to train a set of patterns and find a match within a test image.

To that end, I have many descriptors from the training data set:

cv::Mat descriptor1;
cv::Mat descriptor2;
cv::Mat descriptor3;
cv::Mat descriptor4;
cv::Mat descriptor5;

//put all train set descriptors in a vector
std::vector<cv::Mat> descriptors;
descriptors.push_back(descriptor1); ... descriptors.push_back(descriptor5);

//add and train
FlannBasedMatcher matcher;
matcher.add(descriptors);
matcher.train();

//match
cv::Mat descriptorTest;
matcher.knnMatch(descriptorTest, m_knnMatches, 2);

//ratio test to get good matches
std::vector<cv::DMatch> matches = ratioTest(m_knnMatches);

// the match list after the ratio test contains many DMatch entries, for example:
DMatch {queryIdx: *, trainIdx: *, imageIdx: 1, distance: *.**}
DMatch {queryIdx: *, trainIdx: *, imageIdx: 2, distance: *.**}
DMatch {queryIdx: *, trainIdx: *, imageIdx: 0, distance: *.**}
DMatch {queryIdx: *, trainIdx: *, imageIdx: 1, distance: *.**}
DMatch {queryIdx: *, trainIdx: *, imageIdx: 4, distance: *.**}

As you can see, the DMatch objects in the vector come from different trained images (different imageIdx values).

Since the accuracy is still not very good, I want to try homography estimation, but I don't know how to do it with this kind of match list. The only homography example I have works on one train image and one test image. Can you give me some advice on implementing homography estimation in this situation?

What else can you think of to improve accuracy as a post-processing step?
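
One direction I am considering (a rough Python sketch; the keypoint lists and matches are placeholders): group the ratio-tested matches by imageIdx, estimate a homography per train image with RANSAC, and keep the train image with the most inliers. Does that sound reasonable?

    import cv2
    import numpy as np
    from collections import defaultdict

    # matches: ratio-tested DMatch list; query_kps: test-image keypoints;
    # train_kps[i]: keypoints of train image i (all assumed to exist).
    def best_homography(matches, query_kps, train_kps, min_matches=4):
        by_image = defaultdict(list)
        for m in matches:
            by_image[m.imgIdx].append(m)      # group by originating image

        best = (None, 0, -1)                  # (H, inlier count, image index)
        for img_idx, ms in by_image.items():
            if len(ms) < min_matches:
                continue
            src = np.float32([train_kps[img_idx][m.trainIdx].pt for m in ms])
            dst = np.float32([query_kps[m.queryIdx].pt for m in ms])
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None and int(mask.sum()) > best[1]:
                best = (H, int(mask.sum()), img_idx)
        return best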

2013-06-21 16:01:47 -0600 asked a question tvl1 optical flow not working

Hi,

I connected a webcam to the computer and I am trying to draw optical-flow points in a window. The camera works and frames are captured fine, but DenseOpticalFlow.calc(...) gets stuck in what looks like an infinite loop. Here is my code:

int _tmain(int argc, _TCHAR* argv[])
{
    VideoCapture cap(0);
    if(!cap.isOpened())  // check if we succeeded
        return -1;

    Mat frames[2];
    Mat_<Point2f> flow;

    Ptr<cv::DenseOpticalFlow> tvl1 = createOptFlow_DualTVL1();

    namedWindow("screen", 1);
    int index = 0;
    int prev_index;
    for(int i = 0; ; i++)
    {
        cap >> frames[index];
        cvtColor(frames[index], frames[index], CV_BGR2GRAY);
        prev_index = (index + 1) % 2;

        if(i > 0)
        {
            tvl1->calc(frames[prev_index], frames[index], flow);  // never returns from this call

            Mat out;
            drawOpticalFlow(flow, out);

            imshow("screen", out);
        }
        index = prev_index;
        if(waitKey(30) >= 0) break;
    }
    return 0;
}

What is wrong with my code?

2013-06-13 12:30:48 -0600 commented answer Finding area center of rectangle

Thanks for the link. I implemented the polygon centroid from your link and even posted my code on that page. However, it can also end up outside the polygon.. :(

2013-06-12 16:09:31 -0600 commented answer Finding area center of rectangle

And I am pretty sure the midpoint would fall outside the shape for rectangle #6 in the image, and it also takes too long.

2013-06-12 16:06:57 -0600 commented answer Finding area center of rectangle

Shouldn't it be cgix=xsum/total_num_nonzero; cgiy=ysum/total_num_nonzero; ?

2013-06-12 15:55:02 -0600 commented answer Finding area center of rectangle

That's not going to work. Identically shaped rectangles can have different midpoints depending on their rotation.

2013-06-12 15:15:30 -0600 commented answer Finding area center of rectangle

Thanks for the answer. But I don't quite understand what you are trying to say. Can you write some lines of pseudocode please?

2013-06-12 14:25:34 -0600 asked a question Finding area center of rectangle

I am trying to find the area center of various types of rectangles.

(The center of gravity and the midpoint of the 4 vertices never work here, so please think of a different approach.)

Please see this image:

[image: various rectangles with the red center dots to find]

I have to find the positions of the red dots.

Point vertices[4];
Point areaCenter;

I have to find areaCenter such that (at least approximately):

    area(areaCenter, vertices[0], vertices[1]) = area(areaCenter, vertices[1], vertices[2])
                                               = area(areaCenter, vertices[2], vertices[3])
                                               = area(areaCenter, vertices[3], vertices[0])

I tried many different ways to find this midpoint, but none of them covered every type of rectangle.

Can anyone give me some ideas?
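
For reference, the standard shoelace-based area centroid (which, as noted in the comments above, can still land outside a concave quad) looks roughly like this in Python:

    import numpy as np

    # Area centroid of a polygon via the shoelace formula. Always inside a
    # convex quad, but not guaranteed inside a concave one.
    def polygon_centroid(pts):
        x = np.array([p[0] for p in pts], dtype=float)
        y = np.array([p[1] for p in pts], dtype=float)
        xn, yn = np.roll(x, -1), np.roll(y, -1)
        cross = x * yn - xn * y
        area = cross.sum() / 2.0
        cx = ((x + xn) * cross).sum() / (6.0 * area)
        cy = ((y + yn) * cross).sum() / (6.0 * area)
        return cx, cy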

2013-05-29 16:52:17 -0600 asked a question Best way of masking on warpAffine?

Hi,

Just like using a mask when copying an image:

image1.copyTo(image2, mask)

I just want to apply warpAffine to a particular region, because I only need a small part of the image, for example the mouth. But the existing warpAffine methods do not seem to accept a mask. So I need a smart, easy way to mask warpAffine in order to reduce running time. Has anyone here thought about this before? Please give me some tips!
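
The best workaround I have come up with so far is something like this (a rough Python sketch; the affine matrix, bounding box, and mask values are placeholders): warp only the bounding box of the region I need, then paste the result back through a mask, since warping a small ROI is much cheaper than warping the whole frame.

    import cv2
    import numpy as np

    # Sketch: warpAffine has no mask argument, so warp just the bounding box
    # of the region of interest and copy it back through a mask.
    image = cv2.imread("face.png")               # assumed input image
    M = np.float32([[1, 0.2, 0], [0, 1, 0]])     # assumed 2x3 affine matrix

    x, y, w, h = 100, 120, 80, 40                # assumed mouth bounding box
    roi = image[y:y+h, x:x+w]
    warped = cv2.warpAffine(roi, M, (w, h))

    mask = np.zeros((h, w), np.uint8)            # assumed mouth-shaped mask
    cv2.ellipse(mask, (w // 2, h // 2), (w // 2, h // 2), 0, 0, 360, 255, -1)
    out = image.copy()
    out[y:y+h, x:x+w][mask > 0] = warped[mask > 0]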

2013-05-27 17:10:19 -0600 asked a question need a help on MatOfPoint2f

I am rewriting WarpAffine-related code for Android, but I can hardly work out how to use MatOfPoint2f.

My previous code is like: (using OpenCVSharp)

CvPoint2D32f[] src_pf = new CvPoint2D32f[3];
CvPoint2D32f[] dst_pf = new CvPoint2D32f[3];
src_pf[0] = new CvPoint2D32f(0,0);
src_pf[1] = new CvPoint2D32f(100,100);
src_pf[2] = new CvPoint2D32f(100,70);
dst_pf[0] = new CvPoint2D32f(0,0);
dst_pf[1] = new CvPoint2D32f(100,100);
dst_pf[2] = new CvPoint2D32f(200,70);
CvMat perspective_matrix = Cv.GetAffineTransform(src_pf, dst_pf);
Cv.WarpAffine(src, dst, perspective_matrix)

In Android, the code should look like this:

MatOfPoint2f src_pf = new MatOfPoint2f();
MatOfPoint2f dst_pf = new MatOfPoint2f();
//how do I set up the position numbers in MatOfPoint2f here?
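// One possible way (a hedged sketch of the OpenCV4Android API as I
// understand it): fill MatOfPoint2f from org.opencv.core.Point objects
// with fromArray(...):
src_pf.fromArray(new Point(0, 0), new Point(100, 100), new Point(100, 70));
dst_pf.fromArray(new Point(0, 0), new Point(100, 100), new Point(200, 70));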
Mat perspective_matrix = Imgproc.getAffineTransform(src_pf, dst_pf);
Imgproc.warpAffine(src, dst, perspective_matrix);

How do I set up the point coordinates in MatOfPoint2f?

2013-04-22 16:08:17 -0600 asked a question Detect concave using 4 points of a rectangle?

Hi,

I have the 4 points of a rectangle, and I am trying to check whether it has a concave vertex.

CvPoint[] points = new CvPoint[4];
points[0] = new CvPoint(10,10);
points[1] = new CvPoint(10,20);
points[2] = new CvPoint(13,13);
points[3] = new CvPoint(20,10);

There are several ways I can think of, but none of them seems efficient in terms of speed and memory. Does anyone know the best way to check for a concave vertex?
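
The most promising idea I have is the cross-product turn test (a rough Python sketch): walk the points in order and check the sign of the cross product of consecutive edges; if the turn direction ever flips, there is a concave vertex. One pass, no extra memory:

    # Cross-product turn test for a polygon given in vertex order.
    def has_concave_vertex(pts):
        n = len(pts)
        sign = 0
        for i in range(n):
            ax, ay = pts[i]
            bx, by = pts[(i + 1) % n]
            cx, cy = pts[(i + 2) % n]
            cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
            if cross != 0:
                if sign == 0:
                    sign = 1 if cross > 0 else -1
                elif (cross > 0) != (sign > 0):
                    return True        # turn direction flipped: concave
        return False

    print(has_concave_vertex([(10, 10), (10, 20), (13, 13), (20, 10)]))  # True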

2013-04-22 12:03:50 -0600 answered a question warpPerspective gives unexpected result

[image: result]

Okay, I figured it out myself. WarpPerspective() doesn't actually give the right result when we want all the content in the image to keep its relative position. WarpAffine is the answer. The only thing I find annoying is that it only transforms triangles.

2013-04-21 22:04:13 -0600 commented answer warpPerspective gives unexpected result

Dude. I attached 6 photos. Look at the upper right photo. Can't you see the four parts of the photo are discontinuous? And can't you imagine what the correct one should look like? Do you need more explanation on this?

Assume a transformation from (0,0)(10,0)(0,10)(10,10) to (0,0)(10,0)(0,10)(15,10) on the image. Because only the bottom-right point moves right, the pixels in the original image should stretch to the right, with more stretch in the lower part. They must not stretch left, down, or up. Tell me if you need more explanation here. However, the image transformed by WarpPerspective() looks like the 6th photo attached. Look at the red horizontal grid I drew: the pixels in the original image actually shifted upward!

2013-04-19 19:26:18 -0600 edited question warpPerspective gives unexpected result

Hi guys,

I am implementing "Quad Warping" in order to do fat/skinny face effect. Quad Warping is quite much work but I don't know if there is any other options to do the face effect things.

Anyway, I have more or less finished the code, but the result image doesn't look right, and I wonder if it is a bug in OpenCV. Please see the image below:

[image: warpPerspective result]

Can you guys see that the pixels have been shifted upward even though the size of the resulting rectangle is right? Is this a bug, or the intended behavior of WarpPerspective()?