Generalized Hough Transform (Guill) - Guide to Output Points

I am using the GPU version of GeneralizedHough (Guill specifically). For those thinking of using the GHT, the good news is that the GPU version is MUCH faster than the CPU version. On my sample images, the CPU version takes ~70 seconds. With the same parameters, the GPU version takes ~0.3 seconds. That said, I do not see much documentation and I am hoping that someone can explain a few things to me.

I have four questions:

1) What does each point in the output vector represent? The returned vector contains a series of entries; each entry holds the point itself ([0], [1]), followed by a scale ([2]) and a rotation ([3]). But what does the point represent? Is it a point in the query image that corresponds to the bottom-left (or top-right, center, or a contour point) of the template image?
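To make the layout I am describing concrete, here is a minimal sketch (plain Python, no OpenCV) of how I am currently unpacking the output. The field names `x`, `y`, `scale`, `angle` are my assumptions about what each slot means, which is exactly what this question is asking about:

```python
def unpack_ght_positions(positions):
    """Split GHT output entries into labeled dicts.

    Assumes each entry is a 4-tuple laid out as
    (x, y, scale, angle) -- i.e. entry[0], entry[1] are the
    point, entry[2] the scale, entry[3] the rotation.
    """
    results = []
    for entry in positions:
        x, y, scale, angle = entry
        results.append({"x": x, "y": y, "scale": scale, "angle": angle})
    return results

# Example with one made-up detection:
matches = unpack_ght_positions([(120.0, 80.0, 1.5, 30.0)])
```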

2) Would it improve the results or speed if the template image were just a bounding box of the enclosed contour? My template image has a single contour in it that is much smaller than the whole image. Would it affect things if I reduced the template image to the size of the boundRect of the included contour?

3) In the Guill version, how does the PositionVotesThreshold affect things? If I increase this number, will I be forcing it to find better matches?

4) Given a point in the template image, and a returned point from GHT, how can I calculate the corresponding point in the query image? In my application, I identify a point on the interior of the contour of the template image. How can I calculate the corresponding point in the query image?
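For what it's worth, here is the geometry I would expect, sketched in plain Python. It assumes (and this is exactly the assumption I need confirmed) that the returned (x, y) is where the template's center landed in the query image, and that the detection tuple is (x, y, scale, angle in degrees). The rotation direction may also need flipping for y-down image coordinates:

```python
import math

def map_template_point(tpl_pt, tpl_center, detection):
    """Map a template-image point into the query image.

    Assumes detection = (x, y, scale, angle_degrees) and that
    (x, y) marks where the template center landed in the query
    image. Rotation sign is a guess for y-down image coordinates.
    """
    dx = tpl_pt[0] - tpl_center[0]
    dy = tpl_pt[1] - tpl_center[1]
    x, y, scale, angle = detection
    a = math.radians(angle)
    # Rotate the offset about the template center, scale it,
    # then translate to the detected position.
    qx = x + scale * (dx * math.cos(a) - dy * math.sin(a))
    qy = y + scale * (dx * math.sin(a) + dy * math.cos(a))
    return qx, qy

# Example: a point 10 px right of the template center, detected at
# (100, 100) with scale 2 and a 90-degree rotation.
qpt = map_template_point((10, 0), (0, 0), (100, 100, 2.0, 90.0))
```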

I am hoping that answers to these questions will be useful to anyone trying to use GHT (which seems great).

Thanks for any help.
