Help understanding the theory for selecting the optimum fringe frequency in three-phase fringe structured light

asked 2019-02-11 04:38:41 -0600 by JT3D

updated 2019-02-11 04:46:43 -0600 by berak

Hi guys,

I'm hoping someone can help with the following.

I am trying to better understand the theory of what makes the best fringe pattern for three-phase structured light. Where I am struggling is how to select the optimal fringe frequency. This is probably best explained with an example and an image of a pattern.

Let's say I have a fringe pattern and the fringe frequency is 16. See this example: C:\fakepath\Pattern_0.png

Now suppose I change the fringe frequency to 24. What effect does this have? The theory says the measurement will be more accurate, but I don't understand why.
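For concreteness, here is a minimal sketch of how such patterns could be generated. The projector resolution (1024x768), the three-step phase shifts, and the function name are my own illustrative assumptions, not details from the question:

```python
# Sketch: generate the three phase-shifted sinusoidal patterns for a given
# fringe frequency (number of full periods across the projector width).
# Resolution and shift convention are assumptions for illustration.
import numpy as np

def make_patterns(width=1024, height=768, freq=16):
    x = np.arange(width)
    phase = 2 * np.pi * freq * x / width            # linear phase ramp
    shifts = [-2 * np.pi / 3, 0.0, 2 * np.pi / 3]   # three-step shifts
    patterns = []
    for s in shifts:
        row = 0.5 + 0.5 * np.cos(phase + s)         # intensity in [0, 1]
        patterns.append(np.tile(row, (height, 1)))
    return patterns

p16 = make_patterns(freq=16)   # 1024 / 16 = 64 projector pixels per period
p24 = make_patterns(freq=24)   # 1024 / 24 ~ 42.7 projector pixels per period
```

The only thing the frequency changes here is how many projector pixels one period spans, which is exactly the quantity the discussion below turns on.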

Any help clearing up this issue would be greatly appreciated. (Please don't just point to a paper online about this subject; I have yet to come across one that simply explains why.)

JT


Comments

The more you increase the frequency, the smaller the difference between your three phase maps gets.

You'll gain local accuracy, but lose global "intensity". IMHO it's a tradeoff you have to find out on your own (distance matters, too!).

some (okish) examples:

[three example fringe-pattern images]

berak ( 2019-02-11 05:04:55 -0600 )
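The phase-map computation this thread keeps referring to can be sketched with the standard three-step phase-shifting formula. The function name is mine, and I'm assuming the common -120deg / 0 / +120deg shift convention; the thread itself doesn't pin one down:

```python
# Sketch of the standard three-step phase-shifting recovery. I1, I2, I3 are
# the images captured under shifts of -2*pi/3, 0, +2*pi/3 respectively.
import numpy as np

def wrapped_phase(I1, I2, I3):
    # For I_n = A + B*cos(phi + delta_n), the shifts cancel A and B and give
    # phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3), wrapped to (-pi, pi].
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```

Note that the background A and modulation B drop out per pixel, which is why the recovered phase (not raw intensity) carries the measurement.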

Hi,

Thanks for your reply.

So you are saying that increasing the fringe frequency helps give a more accurate phase calculation because you are working over a smaller area?

If that is the case, what I can't get my head around is how many physical pixels are allocated to each fringe period. I would have thought that a higher fringe frequency would mean each period of the projected fringe pattern is made up of a smaller number of pixels horizontally, and so would be less accurate. How can the phase accuracy be maintained if a smaller number of pixels is assigned to each period in the projected pattern, and as a result in the acquired camera images? Does that make sense to you? Am I missing something in the theory of how the phase is calculated that means this is not a problem?

JT3D ( 2019-02-11 05:17:55 -0600 )
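The pixels-per-period arithmetic behind this question can be sketched as back-of-the-envelope numbers. The key point: the wrapped-phase noise (in radians) is set mostly by intensity noise, so it is roughly the same at any frequency; what shrinks at higher frequency is how many pixels one radian of phase spans. All numbers below are assumed for illustration, not taken from the thread:

```python
# Sketch: higher fringe frequency = fewer pixels per period, but also a
# steeper phase ramp, so the same phase noise maps to a smaller position
# error. PROJ_WIDTH and SIGMA_PHI are illustrative assumptions.
import math

PROJ_WIDTH = 1024
SIGMA_PHI = 0.05    # assumed wrapped-phase noise, radians

def position_error_px(freq, width=PROJ_WIDTH, sigma_phi=SIGMA_PHI):
    # one projector pixel spans 2*pi*freq/width radians of phase,
    # so a phase error sigma_phi maps to sigma_phi / (phase per pixel) pixels
    phase_per_pixel = 2 * math.pi * freq / width
    return sigma_phi / phase_per_pixel

err16 = position_error_px(16)   # roughly half a projector pixel
err24 = position_error_px(24)   # smaller, despite fewer pixels per period
```

So the fringe with fewer pixels per period still localizes better, as long as the camera can still resolve it, which is the limit discussed further down.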

We have a structured light system with three different camera pairs in our lab, and if we want higher accuracy we have to change the field of view, i.e. change/shrink the baseline between the cameras and swap both cameras. So I think you cannot get higher accuracy by changing the fringe frequency alone. We learned in our courses that it is common to first use a coded light pattern (German: codierter Lichtansatz) of dark and white areas to determine the correct projection line and prevent ambiguities, and after that to use the fringe projection to enhance the accuracy.

Grillteller ( 2019-02-11 06:16:08 -0600 )
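The two-stage idea described above (coarse code for the period index, fringe phase for the fine position) can be sketched as follows. The function names are mine, and I'm assuming the binary/Gray code has already been decoded into an integer period index `k` per pixel:

```python
# Sketch: combine a coarse code with the fine wrapped phase.
# The coarse code resolves the 2*pi ambiguity; the phase refines within it.
import numpy as np

def unwrap_with_period_index(wrapped, k):
    # wrapped: phase in (-pi, pi] from the fringe images
    # k: integer fringe-period index recovered from the coded-light stage
    return wrapped + 2 * np.pi * np.asarray(k)

def projector_x(unwrapped, freq, width):
    # map absolute phase back to a projector column coordinate
    return unwrapped * width / (2 * np.pi * freq)
```

This shows why the coded pattern and the fringes play different roles: the code only needs to be right to within one period, while all the sub-pixel accuracy comes from the phase.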

I do agree: reducing the field of view will result in better accuracy. I'm still struggling with selecting an optimum fringe frequency and why I should choose, for example, 24 over 16. Also, I presume there must be a point at which increasing the fringe frequency starts having the opposite effect, since physically you would no longer be able to distinguish the fringes through the camera.

JT3D ( 2019-02-13 03:39:16 -0600 )

OK, I read the script again :D. I think you are right. The problem is that you have to be able to distinguish the projected lines in your images. If you increase the fringe frequency beyond a certain point, you cannot distinguish the lines on the object any more. The maximum frequency is thus given by the resolution of your camera, the quality of your projector and the 3D structure of your object. The Moiré effect also seems to play an important role. Since the producers of these systems don't publish much, I cannot find an equation to determine the ideal frequency.

Grillteller ( 2019-02-14 02:45:05 -0600 )

Everything is governed by Shannon-Nyquist. I have never used such a system, but I can imagine that the spatial wavelength of the fringes on your sensor must be greater than two pixel widths, assuming the optics of your light source and camera are equal (light source and camera must be at the same location).

LBerger ( 2019-02-14 02:59:05 -0600 )
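The Nyquist point above gives at least a crude upper bound on the usable fringe frequency. Here is a minimal sketch; the magnification figure and the two-pixels-per-period floor are assumptions for illustration (in practice you would want quite a few more camera pixels per period):

```python
# Sketch: crude Nyquist-style upper bound on the fringe frequency.
# cam_px_per_proj_px is an assumed camera/projector magnification.
def max_fringe_freq(proj_width, cam_px_per_proj_px, min_cam_px_per_period=2.0):
    # the whole pattern spans proj_width * magnification camera pixels;
    # a fringe is resolvable only while one period still covers at least
    # min_cam_px_per_period of them
    return int(proj_width * cam_px_per_proj_px / min_cam_px_per_period)

limit = max_fringe_freq(1024, 0.8)   # upper bound for these assumed numbers
```

Real systems stay well below this bound because defocus, perspective foreshortening on the object, and the Moiré effects mentioned above eat into the margin.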