So here's my hypothesis:
I plan to calibrate a camera (as a first step) using a non-checkerboard pattern. The idea is to use a single marker point per image, whose exact 3D location I know. So for each image I have (x, y) in image coordinates and (X, Y, Z) in world coordinates. I would then take a number of images (say 30-40) of the marker in different locations, thereby generating 30-40 image points and the corresponding world points.
Would the calibrateCamera method work in such a case? Any inputs?
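To make the idea concrete, here is a minimal Python/OpenCV sketch of how such correspondences might be packed and handed to calibrateCamera. It assumes the camera stays fixed while the marker moves, so all correspondences belong to a single "view"; the point data, image size, and initial camera matrix are synthetic placeholders rather than measured values, and whether this setup actually gives a usable calibration is exactly the open question.

import numpy as np
import cv2

# --- Hypothetical stand-in data --------------------------------------------
# In the real setup these would be the ~30 measured correspondences:
# (x, y) pixel locations of the marker and the known (X, Y, Z) world
# coordinates of the marker at each shot. Here they are synthesised by
# projecting random 3D points through an assumed camera so the script runs.
rng = np.random.default_rng(0)
n_points = 30
object_pts = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 3.0],
                         size=(n_points, 3)).astype(np.float32)

true_K = np.array([[800.0, 0.0, 320.0],
                   [0.0, 800.0, 240.0],
                   [0.0, 0.0, 1.0]])
image_pts, _ = cv2.projectPoints(object_pts, np.zeros(3), np.zeros(3),
                                 true_K, None)
image_pts = image_pts.reshape(-1, 2).astype(np.float32)

# --- calibrateCamera call ---------------------------------------------------
# Assumption: all correspondences form one view (fixed camera, moving marker).
# For non-coplanar object points OpenCV needs an initial guess of the camera
# matrix, hence CALIB_USE_INTRINSIC_GUESS and the rough K_guess below.
image_size = (640, 480)
K_guess = np.array([[1000.0, 0.0, image_size[0] / 2],
                    [0.0, 1000.0, image_size[1] / 2],
                    [0.0, 0.0, 1.0]])

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [object_pts],   # list with one view of 3D points
    [image_pts],    # list with one view of 2D points
    image_size,
    K_guess,
    None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS,
)
print("reprojection RMS:", rms)
print("estimated camera matrix:\n", K)

With real data the synthetic block at the top would simply be replaced by the measured pixel and world coordinates of the marker.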
Before anyone suggests I just try it out: I do plan to try it over the weekend, when I get time off from my university schedule. This question is just to get a head start before then.
Cheers, Sanjay