I have spotted a good introduction at
https://docs.opencv.org/2.4.13.2/modu...
From that description one can work out which functions are available and which one to use for which task.
As far as I understand, solvePnP() finds the pose as a rotation plus a translation, but first I must:
1. define my own (arbitrary) object coordinate system and express the (InputArray) objectPoints in that system;
2. provide the (InputArray) imagePoints, obtained by
vector<Point2f> corners; // this will be filled with the detected corners
bool patternfound = findChessboardCorners(gray, patternsize, corners, ...);
The detected corners are used as imagePoints; they are 2D coordinates in image space. If the shape of the object is known, I can do a full 3D reconstruction from just one image taken by a single calibrated camera. If the shape is unknown, I need either binocular vision or, with a single camera, its exact location for each shot. Then one can take at least two stereo pictures from opposite sides and calculate the 3D coordinates of characteristic points.
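To make this concrete, here is a minimal sketch of that solvePnP workflow, assuming a calibrated camera and a chessboard target. The 9x6 pattern size, 25 mm square size, file name and intrinsics are illustrative assumptions, not values from this thread; a real cameraMatrix/distCoeffs would come from calibrateCamera().

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("board.png", cv::IMREAD_GRAYSCALE); // example file name (assumed)
    if (gray.empty()) return 1;

    cv::Size patternsize(9, 6); // inner corners per row/column (assumed)
    float squareSize = 25.0f;   // chessboard square size in mm (assumed)

    // 1. objectPoints: the corners expressed in an arbitrary board-fixed frame (Z = 0 plane)
    std::vector<cv::Point3f> objectPoints;
    for (int i = 0; i < patternsize.height; ++i)
        for (int j = 0; j < patternsize.width; ++j)
            objectPoints.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.0f));

    // 2. imagePoints: the detected 2D corners, refined to sub-pixel accuracy
    std::vector<cv::Point2f> corners;
    bool patternfound = cv::findChessboardCorners(gray, patternsize, corners);
    if (!patternfound) return 1;
    cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));

    // Placeholder intrinsics; in practice these come from calibrateCamera()
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    // 3. solvePnP: pose (rotation + translation) of the board frame relative to the camera
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, corners, cameraMatrix, distCoeffs, rvec, tvec);
    std::cout << "rvec = " << rvec << "\ntvec = " << tvec << std::endl;
    return 0;
}

A quick sanity check of the result is to re-project the objectPoints with cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, reprojected) and compare the reprojected points against the detected corners.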
can you point us to the tutorial you mean?
https://docs.opencv.org/master/dc/d43... (Camera calibration with square chessboard)
this is actually about pose reconstruction using solvePnP, not about reconstructing a 3D model.
(and sadly, no sample code, apart from the (sloppy) tutorial)
for 3D reconstruction, you'd need either a calibrated stereo pair or known camera poses for the different views from a single camera.
the main problem now is: what are you trying to achieve, exactly?
What do you mean by pose reconstruction? Determining the space orientation of the known object? In that case, the full reconstruction can be broken into two sub-tasks: object recognition and pose reconstruction. I would prefer to use a single camera that can move around an object and take pictures from different positions. To start with, the object may be simple, say a cube.
"Determining the space orientation of the known object? " -- yes, exactly, that's another word for it.