# Appropriate data format for 3D transformations

I need to perform various calculations between different coordinate systems: translation, rotation, projection to 2D in pixels, etc. For that I have to define 3D and 2D points and 3x3 float matrices. Which formats would you recommend for easily multiplying a 3-dimensional vector by a 3x3 matrix? The options are Mat, Matx, Vec, Point, Point3 and std::vector, but it seems that not all of them can participate in matrix expressions. My first guess was to represent object points as Point3f ObjPt, screen projections as Point2i (aka Point), and the rotation matrix as Mat R(3, 3, CV_32FC1, 0.0f), but then I can't do ObjPt * R. A native C++ std::vector can't be multiplied by a 3x3 Mat either, and I also couldn't multiply a Matx31f by a Matx33f (OpenCV 2.4.13.2), even though all the other small types are derived from Matx. Hence, the only option that seems to work is to keep everything (transformation matrix, input and output points) as Mat.


Somewhat of an unsolved problem in OpenCV (consistency, anyone?).

Also be very careful with the element type: e.g. in calib3d there are a lot of functions where float data goes in but double comes out.

(2018-03-20 08:41:02 -0500)
Yes, I ran into that float/double issue too. I have also updated the question.

(2018-03-20 09:48:33 -0500)
If you keep a vector of Point3f or Point3d, or a cv::Mat with 3 channels, you can apply the transform function to them. The reshape function can help with the layout.

(2018-03-20 21:18:35 -0500)

The problem may be formulated as follows. On an input image, colors are represented as Scalar(B,G,R) and coordinates as matrix indices. When we reconstruct a 3D model, in principle we should fill its internal space too, i.e. define a 3D matrix and encode color and coordinates the same way, but nobody does this. 3D objects are stored as surfaces, which makes a dense 3D matrix pointless, since most of its elements would be empty. Instead, the points are stored as a 1D matrix (a vector or list), each entry carrying 3 values for color and 3 more for coordinates. That is the crux: coordinates are represented differently on the input image and in the model. An alternative is to store the 3D surface as a SparseMat. Thanks, Tetragramm, that is exactly what was needed: this way I can transform the whole surface.

(2018-03-21 05:14:12 -0500)