Stereo cameras: render virtual object in the second camera if processing was done in the other
Hi there! I have a pair of stereo cameras connected together and mounted on an Oculus. I have written a program that tracks a marker and renders a virtual object over the marker using one of the cameras. Let's say I want to render the object with OpenGL in the second camera, not by detecting the marker again in that camera's frame, but by reusing the first camera's pose and view. What do I need to do? I need the rotation and translation matrix between the two cameras, right? Do I get these through stereo calibration in OpenCV? http://docs.opencv.org/modules/calib3...
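In case it helps to make the question concrete, here is a minimal sketch of what I think the transfer would look like, assuming the R and T returned by cv::stereoCalibrate map points from camera 1's frame into camera 2's frame (OpenCV's convention: P_cam2 = R * P_cam1 + T), and that the marker pose in camera 1 comes from something like solvePnP. The function name is just made up for illustration:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

// Hypothetical helper: given the marker pose estimated in camera 1
// (rvec1/tvec1, e.g. from solvePnP) and the stereo extrinsics R, T
// from stereoCalibrate (camera 1 -> camera 2), compute the marker
// pose as seen from camera 2 without re-detecting the marker.
void poseInSecondCamera(const cv::Mat& rvec1, const cv::Mat& tvec1,
                        const cv::Mat& R,     const cv::Mat& T,
                        cv::Mat& rvec2,       cv::Mat& tvec2)
{
    cv::Mat R1;
    cv::Rodrigues(rvec1, R1);     // marker rotation w.r.t. camera 1

    // Assumed convention: P_cam2 = R * P_cam1 + T
    cv::Mat R2 = R * R1;          // marker rotation w.r.t. camera 2
    tvec2      = R * tvec1 + T;   // marker translation w.r.t. camera 2

    cv::Rodrigues(R2, rvec2);     // back to rotation-vector form
}
```

The idea would then be to build the OpenGL modelview matrix for the second eye from rvec2/tvec2 the same way I already do for the first camera. Is this the right approach, or am I missing something?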