Using OpenCV for bullet time photography alignment [closed]
Hi all,
Recently I started building a bullet time photography rig consisting of 24 cameras placed in a 360° circle. I am facing a problem aligning the images: all cameras trigger at exactly the same moment, and all pictures are saved on a computer following the scheme 1.jpg, 2.jpg, 3.jpg and so on (each number represents the camera's position on the rig). All cameras are fixed firmly in position, but it is impossible to align them perfectly.
I am trying to figure out a way to align, and maybe interpolate, some frames in order to make the final animation smoother. So far my idea is to first shoot a picture with (let's say) a red ball in the middle, find the position of the red ball inside each picture, and build a JSON file (or any other format) containing the offsets of the ball in each picture. After this process I guess I can use that information to align other photos taken without the ball in the middle (since the cameras are fixed in place and do not move), using the JSON data from the calibration photo.
I guess the pictures must be aligned on the X and Y axes, and we also need to correct the rotation in case the cameras are not perfectly level.
Here's my idea of how to do it:
Before starting the actual shooting I need to calibrate the rig. The steps should be:

1. Place a ball (or any other object) suspended in the center of the circle, with all cameras pointing at it.
2. Take a picture using all cameras.
3. Treat the photo from camera 1 as the reference shot.
4. Detect the position of the ball inside each picture and build a mapping, i.e.:
   - Picture 1: ball is at 320x360 px
   - Picture 2: ball is at 325x376 px
   - Picture 3: ball is at 321x350 px
Since picture 1 is our reference and the ball in it is at 320x360 px, I know I have to crop the rest of the pictures so that the ball ends up at the same 320x360 px. In other words, I need to crop and offset each picture relative to the coordinates of the ball in the first picture.
Once we calculate the necessary crop and offset coordinates, we can reuse this information to stabilize all pictures taken with the camera rig, on the condition that nothing moves (structure, cameras, etc.).
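A minimal sketch of that calibration pass in Python with OpenCV, assuming the 1.jpg ... 24.jpg naming from above; the HSV thresholds and the find_ball_center helper are illustrative and would need tuning for the actual ball and lighting:

```python
import cv2
import json
import numpy as np

def find_ball_center(path):
    # Assumes the ball is clearly visible and saturated red.
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    m = cv2.moments(mask)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid (x, y)

# Camera 1 is the reference; store each camera's offset from it.
ref = find_ball_center("1.jpg")
offsets = {}
for i in range(1, 25):
    x, y = find_ball_center("%d.jpg" % i)
    offsets[i] = (ref[0] - x, ref[1] - y)

with open("offsets.json", "w") as f:
    json.dump(offsets, f)  # reuse these for every later shoot

# Shift any later frame from camera i by its stored offset.
def shift(img, dx, dy):
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```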
To also take the rotation of each picture into account, I think I need to use a rectangular object as the reference.
What do you think about this method, and what is the simplest way to achieve it?
I have no experience with 360° camera rigs, but my suggestion would be to use a chessboard calibration pattern, as usually used for camera calibration (a tutorial here).
As not all the cameras will be able to see the chessboard pattern at the same time, I think you will have to use four positions of the calibration pattern, or build a custom chessboard calibration pattern.
In the end, you will be able to get the transformation between each camera and any other camera.
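For what it's worth, detecting the pattern in one camera's image is only a few lines; a minimal sketch, where the 9x6 inner-corner count is an assumption about the printed board:

```python
import cv2

img = cv2.imread("1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# (9, 6) is the board's inner-corner count (an assumption).
found, corners = cv2.findChessboardCorners(gray, (9, 6))
if found:
    # Refine the corner locations to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```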
Eduardo's suggestion is right. As a refinement, set it up so that you can see the pattern from as many cameras as possible, and calibrate them. Then turn the pattern so that half of the first set, plus that many more of the remaining cameras, can see it, and repeat. Then you can orient them all to a common coordinate system. It's probably easier than trying to make multiple patterns with a known orientation to each other.
Yes, you are right. I am thinking of making a round chess board pattern that I will suspend in the middle.
What would be the workflow for this? I am not very familiar with OpenCV.

- Get the coordinates of the chessboard.
- Crop the image so the chessboard ends up in the same spot in each picture?
- How do I take the rotation into account?
Thanks!
Take a look at the camera calibration tutorial. Do that for each camera to get its camera matrix and distortion coefficients.
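A rough sketch of that per-camera step, following the tutorial; the 9x6 board size and the file names of the board shots are hypothetical:

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the board (an assumption)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

# Hypothetical file names: several shots of the board from one camera.
board_images = ["cam1_board_%d.jpg" % k for k in range(15)]

obj_points, img_points = [], []
for name in board_images:
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
# Save camera_matrix and dist_coeffs per camera; they only change if
# the lens or zoom changes, not when the camera moves.
```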
Then place the chessboard where <group1> cameras can see it and run the pose estimation to get tvec and rvec for each of those cameras. Then turn the board to where half of <group1> and that many more cameras can see it, and do it again. You can get the relationship between where the board was the first time and the second time using the cameras that could see it both times.
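Something like this sketch, reusing the objp grid and the per-camera intrinsics from the calibration step above:

```python
import cv2

def board_pose(gray, pattern, objp, camera_matrix, dist_coeffs):
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
    return rvec, tvec  # board pose in this camera's coordinate frame
```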
http://docs.opencv.org/2.4/doc/tutori...
There are two steps the chessboard is useful for. The first is the second of those tutorials: it helps you find the distortion of the lens and camera system. You do this independently for each camera.
The second is the first tutorial at that link. Given a picture of the chessboard, wherever it is in the image, the solvePnP function can find the orientation of the camera relative to the chessboard, both translation and rotation. So if the chessboard is in the same place and multiple cameras can see it, you can tell how each of them is oriented relative to the others, as well as to the chessboard.
You are correct, but I don't think I need the complexity of the chessboard. I mean, it is difficult to build a pattern that all cameras can see, and if I calibrate the cameras in batches it will take too long. Another idea is to suspend two red balls in the center of the rig, one ball at the top and the other exactly below the first at about 20 cm, so I end up with two red balls that I know are perpendicular to the floor plane. Then I need to find the center of both balls, and I can use those coordinates to shift the crop on X/Y and also correct the rotation (since I have two balls, I can draw a line between them that represents the rotation). What do you think about this method? Are there ...
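For reference, a rough sketch of that two-ball alignment, assuming the two ball centers have already been detected in both the current image and the reference image (the helper name and its (x, y) arguments are hypothetical):

```python
import math
import cv2

def align_to_reference(img, top, bottom, ref_top, ref_bottom):
    # Angle of the line between the two balls, here and in the reference.
    ang = math.degrees(math.atan2(bottom[1] - top[1], bottom[0] - top[0]))
    ref_ang = math.degrees(math.atan2(ref_bottom[1] - ref_top[1],
                                      ref_bottom[0] - ref_top[0]))
    # Rotate about the top ball so the line matches, then shift the
    # top ball onto its position in the reference picture.
    M = cv2.getRotationMatrix2D(top, ang - ref_ang, 1.0)
    M[0, 2] += ref_top[0] - top[0]
    M[1, 2] += ref_top[1] - top[1]
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```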
In my opinion you should go with the chessboard pattern because:
What I would do is:
If you are not familiar with the homogeneous transformation matrix, you can search the web for more information about it (or look here).
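As an illustration, a sketch of chaining two solvePnP poses through 4x4 homogeneous matrices; the rvec/tvec arguments are assumed to come from solvePnP runs against the same physical board position:

```python
import cv2
import numpy as np

def to_homogeneous(rvec, tvec):
    # 4x4 matrix taking board coordinates into this camera's coordinates.
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = np.asarray(tvec).ravel()
    return T

def relative_pose(rvec1, tvec1, rvec2, tvec2):
    # Pose of camera 2 relative to camera 1, valid when both poses were
    # estimated against the same board position.
    T1 = to_homogeneous(rvec1, tvec1)  # board -> camera 1
    T2 = to_homogeneous(rvec2, tvec2)  # board -> camera 2
    return T2 @ np.linalg.inv(T1)      # camera 1 frame -> camera 2 frame
```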
If you are not familiar with camera calibration, it will take some time, but you will have all the information you need.
PS: I didn't realize that I had basically repeated the same thing, since I only skimmed the thread, but at least you have two similar opinions.
Right, the reason the two balls won't be enough is this: that will calibrate the system for one point, but you want your interpolated images to look good over the whole stage or scene. If one of the cameras is offset by a bit, you can get the point with the balls to look correct, but anything off to the side can be messed up.
Also, you may be over-estimating the time it takes. Running the camera calibration will take less than two minutes per camera (once you have the code set up), and it needs to be done only once ever, then you can save the data to re-use. It doesn't change when you move the camera.
Tetragramm, you're right, aligning only on the center point is indeed not enough. Using this method the center is correctly aligned, but things on the side can sometimes be wrong.
How can I contact you? Please give me your Skype ID or add me; my ID is: mihail.tudoran