
360° lens for panoramic view and depth

asked 2018-06-26 10:43:01 -0600 by marcusbarnet

updated 2018-06-27 02:53:05 -0600

Hi all,

I'm sorry if my question is too basic; I searched Google for possible answers, but I haven't found much on this topic.

I would like to use a 360-degree lens with my RGB camera (or use a 360° camera directly) in order to get a panoramic view with depth information. To be more precise, I would like to get a 3D view in a panoramic video.

Several years ago, I used OpenCV to implement stereo vision by using two RGB cameras with a fixed baseline. Should I implement something similar with two 360-degree cameras or can I obtain the 3D view by using just a single camera?

Is there any sample or document which I can read to get more information?

Thank you!


EDIT 1 - 27/06/2018: figures and 360-degree lens


To make things a little clearer, I drew a figure of my system. I need to put the camera (or two cameras) at the bottom of a scope and use it to inspect a black box. I need to see everything in front of the camera in a 3D panoramic view, so that I also get depth information. Scopes usually have a camera with a very narrow FOV (only a few degrees), so you cannot get an overall idea of the contents of the box; moreover, it's like having only one eye, so you have no sense of depth. I would like to use a 360-degree lens to maximize the FOV, and I also need some idea of the depth.

[figure: sketch of the camera mounted at the bottom of the scope, inspecting the box]

The scopes are usually like the ones in this figure and have a very limited FOV: [figure: typical scope with a narrow-FOV camera]

The lens I would like to use is something like this one (or similar):

[figure: example of a 360-degree (fisheye) lens]


1 answer


answered 2018-06-26 15:19:23 -0600 by swebb_denver

If I understand what you are asking: you can't get depth information from a single camera image. With two calibrated cameras, or a single calibrated camera in motion, you should be able to do something like you did before with your stereo vision setup.
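For illustration, a minimal sketch of the two-camera route in OpenCV, assuming an already-rectified pair; the file names, focal length, and baseline are placeholders, and the SGBM parameters are illustrative, not tuned:

```python
import cv2
import numpy as np

# Load a rectified stereo pair (hypothetical file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching.
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d (assumed focal length and baseline).
f_px = 700.0       # focal length in pixels, from calibration
baseline_m = 0.06  # baseline in meters, example value
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
```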


Comments

I will need this application for inspections, so I'll be recording almost static scenes. So it would be better to use two cameras and implement stereo vision; thank you for the tip. Do I need to consider any other aspects, or will it be exactly the same even though I'm now using 360-degree lenses?

marcusbarnet (2018-06-26 16:03:06 -0600)

In theory you could stack two 360-degree lenses, with a stereo baseline distance between them, and perform a narrow stereo disparity calculation (and resultant depth map) across much of their image extent. These cameras are vertically offset so they are out of each other's view. Note that the returned image information is a bit different from what two eyes would return: eyes are horizontally offset, so here the "shadowed" areas with no depth information will be above and below objects rather than to the left or right. I'd also be concerned that 360-degree lenses may require a more complex distortion model and calibration than conventional simple convex lenses or a pinhole model.

opalmirror (2018-06-26 17:38:13 -0600)
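If the pair is vertically offset, one way to reuse OpenCV's horizontal stereo matchers is to rotate both rectified images by 90° so the vertical disparity becomes horizontal. A hedged sketch; the file names are placeholders, and which rotated image plays "left" depends on the actual geometry (swap them if the disparities come out negative):

```python
import cv2
import numpy as np

top = cv2.imread("top_rectified.png", cv2.IMREAD_GRAYSCALE)
bottom = cv2.imread("bottom_rectified.png", cv2.IMREAD_GRAYSCALE)

# OpenCV's matchers search along horizontal epipolar lines, so rotate the
# vertically offset pair by 90 degrees before matching.
left = cv2.rotate(top, cv2.ROTATE_90_CLOCKWISE)
right = cv2.rotate(bottom, cv2.ROTATE_90_CLOCKWISE)

stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Rotate the disparity map back so it lines up with the original images.
disparity = cv2.rotate(disparity, cv2.ROTATE_90_COUNTERCLOCKWISE)
```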

You might find this stereo rig calculator helpful in determining the desired baseline, etc. (to achieve the required depth resolution and other parameters): https://nerian.com/support/resources/...

I'm not sure what you mean by a 360-degree lens. Is it a fisheye lens that images a half sphere, or something more sophisticated (can the camera see "behind" itself)?

Have you thought about how you are going to calibrate the cameras?

swebb_denver (2018-06-26 18:13:09 -0600)
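The rule of thumb behind such calculators is that depth error grows quadratically with distance: δZ ≈ Z² · δd / (f · B), for focal length f in pixels, baseline B, and disparity (matching) error δd. A small sketch with assumed numbers:

```python
def depth_resolution(z_m, focal_px, baseline_m, disparity_err_px=0.25):
    """Approximate depth error: dZ = Z^2 * dd / (f * B)."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Example with assumed values: f = 800 px, B = 50 mm, 0.25 px matching error.
print(depth_resolution(z_m=1.0, focal_px=800.0, baseline_m=0.05))  # ~0.006 m at 1 m
```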

I have done a lot of testing with various wide-FOV lenses (including fisheye) and have found the rational distortion model to do very well up to roughly 150° FOV, even with significant distortion. For true fisheye lenses, however, the fisheye model works better (particularly at the extremes of the image). Unfortunately (at least with older versions of OpenCV) the fisheye calibration was less stable (didn't converge as reliably), and the undistort functions were specific to the fisheye model.

I have been able to achieve very good reprojection error (~0.1 pixel RMS) by using a high-quality Charuco calibration target. In my experience, a Charuco target is the only way to go with high-distortion / wide-FOV lenses, because it works without having to see the entire target.

swebb_denver (2018-06-26 18:16:06 -0600)
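For reference, the two calibration paths being compared look roughly like this; obj_pts/img_pts (and their fisheye-shaped counterparts) are assumed to have been collected elsewhere from target detections:

```python
import cv2
import numpy as np

# obj_pts / img_pts: per-view point arrays from your target detection
# (assumed prepared elsewhere); image_size = (width, height).

# Standard model with the extra rational terms (k4..k6) enabled:
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, image_size, None, None,
    flags=cv2.CALIB_RATIONAL_MODEL)

# Fisheye model (note the point shapes it expects: (N,1,3) / (N,1,2)):
K_fe = np.zeros((3, 3))
D_fe = np.zeros((4, 1))
rms_fe, K_fe, D_fe, rvecs_fe, tvecs_fe = cv2.fisheye.calibrate(
    obj_pts_fe, img_pts_fe, image_size, K_fe, D_fe,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW)
```

And a minimal Charuco loop with the contrib aruco module (the pre-4.7 API, matching 2018-era OpenCV; the board dimensions and image list are made up). The point of Charuco here is exactly what the comment says: interpolation works even when only part of the board is visible:

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
# 11x8 squares, 30 mm squares / 22 mm markers -- example dimensions.
board = cv2.aruco.CharucoBoard_create(11, 8, 0.030, 0.022, dictionary)

all_corners, all_ids = [], []
for fname in image_files:  # assumed list of calibration image paths
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        continue
    # Interpolate chessboard corners from the detected markers; this works
    # even when only part of the board is in view.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
        corners, ids, gray, board)
    if n > 10:
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

rms, K, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, gray.shape[::-1], None, None)
```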

I typically have to bootstrap the calibration with an Aruco calibration, plus 2 or 3 passes of iterative filtering - the initial Aruco calibration just isn't good enough to predict the image locations of the other features (the chessboard corners), so you end up with bad measurements at the extremes of the image. After 2 or 3 iterations I'm typically able to keep the majority of the visible points.

swebb_denver (2018-06-26 18:18:39 -0600)
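The bootstrap-and-filter idea, sketched as hypothetical code (the per-view point arrays, threshold, and pass count are all assumptions): calibrate, reproject, drop outliers, repeat:

```python
import cv2
import numpy as np

def filter_by_reprojection(obj_pts, img_pts, image_size, passes=3, thresh_px=1.0):
    """Iteratively drop points whose reprojection error exceeds thresh_px.

    obj_pts / img_pts: per-view (N,3) and (N,2) float32 arrays (assumed given).
    In practice, views left with too few surviving points should be dropped.
    """
    for _ in range(passes):
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, image_size, None, None)
        kept_obj, kept_img = [], []
        for o, i, r, t in zip(obj_pts, img_pts, rvecs, tvecs):
            # Predict where the current model says each feature should land.
            proj, _ = cv2.projectPoints(o, r, t, K, dist)
            err = np.linalg.norm(proj.reshape(-1, 2) - i.reshape(-1, 2), axis=1)
            keep = err < thresh_px
            kept_obj.append(o[keep])
            kept_img.append(i[keep])
        obj_pts, img_pts = kept_obj, kept_img
    return K, dist, obj_pts, img_pts
```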

Thank you for all your suggestions! I added more information to my question so you can get a better idea of what I would like to do. The lens is like a fisheye, and the camera is mounted at the end of the scope as shown in the figures above. I hope you can help me :)

marcusbarnet (2018-06-27 02:55:31 -0600)

Ah, then my comments about narrow-camera stereo disparity are not relevant. With wide-FOV cameras, it seems to me the depth information will be most accurate in the volume closest to the plane perpendicular to the baseline between the cameras. The farther you get from that plane, toward the poles pointed to by the ends of the baseline segment, the smaller the range of the disparity - at those poles you effectively have zero baseline. However, if you are able to rotate the cameras with respect to each other, rotating the baseline, you may be able to accumulate and merge point cloud data over time to map the entire volume. This would require external calibration at each rotation value.

opalmirror (2018-06-27 13:07:46 -0600)
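Merging clouds captured at different baseline orientations is, in principle, just transforming each into a common frame once the per-rotation extrinsics are known; a minimal sketch:

```python
import numpy as np

def merge_clouds(clouds, rotations, translations):
    """Transform each (N,3) cloud by its known extrinsics and concatenate.

    rotations: list of 3x3 arrays, translations: list of 3-vectors, both
    assumed to come from the external calibration at each rotation value.
    """
    merged = [cloud @ R.T + t for cloud, R, t in zip(clouds, rotations, translations)]
    return np.vstack(merged)
```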

Thank you for your support! Since the scope is very small, I don't think I'll be able to use a wide baseline; at most I can set up a baseline of about 10 mm. Do you think that's too short? Is there any restriction on the length of the baseline and the size of the lens?

marcusbarnet (2018-06-27 15:02:50 -0600)
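Plugging B = 10 mm into the depth-resolution rule of thumb from earlier gives a feel for what such a short baseline can do; the focal length and matching error here are assumed values:

```python
# dZ = Z^2 * dd / (f * B), with f = 800 px, B = 10 mm, dd = 0.25 px.
for z in (0.1, 0.3, 0.5, 1.0):  # working distances in meters
    dz = (z ** 2) * 0.25 / (800.0 * 0.01)
    print(f"Z = {z:.1f} m -> depth error ~ {dz * 1000:.1f} mm")
# At 0.1 m the error is ~0.3 mm; at 1 m it is already ~31 mm, so a 10 mm
# baseline is mainly useful at close range.
```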
