Lidar depth-map interpolation using "guidance" image
Hey,
currently I'm working on a project to interpolate sparse lidar depth-maps to make them dense. For that I'm trying to use the camera image as guidance information. So far this works quite well, but I ran into a problem that I don't know how to solve yet. I know this isn't a specific OpenCV topic, but I think a lot of you are quite familiar with computer vision, so I hope someone can help me.
What I'm trying right now: I iterate through all unknown depth values in the sparse lidar depth-map. Each unknown pixel has a corresponding RGB value from the image. I then search for the nearest known depth values (whose RGB values are also known). Using the relation between depth and RGB value, I can replace the unknown value with a predicted one (e.g. by a linear transformation from RGB to depth).
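For what it's worth, the procedure above could be sketched roughly like this. This is a toy version under my own assumptions (grayscale intensity instead of full RGB, unknown depth marked as 0, a brute-force nearest-neighbour search, and a least-squares linear fit of depth against intensity), not your actual code:

```python
import numpy as np

def guided_fill(depth, gray, k=8):
    """Fill unknown depth pixels (value 0) by fitting a local linear
    model depth ~ a * intensity + b to the k nearest known samples.
    Sketch only: brute-force search, no handling of outliers."""
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    ys, xs = np.nonzero(depth > 0)          # known lidar samples
    known = np.stack([ys, xs], axis=1)
    for y in range(h):
        for x in range(w):
            if depth[y, x] > 0:
                continue
            # k nearest known samples by image-plane distance
            d2 = (known[:, 0] - y) ** 2 + (known[:, 1] - x) ** 2
            idx = np.argsort(d2)[:k]
            iy, ix = known[idx, 0], known[idx, 1]
            # least-squares fit: depth = a * intensity + b
            inten = gray[iy, ix].astype(np.float64)
            A = np.stack([inten, np.ones(len(idx))], axis=1)
            coef, *_ = np.linalg.lstsq(A, out[iy, ix], rcond=None)
            out[y, x] = coef[0] * float(gray[y, x]) + coef[1]
    return out
```

For real images you'd want a k-d tree (e.g. `scipy.spatial.cKDTree`) instead of the brute-force `argsort`, but the idea is the same.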
My problem: there are some errors in the depth-map (caused maybe by the motion of the vehicle and the spinning rate of the lidar, since each vertical lidar scan line is acquired at a different time), so for small objects I get multiple depth values. This can be seen in the image here. For those points the same RGB value maps to multiple depth values, which ruins my algorithm. There are a lot of algorithms that have solved the interpolation task, like here, and they don't seem to suffer from this problem, but unfortunately none of them report in their papers how they solve it. Is there any known strategy to preprocess lidar point clouds/depth-maps? Does anyone have an idea how I can solve this?
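One common preprocessing heuristic for this kind of mixed foreground/background contamination is to drop lidar samples that lie well behind the closest sample in their local neighbourhood, on the assumption that the nearest surface is the one actually visible in the image. A minimal sketch of that idea (window size and relative tolerance are arbitrary choices of mine, and 0 again marks "unknown"):

```python
import numpy as np

def remove_occluded(depth, window=5, rel_tol=0.1):
    """Mark lidar samples as unknown if they are significantly farther
    than the closest sample in a local window. Heuristic: mixed depths
    at one image region usually mean background points leaked past an
    occluding foreground object."""
    out = depth.astype(np.float64).copy()
    r = window // 2
    for y, x in zip(*np.nonzero(depth > 0)):
        patch = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        dmin = patch[patch > 0].min()       # local foreground depth
        # discard samples clearly behind the local foreground
        if depth[y, x] > dmin * (1.0 + rel_tol):
            out[y, x] = 0.0                 # back to "unknown"
    return out
```

After this pass the surviving samples around a small object should all belong to the same surface, so the RGB-to-depth fit becomes consistent again. The tolerance has to be tuned: too tight and you erase valid depth discontinuities, too loose and background points survive.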
Best
Horst
Please put your images here, not on an external bin, where they will expire. Thank you.
There are lots of "depth from single image" CNNs out there nowadays, maybe you can use one of those:
https://github.com/yinyunie/3D-Shape-...
Yeah, you are totally right, there are multiple CNNs that solve this problem (http://www.cvlibs.net/datasets/kitti/...). But I actually want to do this on my own, and I've got no clue how to handle the mentioned error.