# Lidar depth-map interpolation using "guidance" image [closed]

Hey,

currently I'm working on a project to interpolate sparse lidar depth maps to make them dense. For this I'm using the camera image as guidance information. So far this works quite well, but I've run into a problem that I don't know how to solve yet. I know this isn't a ROS-specific topic, but I think a lot of you are familiar with computer vision, so I hope someone can help me.

What I'm trying right now is: I iterate over all unknown depth values in the sparse lidar depth map. Each unknown pixel has a corresponding RGB value from the camera image. I then search for the nearest known depth values (whose RGB values are also known). Using the relation between their depth and RGB values, I replace the unknown value with a predicted one (e.g. via a linear transformation from RGB to depth).
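In case it helps to discuss concretely, the step above could be sketched roughly like this (a minimal, unoptimized sketch; the function name `guided_fill`, the choice of `k`, and the convention that 0 marks an unknown depth pixel are all my assumptions, not from the original post):

```python
import numpy as np

def guided_fill(depth, rgb, k=8):
    """Fill unknown depth pixels (encoded as 0) from the k nearest known
    samples, using a per-pixel linear RGB -> depth least-squares fit.
    depth: HxW float array, 0 = unknown; rgb: HxWx3 image.
    """
    H, W = depth.shape
    known = np.argwhere(depth > 0)                  # (N, 2) coords of valid lidar samples
    out = depth.copy().astype(float)
    for r, c in np.argwhere(depth == 0):
        # pick the k known samples nearest in the image plane
        d2 = ((known - (r, c)) ** 2).sum(axis=1)
        idx = known[np.argsort(d2)[:k]]
        # design matrix [R, G, B, 1] and target depths
        X = np.hstack([rgb[idx[:, 0], idx[:, 1]].astype(float),
                       np.ones((len(idx), 1))])
        y = depth[idx[:, 0], idx[:, 1]].astype(float)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear RGB -> depth fit
        out[r, c] = np.hstack([rgb[r, c].astype(float), 1.0]) @ coef
    return out
```

This is O(N) per unknown pixel; a KD-tree over the known samples would make the neighbor search practical on full-resolution depth maps.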

My problem: there are some errors in the depth map (maybe caused by the motion of the vehicle and the spinning rate of the lidar, since each vertical lidar line is acquired at a different time), so for small objects I get multiple depth values. This can be seen in the image Here. For those points the same RGB value maps to multiple depth values, which ruins my algorithm. There are a lot of algorithms that have solved the interpolation task, like Here, and they don't seem to suffer from these problems. Unfortunately, none of them reports in their paper how they solve it. Is there any known strategy to preprocess lidar point clouds/depth maps? Does anyone have an idea how I can solve that problem?
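One common preprocessing idea for exactly this symptom (foreground and background depths landing on the same image region) is a local occlusion filter: drop any sample that is much farther than the closest sample in its window, on the assumption that it bled through from the background. A minimal sketch under that assumption (the name `remove_occluded` and the `win`/`ratio` tunables are hypothetical, not from any of the cited papers):

```python
import numpy as np

def remove_occluded(depth, win=3, ratio=1.3):
    """Mark depth samples as unknown (0) if they are significantly
    farther than the nearest valid sample in a win x win window --
    a simple foreground/background conflict filter.
    depth: HxW float array, 0 = unknown.
    """
    H, W = depth.shape
    out = depth.copy()
    half = win // 2
    for r, c in np.argwhere(depth > 0):
        r0, r1 = max(0, r - half), min(H, r + half + 1)
        c0, c1 = max(0, c - half), min(W, c + half + 1)
        patch = depth[r0:r1, c0:c1]
        near = patch[patch > 0].min()       # closest valid depth nearby
        if depth[r, c] > ratio * near:      # likely background bleed-through
            out[r, c] = 0                   # discard; let interpolation refill it
    return out
```

The removed pixels then simply become part of the unknown set that the guided interpolation fills, so the conflicting background depths no longer corrupt the RGB-to-depth fit.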


Best

Horst

Please restrict ros answers questions to ROS questions