
background subtraction

asked 2018-08-14 07:14:44 -0500 by th6262, updated 2018-08-14 07:14:56 -0500

Hey,

given an indoor, fixed-mounted lidar scanner, is there a simple way to apply background subtraction to a point cloud?

More specifically, is there a way to subscribe to the lidar's incoming point cloud topic, compare it against a previously captured point cloud (or PCD file) of the fixed surroundings, and output only those objects which are not part of the fixed surroundings?
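To make this concrete, the kind of node I'm imagining would look roughly like the sketch below (the topic names, the PCD path and the 10 cm threshold are made up, and pcl::SegmentDifferences is just one ready-made PCL filter for this kind of cloud difference):

    // background_filter.cpp -- rough sketch only (ROS1 + PCL).
    // "cloud", "foreground" and "background.pcd" are placeholder names.
    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/segmentation/segment_differences.h>

    pcl::PointCloud<pcl::PointXYZ>::Ptr background(new pcl::PointCloud<pcl::PointXYZ>);
    ros::Publisher pub;

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr live(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::fromROSMsg(*msg, *live);

      // Keep only live points with no background point within ~10 cm.
      pcl::SegmentDifferences<pcl::PointXYZ> diff;
      diff.setInputCloud(live);
      diff.setTargetCloud(background);
      diff.setDistanceThreshold(0.1 * 0.1);  // threshold is a squared distance

      pcl::PointCloud<pcl::PointXYZ> foreground;
      diff.segment(foreground);

      sensor_msgs::PointCloud2 out;
      pcl::toROSMsg(foreground, out);
      out.header = msg->header;              // keep the lidar's frame and stamp
      pub.publish(out);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "background_filter");
      ros::NodeHandle nh;
      pcl::io::loadPCDFile<pcl::PointXYZ>("background.pcd", *background);
      ros::Subscriber sub = nh.subscribe("cloud", 1, cloudCallback);
      pub = nh.advertise<sensor_msgs::PointCloud2>("foreground", 1);
      ros::spin();
      return 0;
    }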


Comments


You could simply filter out any point that is not closer to the sensor than the corresponding background point. The main challenge would be aligning the scans if they're not fixed, structured clouds.

PeteBlackerThe3rd ( 2018-08-14 07:57:23 -0500 )

Hello again! The scanner is fixed, mounted in the top corner of a room. I could capture a PCD file of the "empty" room (only fixed objects).

I would like to compare the live point cloud with the fixed cloud and filter out every point in the live cloud that is also in the fixed cloud.

th6262 ( 2018-08-14 08:00:21 -0500 )

By "filter out" I mean, I want to end up with a pointcloud of only the objects which aren't part of the fixed cloud.

Note: The room has multiple "fixed" obstacles, so simple ground-plane removal wouldn't be enough in this case, hence the background-subtraction approach.

th6262 ( 2018-08-14 08:03:36 -0500 )

1 Answer


answered 2018-08-14 08:11:36 -0500

This depends on the type of LIDAR scanner you're using. Does it produce structured point clouds, i.e. a 2D grid of points of the same shape and size each time? If so, there should be a one-to-one mapping between the points of any two point clouds. In that case you can simply compare the distance of each new point from the sensor to the distance of the corresponding background point, and only include it if it's closer (plus a noise threshold).
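Something like this, for example (a minimal sketch, assuming pcl::PointXYZ clouds of identical width and height, with the sensor at the origin of the cloud frame; the 5 cm noise margin is a made-up value):

    #include <cmath>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Keep live points that are measurably closer to the sensor than the
    // background point at the same grid index. Both clouds must be organized
    // with identical width and height.
    pcl::PointCloud<pcl::PointXYZ>
    subtractOrganized(const pcl::PointCloud<pcl::PointXYZ>& live,
                      const pcl::PointCloud<pcl::PointXYZ>& background,
                      float noise = 0.05f)              // 5 cm noise margin
    {
      pcl::PointCloud<pcl::PointXYZ> foreground;
      for (std::size_t i = 0; i < live.size(); ++i)     // one-to-one mapping
      {
        const pcl::PointXYZ& p = live[i];
        const pcl::PointXYZ& b = background[i];
        const float rp = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        const float rb = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
        if (std::isfinite(rp) && rp < rb - noise)       // closer => new object
          foreground.push_back(p);
      }
      return foreground;
    }

Because the mapping is given by the grid index, no search structure is needed at all.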

If there isn't a one-to-one mapping between scans, then your job is slightly harder, because you'll have to work out this mapping point by point and decide what to do when the current scan has points that don't correspond to anything in the background scan. You may even need to interpolate the background scan to generate a suitable matching point.
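For example, a minimal sketch of that per-point matching using a kd-tree over the background cloud (the 10 cm threshold is a placeholder; PCL's pcl::SegmentDifferences filter packages essentially this loop):

    #include <cmath>
    #include <vector>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/kdtree/kdtree_flann.h>

    // Keep live points whose nearest background point is farther than `thresh`.
    pcl::PointCloud<pcl::PointXYZ>
    subtractBackground(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& live,
                       const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& background,
                       float thresh = 0.1f)             // 10 cm, placeholder
    {
      pcl::KdTreeFLANN<pcl::PointXYZ> tree;
      tree.setInputCloud(background);  // build once; reuse while the room is static

      pcl::PointCloud<pcl::PointXYZ> foreground;
      std::vector<int> idx(1);
      std::vector<float> sqr_dist(1);
      for (const pcl::PointXYZ& p : live->points)
      {
        if (!std::isfinite(p.x))
          continue;                                     // skip invalid returns
        // Each nearest-neighbour lookup is O(log n), not a scan of the cloud.
        if (tree.nearestKSearch(p, 1, idx, sqr_dist) == 0 ||
            sqr_dist[0] > thresh * thresh)
          foreground.push_back(p);
      }
      return foreground;
    }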

Processing it as a depth map is simpler and far more efficient than processing it as an unstructured point cloud.

Hope this makes sense.


Comments

I'm using this scanner: SICK MRS6000

So I suppose it's unstructured 3D point cloud data. What you've described in the first part of your reply was my initial approach, however your answer explains..

th6262 ( 2018-08-14 09:24:00 -0500 )

..why I couldn't work it out / find any documentation on it. I suppose it would be computationally infeasible to compare every point in one scan to every point in the static scan and then decide based on distance + threshold.

th6262 ( 2018-08-14 09:24:30 -0500 )

My thought was that this approach could be way more efficient than down-sampling, clustering, etc. (the classical approach).

th6262 ( 2018-08-14 09:26:50 -0500 )
