How to best generate a Point Cloud from IR data [closed]

asked 2013-08-06 02:20:23 -0500

Hendrik Wiese

Hi folks,

at the moment I'm generating a point cloud from IR sensor data, which I have to rely on because I don't have a laser scanner. It works, but I suppose there are things I could optimize.

The way I generate the data is simple: I receive the distance values of four IR sensors attached to my robot at known positions, transform each value into /odom coordinates using tf, and append the new transformed point to a global point cloud instance that I publish over and over again. This works, but it results in a huge amount of data being shoveled around, and it doesn't take wrong points into account. They are treated the same way as correct points and sent on to the navigation stack...

Well, I'd like to know what, in your opinion, is the best way to get from four IR sensor distance values to a good point cloud (using Python).
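For reference, the geometry behind each cloud point is just projecting the range reading along the sensor's boresight from its mounting pose. A minimal sketch (mounting pose values and the function name are hypothetical; in the actual setup tf would carry this transform on into /odom):

```python
import math

def ir_range_to_point(distance, sensor_x, sensor_y, sensor_yaw):
    """Project one IR range reading along the sensor's boresight
    to get a point in the robot's base frame.
    sensor_x/sensor_y/sensor_yaw describe where the sensor is
    mounted on the robot (hypothetical values below)."""
    px = sensor_x + distance * math.cos(sensor_yaw)
    py = sensor_y + distance * math.sin(sensor_yaw)
    return (px, py, 0.0)

# Example: a sensor mounted 10 cm ahead of base_link, facing forward,
# reading 0.5 m -> a point 0.6 m ahead of the robot.
print(ir_range_to_point(0.5, 0.10, 0.0, 0.0))
```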



Closed for the following reason question is not relevant or outdated by tfoote
close date 2016-04-27 01:58:54.770997


Two things: 1) why not just create lots of 4-point point clouds? 2) what is a wrong point in this context?

David Lu  ( 2013-08-06 05:57:07 -0500 )
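David Lu's suggestion amounts to publishing one small, freshly stamped cloud per sensor cycle instead of growing a global one. A sketch of the idea in plain Python (sensor poses are hypothetical; in ROS this list of points would go into a stamped sensor_msgs/PointCloud in the base frame, and tf downstream handles the transform into /odom):

```python
import math

# Hypothetical mounting poses (x, y, yaw) of the four IR sensors
# in the base frame -- replace with your robot's actual geometry.
SENSOR_POSES = [
    (0.10, 0.05, math.radians(45)),
    (0.10, -0.05, math.radians(-45)),
    (-0.10, 0.05, math.radians(135)),
    (-0.10, -0.05, math.radians(-135)),
]

def make_scan_cloud(ranges):
    """Build one small 4-point cloud from the latest readings.
    Publishing this each cycle (stamped in the base frame) lets
    consumers transform it themselves, so no ever-growing global
    cloud has to be shipped around."""
    points = []
    for r, (sx, sy, yaw) in zip(ranges, SENSOR_POSES):
        points.append((sx + r * math.cos(yaw),
                       sy + r * math.sin(yaw),
                       0.0))
    return points

cloud = make_scan_cloud([0.5, 0.5, 0.5, 0.5])
print(len(cloud))  # one point per sensor
```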

1) How do I do that? 2) Invalid sensor data that creates a point in the point cloud which doesn't exist in reality. For example, when odometry data is invalid for just a single cycle, valid range sensor data gets transformed to the wrong location, and then there's an invalid point in the cloud.

Hendrik Wiese  ( 2013-08-07 05:18:54 -0500 )
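One common way to keep such single-cycle glitches out of the cloud is to gate readings before they are ever converted to points: drop anything outside the sensor's rated range, and drop sudden one-cycle jumps relative to the last accepted reading. A sketch (function name and all thresholds are hypothetical; tune them to your IR sensor's datasheet):

```python
def valid_reading(r, prev_r, min_range=0.04, max_range=0.80, max_jump=0.30):
    """Reject readings outside the sensor's rated range, and
    readings that jump implausibly far from the last accepted
    one (e.g. a single bad odometry/range cycle).
    Thresholds are hypothetical -- tune them per sensor."""
    if not (min_range <= r <= max_range):
        return False
    if prev_r is not None and abs(r - prev_r) > max_jump:
        return False
    return True

# Keep only plausible readings; prev tracks the last *accepted*
# reading so one outlier doesn't poison the jump check.
readings = [0.50, 0.52, 1.90, 0.51, 0.02]
kept = []
prev = None
for r in readings:
    if valid_reading(r, prev):
        kept.append(r)
        prev = r
print(kept)  # the 1.90 spike and the 0.02 under-range read are dropped
```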