Robotics StackExchange | Archived questions

How to best generate a Point Cloud from IR data

Hi folks,

At the moment I'm generating a point cloud from IR sensor data, which I have to rely on because I don't have a laser scanner. It works, but I suppose there are things I could optimize.

The way I generate the data is simple: I receive the distance values of four IR sensors attached to my robot at known positions, transform the values into /odom coordinates using tf, and append each newly transformed point to a global point cloud instance that I publish over and over again. It works, but it results in a huge amount of data being shuffled around, and it doesn't take wrong points into account. They are treated the same way as correct points and sent on to the navigation stack...
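For reference, here's a minimal sketch (plain Python, no ROS dependencies) of the 2D transform that tf performs for each reading: sensor origin plus range along the sensor's viewing direction, rotated and translated by the robot's pose in /odom. The sensor mount poses below are placeholders, not your actual layout; in ROS you would wrap the resulting points into a small sensor_msgs/PointCloud each cycle rather than computing this by hand.

```python
import math

# Hypothetical sensor mounts: (x, y, yaw) of each IR sensor in the
# robot's base frame -- placeholder values, adjust to your robot.
SENSOR_POSES = [
    (0.10, 0.05, math.radians(45)),
    (0.10, -0.05, math.radians(-45)),
    (-0.10, 0.05, math.radians(135)),
    (-0.10, -0.05, math.radians(-135)),
]

def ir_points_in_odom(ranges, robot_pose):
    """Turn four IR range readings into (x, y) points in the odom frame.

    `robot_pose` is (x, y, yaw) of the robot in /odom; this is the same
    2D transform that tf does for you when you transform each point.
    """
    rx, ry, ryaw = robot_pose
    points = []
    for r, (sx, sy, syaw) in zip(ranges, SENSOR_POSES):
        # Point in the robot's base frame: sensor origin plus the
        # measured range along the sensor's viewing direction.
        bx = sx + r * math.cos(syaw)
        by = sy + r * math.sin(syaw)
        # Rotate and translate into the odom frame.
        ox = rx + bx * math.cos(ryaw) - by * math.sin(ryaw)
        oy = ry + bx * math.sin(ryaw) + by * math.cos(ryaw)
        points.append((ox, oy))
    return points
```

Publishing the four fresh points per cycle (instead of appending to an ever-growing global cloud) keeps the message size constant and lets the costmap do the accumulation.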

Well, I'd like to know what, in your opinion, is the best way to get from four IR sensor distance values to a good point cloud (using Python).

Thanks

Asked by Hendrik Wiese on 2013-08-06 02:20:23 UTC

Comments

Two things: 1) why not just create lots of four-point point clouds? 2) what is a wrong point in this context?

Asked by David Lu on 2013-08-06 05:57:07 UTC

1) How do I do that? 2) Invalid sensor data that creates a point in the point cloud which doesn't exist in reality. For example, when the odometry data is invalid for a single cycle, a valid range reading gets transformed to the wrong location, and there's an invalid point in the cloud.
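One common way to suppress the single-cycle glitches described above is to reject readings outside the sensor's rated range and median-filter each IR channel over a short window, so that one bad cycle can't produce a point. A minimal sketch, assuming Sharp-style IR range limits (the numbers are placeholders, not from the original post):

```python
from collections import deque
import statistics

# Hypothetical validity limits for a Sharp-style IR ranger -- adjust
# to your sensor's datasheet; these numbers are placeholders.
MIN_RANGE = 0.10  # metres
MAX_RANGE = 0.80

class RangeFilter:
    """Median-filter one IR channel and drop out-of-range readings.

    A single bad cycle (odometry glitch, sensor spike) is suppressed
    because the median over the window ignores one outlier.
    """
    def __init__(self, window=5):
        self.window = deque(maxlen=window)

    def update(self, r):
        if not (MIN_RANGE <= r <= MAX_RANGE):
            return None                 # reading invalid: discard it
        self.window.append(r)
        if len(self.window) < self.window.maxlen:
            return None                 # not enough history yet
        return statistics.median(self.window)
```

Only readings that survive the filter would then be transformed and published, so a one-cycle odometry or sensor fault never reaches the navigation stack.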

Asked by Hendrik Wiese on 2013-08-07 05:18:54 UTC

Answers