Is a Kinect an accurate way to simulate a Velodyne VLP-16?
Hello everyone!
I'm starting a project with ROS that needs a reliable simulation world with a Velodyne VLP-16 sensor. The catch is that I won't have access to the real lidar for a couple of months, so I was planning to use the simulation environment to start working on some planning algorithms that the real robot will use.
So far my options for modeling the VLP-16 are: a) a Kinect (outputs a point cloud), or b) the ray sensor built in the tutorial at http://gazebosim.org/tutorials?tut=gu... (this outputs laser rays).
Since I can't experiment with the real thing, I'm not sure whether it's reliable to use option a) and assume its point cloud is equivalent to getting laser-ray output, running it through velodyne_driver, and then converting it to a point cloud with velodyne_pointcloud.
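For what it's worth, a Gazebo ray sensor can be configured to roughly match the VLP-16's geometry (16 channels over a ±15° vertical FOV, 360° horizontal sweep, ~100 m range). Below is only a sketch; the sample counts and update rate are my assumptions, not values taken from a calibrated model:

```xml
<!-- Sketch of a Gazebo <sensor> block approximating a Velodyne VLP-16.
     Numeric values are assumptions based on the published spec sheet:
     16 beams over a ±15° vertical FOV, 360° horizontal scan, ~100 m range. -->
<sensor type="ray" name="velodyne_vlp16_approx">
  <pose>0 0 0 0 0 0</pose>
  <visualize>false</visualize>
  <update_rate>10</update_rate>             <!-- assumed scan rate in Hz -->
  <ray>
    <scan>
      <horizontal>
        <samples>900</samples>              <!-- assumed angular resolution -->
        <resolution>1</resolution>
        <min_angle>-3.14159</min_angle>     <!-- full 360° sweep -->
        <max_angle>3.14159</max_angle>
      </horizontal>
      <vertical>
        <samples>16</samples>               <!-- 16 laser channels -->
        <resolution>1</resolution>
        <min_angle>-0.261799</min_angle>    <!-- -15 degrees -->
        <max_angle>0.261799</max_angle>     <!-- +15 degrees -->
      </vertical>
    </scan>
    <range>
      <min>0.3</min>
      <max>100.0</max>
      <resolution>0.01</resolution>
    </range>
  </ray>
</sensor>
```

Note that this outputs simulated laser rays directly, so you would convert them to a point cloud yourself (e.g. via a Gazebo ROS plugin) rather than going through velodyne_driver, which expects the real device's packet format.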
I'm using ROS Hydro.
Thank you for any input.
I assume you are referring to something other than the ROS Kinetic distro, but I have no idea what you are asking about.
I edited the question; I think it's clearer now.
So, what is a "kinetic"?
Sorry, kinect ... stupid typo...