Building a point cloud from position, orientation and laser scan data
I want to build a point cloud from position and orientation (pose message) data gathered over time, combined with laser scan messages. I understand that I need to publish a topic of type PointCloud, but I'm not sure how to do the processing in between.
The stored pose data is read row by row in a loop and packed into a pose message:
// Columns 3-5 hold x, y, z position; columns 6-8 hold roll, pitch, yaw in degrees.
for (size_t t = 0; t < data_set.size(); ++t)
{
    // btQuaternion takes (yaw, pitch, roll), hence the reversed column order.
    btQuaternion q(pcl::deg2rad(data_set[t][8]),   // yaw
                   pcl::deg2rad(data_set[t][7]),   // pitch
                   pcl::deg2rad(data_set[t][6]));  // roll
    pose3D.position.x = data_set[t][3];
    pose3D.position.y = data_set[t][4];
    pose3D.position.z = data_set[t][5];
    pose3D.orientation.x = q.x();
    pose3D.orientation.y = q.y();
    pose3D.orientation.z = q.z();
    pose3D.orientation.w = q.w();
}
Could someone provide a code snippet or example that implements this?