Hello Nirmal,
This is what you need.
http://wiki.ros.org/navigation/Tutori...
Following that tutorial will make sure of two things:
The odom topic will carry the odometry information - the message type for that topic is nav_msgs/Odometry.
In tf there will be a new transform between base_link and odom. The transform from odom to base_link, which defines the position and orientation of base_link in the odom frame, is the same as the position/orientation part of the odometry message. This should be apparent from the code.
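To illustrate that consistency without pulling in ROS itself, here is a plain-Python sketch: the yaw estimated from wheel odometry is turned into a quaternion (as tf's createQuaternionMsgFromYaw does for a planar robot), and the very same pose is filled into both the Odometry message and the odom -> base_link transform. The pose values are made up for the example.

```python
import math

def quaternion_from_yaw(yaw):
    """Planar rotation as a quaternion (x, y, z, w), the way tf builds
    one from a single yaw angle for a 2D robot."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Hypothetical pose estimated from wheel odometry.
x, y, yaw = 1.5, 0.2, math.pi / 4

# The same pose is used twice: once in the nav_msgs/Odometry message ...
odom_pose = {"position": (x, y, 0.0),
             "orientation": quaternion_from_yaw(yaw)}

# ... and once as the odom -> base_link transform broadcast on tf.
tf_transform = {"translation": (x, y, 0.0),
                "rotation": quaternion_from_yaw(yaw)}

# The two must agree, which is exactly what the tutorial's code does.
assert odom_pose["position"] == tf_transform["translation"]
assert odom_pose["orientation"] == tf_transform["rotation"]
```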
However, I am not sure exactly how the velocities/distances are computed in that tutorial.
Given that you already have the ticks, I guess all that you have to do now is this:
- Convert the ticks to distance travelled or velocity - I think distance travelled will give better accuracy.
- Then simply adapt that code along with the tutorial to publish the odom information.
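The first step above is just wheel geometry. A minimal sketch, assuming hypothetical encoder parameters (substitute your robot's actual ticks-per-revolution and wheel radius):

```python
import math

# Hypothetical encoder parameters -- substitute your robot's values.
TICKS_PER_REV = 360   # encoder ticks per full wheel revolution
WHEEL_RADIUS = 0.05   # wheel radius in metres

def ticks_to_distance(ticks):
    """Distance travelled by one wheel for a given tick count, in metres."""
    return 2.0 * math.pi * WHEEL_RADIUS * ticks / TICKS_PER_REV

def ticks_to_velocity(delta_ticks, dt):
    """Wheel velocity from the tick change over a time step dt, in m/s."""
    return ticks_to_distance(delta_ticks) / dt

# 180 ticks = half a revolution = half the wheel circumference
print(ticks_to_distance(180))  # ~0.157 m
```

Accumulating distances rather than differentiating to velocities avoids amplifying the quantization noise of the encoder, which is why distance travelled tends to give better accuracy.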
Once you have this, you have to set up the navigation stack, and then you can run Monte Carlo localization.
Have a great day!
If you want a 2D map, use Hector SLAM; it doesn't need odometry.
Do you want to use the Kinect's 3D data or just 2D (planar) data as a pseudo-laser? If it is just the 2D data, it is relatively simple.
I am using the Kinect for mapping; I can take the 3D data from the Kinect and convert it to 2D for a pseudo-laser with the pointcloud_to_laserscan node. I also need odometry data, so how do I publish the odometry data?
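For reference, the pseudo-laser part is usually just a launch file around the pointcloud_to_laserscan node. A sketch - the topic names and height/range limits here are assumptions for a typical Kinect setup, so adjust them for your robot:

```xml
<launch>
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
        name="pointcloud_to_laserscan">
    <!-- Kinect point cloud in, pseudo-laser scan out (topic names assumed) -->
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="scan" to="/scan"/>
    <!-- Keep only a horizontal slice of the cloud (height limits are guesses) -->
    <param name="min_height" value="0.0"/>
    <param name="max_height" value="1.0"/>
    <!-- Kinect depth sensing is unreliable outside roughly this range -->
    <param name="range_min" value="0.45"/>
    <param name="range_max" value="4.0"/>
  </node>
</launch>
```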
How are you moving your robot?
What robot is it? Is it a custom robot? What odometry sensors do you have?
Yes, it is a custom robot running on a 32-bit ARM controller. It has wheel encoders from which I can fetch the ticks of the individual wheels. Will this be sufficient? But still the question remains: how do I publish the odometry data for use with the navigation stack?
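Per-wheel ticks are sufficient for a differential-drive robot. Once the ticks are converted to per-wheel distances, the standard differential-drive model turns them into the (x, y, theta) pose that goes into the Odometry message and the odom -> base_link transform. A minimal sketch of that integration step, assuming a hypothetical wheel separation:

```python
import math

WHEEL_SEPARATION = 0.30  # metres between the two wheels (hypothetical)

def update_pose(x, y, theta, d_left, d_right):
    """Integrate one odometry step from per-wheel distances (metres).

    Standard differential-drive kinematics: the robot centre moves by the
    mean wheel distance, and the heading changes by the wheel-distance
    difference divided by the wheel separation.
    """
    d_center = (d_left + d_right) / 2.0               # centre displacement
    d_theta = (d_right - d_left) / WHEEL_SEPARATION   # heading change
    # Use the mid-step heading for a slightly better approximation of the arc.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Straight line: both wheels travel 0.1 m, heading unchanged.
x, y, theta = update_pose(0.0, 0.0, 0.0, 0.1, 0.1)
print(x, y, theta)  # 0.1 0.0 0.0
```

Calling update_pose once per encoder read, then publishing the resulting pose as described in the tutorial, is all the navigation stack needs.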