PointCloud2 tf from map frame to sensor frame for octomap
I am simulating my robot with a Kinect in a custom world in Gazebo. I can visualize the point clouds and the RGB and depth camera images in RViz. I now want to feed those point clouds into octomap to build a map. To do that, I need a tf transform from the map frame to the sensor frame (Octomap Info, check slide 20). I don't know whether I should somehow create a tf broadcaster for the map frame, or write a node that listens to the point cloud topic and publishes to ???, or how else to do it. My tf tree starts at base_link and goes down through the robot links to the camera link.
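One thing I have been wondering about (I'm not sure this is the right approach) is whether I could just publish a static transform between map and base_link while the robot is stationary, using tf's static_transform_publisher. A minimal sketch, assuming an identity transform and that my tree root really is base_link:

```xml
<launch>
  <!-- Hypothetical: publish a fixed identity transform from map to base_link.
       args: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
  <node pkg="tf" type="static_transform_publisher" name="map_to_base_link"
        args="0 0 0 0 0 0 map base_link 100" />
</launch>
```

As I understand it, once map is connected to base_link, tf can chain the existing base_link → camera link transforms to resolve map → sensor frame. But I don't know if this is valid once the robot starts moving, since then the map → base_link transform should presumably come from odometry or localization instead.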
I have been going through the tf tutorials but don't really understand how to apply them here. The next step is to start the octomap_server node with the following launch file:
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="resolution" value="0.05" />
    <!-- fixed map frame (set to 'map' if SLAM or localization running!) default 'odom_combined' -->
    <param name="frame_id" type="string" value="odom_combined" />
    <!-- maximum range to integrate (speedup!) -->
    <param name="sensor_model/max_range" value="5.0" />
    <!-- data source to integrate (PointCloud2) -->
    <remap from="cloud_in" to="/depth_cam/openni/depth/points" />
  </node>
</launch>
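Related to this, I am guessing that once something is publishing the map frame, I would also change the frame_id parameter above, since I don't have anything publishing odom_combined. A sketch of the change I have in mind (assuming a map frame exists in my tf tree):

```xml
<!-- Hypothetical: integrate clouds in the map frame instead of odom_combined,
     assuming a map frame is actually being published -->
<param name="frame_id" type="string" value="map" />
```

Is that the right way to wire this up, or am I misunderstanding how octomap_server resolves the sensor frame?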
Many thanks.