# 3D mapping using Octomap with Turtlebot

Hi (ROS Electric; Ubuntu 11.04). This is in continuation of my previous question. I am trying to build a 3D occupancy map using `octomap_server` with the Kinect sensor on top of the TurtleBot. These are the launch files I ran:

```
roslaunch turtlebot_bringup minimal.launch
```


The TurtleBot navigation package launches `/turtlebot_bringup/kinect.launch`, which runs the cloud throttling and the fake laser scan used for 2D mapping and localization.
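For reference, the cloud throttling that kinect.launch performs can be sketched with `topic_tools` (the topic names here are assumptions; check `rostopic list` on your robot):

```xml
<launch>
  <!-- Throttle the full-rate Kinect cloud down to ~2 Hz for downstream
       consumers. Input/output topic names are assumptions. -->
  <node pkg="topic_tools" type="throttle" name="cloud_throttle"
        args="messages /camera/depth/points 2.0 /camera/depth/points_throttle" />
</launch>
```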

Now I try to run:

```
roslaunch octomap_server octomap_mapping.launch
```


I have modified `cloud_in` and `frame_id` as shown below:

```xml
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="resolution" value="0.05" />

    <!-- fixed map frame (set to 'map' if SLAM or localization running!) -->
    <param name="frame_id" type="string" value="/map" />

    <!-- maximum range to integrate (speedup!) -->
    <param name="max_sensor_range" value="3.0" />

    <!-- data source to integrate (PointCloud2) -->
    <remap from="cloud_in" to="/camera/depth/points" />

  </node>
</launch>
```


But the `occupied_cells_vis_array` topic publishes the MarkerArray in `camera_depth_frame`, which is orthogonal to the actual Kinect frame in which the fake laser is published. Hence I changed the frame using:

```xml
<param name="frame_id" type="string" value="/camera_depth_optical_frame" />
```


to match the frame of the actual Kinect. But this leads to the map being built orthogonal to the Kinect frame. The resulting image is attached.

I am sure that gmapping internally converts the Kinect's `camera_depth_optical_frame` to `camera_depth_frame` and creates the map correctly. But when `octomap_server` is run, this map is reframed again. Please help me resolve this issue.
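For background: the optical frame follows the ROS optical convention (z forward, x right, y down), while `camera_depth_frame` follows the body convention (x forward, y left, z up), which is why the two look "orthogonal" to each other. The OpenNI driver normally publishes this fixed rotation already; a sketch of the equivalent static transform, assuming the standard convention rather than anything read from this particular robot's TF tree:

```xml
<launch>
  <!-- Rotate camera_depth_frame (x forward) into camera_depth_optical_frame
       (z forward): args are x y z yaw pitch roll, so yaw = -pi/2, roll = -pi/2.
       Shown only for illustration; openni.launch usually publishes this. -->
  <node pkg="tf" type="static_transform_publisher" name="depth_to_optical"
        args="0 0 0 -1.5707963 0 -1.5707963 camera_depth_frame camera_depth_optical_frame 100" />
</launch>
```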



Hi, my colleagues found the solution.

Fixed frame in rviz: /odom

Target frame: /camera_depth_frame

With `frame_id` set to /odom in the octomap_server/octomap_mapping.launch file, we are able to get the 3D mapping done.
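Put together, the working launch file might look like this (a sketch based on the answer above, keeping the question's other parameter values):

```xml
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <param name="resolution" value="0.05" />
    <!-- Build the map in the odometry frame, as described above -->
    <param name="frame_id" type="string" value="/odom" />
    <param name="max_sensor_range" value="3.0" />
    <remap from="cloud_in" to="/camera/depth/points" />
  </node>
</launch>
```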


I am trying the same. I can actually produce quite a good map with gmapping, but when I connect the point cloud to octomap as you do, the octomap has a lot of artefacts from situations where the robot's position inside the map was not correct. Did you have this problem too?

(2013-01-17 03:29:54 -0500)

Yes, I mean the fusing of the maps is not so great, as the 3D occupancy grid is not getting updated with respect to its probabilities, I guess. It took a lot of effort to get a good map with the Kinect sensor. I think you need either good odometry or a good sensor to get a decent map.
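The probability updates can be tuned through octomap_server's sensor model parameters; a sketch (the values are illustrative assumptions, not tested on this robot, and the exact names may vary by octomap_server version):

```xml
<node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
  <!-- Occupancy updates: 'hit' raises and 'miss' lowers a voxel's log-odds;
       'min'/'max' clamp it so stale voxels can still be cleared later. -->
  <param name="sensor_model/hit"  value="0.7" />
  <param name="sensor_model/miss" value="0.4" />
  <param name="sensor_model/min"  value="0.12" />
  <param name="sensor_model/max"  value="0.97" />
</node>
```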

(2013-01-19 07:59:36 -0500)

Try adding a Hokuyo laser to your robot for 2D. It will give you much better pose estimates in amcl or gmapping.
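A minimal sketch of bringing up such a laser with the hokuyo_node driver (the device path and frame name are assumptions for your setup):

```xml
<launch>
  <!-- Hokuyo driver; /dev/ttyACM0 and the 'laser' frame are assumptions -->
  <node pkg="hokuyo_node" type="hokuyo_node" name="hokuyo">
    <param name="port" value="/dev/ttyACM0" />
    <param name="frame_id" value="laser" />
  </node>
</launch>
```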

(2013-01-20 22:27:52 -0500)

Hi guys, I also got octomap working. As pointed out above, the 3D map is not updated; the image (with the wrong location) stays in the map and is never erased. I am wondering how you solved this problem?

(2013-07-10 14:24:59 -0500)

Hi, I cannot get octomap working using the proposed solution above. There are no issues with Kinect connectivity; the camera is working as required. I just cannot see the visualisation of the point cloud as shown in the example above. Could anyone help?

(2013-07-16 00:26:48 -0500)

I was able to produce the 3D map, but the density of the point cloud is very low; the points are very widely spaced. What might be the reason, and how do I change this parameter? Can you help?

(2014-02-04 17:29:27 -0500)