No, you should have at least transforms from /base_link to the sensor frames. For example, look at rgbd_launch/launch/kinect_frames.launch to see how the Kinect's frames are configured. Then, to link them to /base_link:
$ rosrun tf static_transform_publisher 0 0 0.2 0 0 0 base_link camera_link 100
$ rosrun tf static_transform_publisher 0 0 0.15 0 0 0 base_link base_scan 100
Here the camera is 20 cm above /base_link and the laser is 15 cm above /base_link.
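Note that tf's static_transform_publisher takes the rotation arguments in the order yaw pitch roll, not roll pitch yaw. As a sanity check, here is a minimal standalone sketch of the standard fixed-axis roll/pitch/yaw-to-quaternion conversion that tf performs internally (pure Python, no ROS required; the function name is just for illustration):

```python
import math

def rpy_to_quaternion(roll, pitch, yaw):
    """Convert fixed-axis roll/pitch/yaw (radians) to a quaternion (x, y, z, w).

    Same convention as tf.transformations.quaternion_from_euler(roll, pitch, yaw).
    """
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    )

# The static transforms above use yaw = pitch = roll = 0,
# i.e. the identity rotation:
print(rpy_to_quaternion(0, 0, 0))  # (0.0, 0.0, 0.0, 1.0)
```

If your sensors are mounted at an angle, pass the angles in radians and remember the command-line order is yaw pitch roll.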
EDIT
The rgbd_slam dataset launch file assumes some TF frames are already defined in the rosbag, and uses a hacky solution to show both the ground truth (/world -> /kinect) and the estimated trajectory in RViz at the same time. Normally, if you have the standard topics recorded in your rosbag, you can follow this tutorial. Assuming you don't have any TF in the bag (only /camera/rgb/image_color, /camera/depth_registered/image_raw and /camera/rgb/camera_info):
$ roscore
$ rosparam set use_sim_time true
$ roslaunch rgbd_launch kinect_frames.launch
$ roslaunch rtabmap_ros rtabmap.launch
$ rosbag play --clock recorded.bag
Here kinect_frames.launch will publish the transforms from /camera_link to the RGB and depth camera optical frames. By default rtabmap.launch uses camera_link as the fixed frame, so there is nothing to change.
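If you replay the bag often, the commands above can be wrapped in a single launch file. A minimal sketch (the file name and the bag argument are hypothetical; this assumes rtabmap.launch works with its default arguments for your topics):

```xml
<launch>
  <arg name="bag" default="recorded.bag"/>

  <!-- Replay in simulated time so timestamps match the bag -->
  <param name="use_sim_time" value="true"/>

  <!-- Publish camera_link -> rgb/depth optical frame transforms -->
  <include file="$(find rgbd_launch)/launch/kinect_frames.launch"/>

  <!-- Start rtabmap (camera_link is already its default fixed frame) -->
  <include file="$(find rtabmap_ros)/launch/rtabmap.launch"/>

  <!-- Play the bag with --clock so /clock is published -->
  <node pkg="rosbag" type="play" name="rosbag_play"
        args="--clock $(arg bag)"/>
</launch>
```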
cheers