RGBDSLAM with stereo & IMU data
I would like to use RGBDSLAM with stereo cameras and an IMU for odometry. I have been following these questions:
http://answers.ros.org/question/42947/how-to-combine-robot-odometry-with-rgbd-slam-algorithm-for-mapping/
http://answers.ros.org/question/41266/rgbd-slam-and-stereo/
http://answers.ros.org/question/33175/replacing-gmapping-with-rgbdslam/
I have set up my launch file so that the left camera image (from stereo_image_proc) is the "wide_topic" and points2 is the "wide_cloud_topic".
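For reference, the relevant part of my launch file looks roughly like this (a sketch from memory; the /stereo namespace and exact topic paths are just how my setup happens to be arranged, so treat them as assumptions):

```xml
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <!-- Rectified left image published by stereo_image_proc (namespace is my setup) -->
    <param name="config/wide_topic"       value="/stereo/left/image_rect"/>
    <!-- Point cloud published by stereo_image_proc -->
    <param name="config/wide_cloud_topic" value="/stereo/points2"/>
  </node>
</launch>
```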
Also, since I am not using the openni_driver, I have the following configuration:
I have played around with these parameters, but no matter what I do, I receive warnings from rgbdslam caused by failed transform lookups, e.g.:
Lookup would require extrapolation into the past. Requested time 1347501971.980839144 but the earliest data is at time 1348181126.395687059, when looking up transform from frame [/camera_optical_frame] to frame [/base_link]
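For what it's worth, the two stamps in that warning are nowhere near each other, which makes me suspect a clock mismatch (e.g. playing back recorded data without use_sim_time) rather than a simple buffering delay. A quick check of the gap:

```python
# Timestamps copied from the tf warning above (seconds since the epoch).
requested = 1347501971.980839144  # time rgbdslam asked for
earliest  = 1348181126.395687059  # earliest transform tf has buffered

gap = earliest - requested
print(f"requested stamp is {gap:.0f} s (~{gap / 86400:.1f} days) older than any tf data")
# → requested stamp is 679154 s (~7.9 days) older than any tf data
```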
This is always followed by the warning:
Using Standard kinect /openni_camera -> /openni_rgb_optical_frame as transformation.
The map that is generated is obviously poor, and I believe this fallback transformation is the cause.
I am new to the concept of tf. I have gone through the tutorials but still feel a bit shaky on how it works. I have built a robot model with the following tf tree:
base_link
└── chassis
    ├── left_camera
    │   └── left_camera_lens
    │       └── camera_optical_frame
    └── right_camera
        └── right_camera_lens
I have not yet begun to integrate robot_pose_ekf, because I do not feel I have this part of the process down.
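When I do get there, I assume the robot_pose_ekf node would be configured something like this, with the visual odometry feeding the vo input and the IMU enabled (the parameter values are guesses on my part, not a tested setup):

```xml
<launch>
  <node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
    <param name="output_frame"   value="odom_combined"/>
    <param name="freq"           value="30.0"/>
    <param name="sensor_timeout" value="1.0"/>
    <param name="odom_used" value="false"/>  <!-- no wheel odometry on my robot -->
    <param name="imu_used"  value="true"/>   <!-- IMU data on the imu_data topic -->
    <param name="vo_used"   value="true"/>   <!-- visual odometry on the vo topic -->
  </node>
</launch>
```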
Can someone explain how the tf tree should look? How is the visual odometry broadcast by rgbdslam? Can this information be integrated into robot_pose_ekf, and can IMU odometry be taken into account by rgbdslam?
Any pointers, links to other pages or code would be helpful.
Thanks for your time.