Robotics StackExchange | Archived questions

Transformation between odom and camera_link

Hello there,

I'm a beginner with ROS and I'm trying to use a TurtleBot3 with a Kinect camera for SLAM. I replaced the lidar on top of the bot with a Kinect v1 camera. I have already installed all the nodes required to publish RGB and depth images, and I have also installed a node implementing SLAM, ORB-SLAM2.

The SLAM system produces a point cloud and a pose. The problem now is to link the odometry produced by the TurtleBot with the SLAM system.

First of all I defined a .urdf file to describe the robot and its current configuration. The result can be viewed in RViz and looks great. The URDF file I used can be found here.

[Image: robot model in RViz]

The next picture is my tf tree, and this is where I assume the mistake lies. ORB-SLAM2 publishes a pose in the map frame, but the transform between map and odom is not possible, so there is no link between the robot and the camera. The result of this problem can be seen in Cartographer in this picture: the moment data is published by ORB-SLAM2, the camera is in the wrong position and moves independently of the robot.

How can I solve this problem? I found this tool; do I have to perform the transformation manually?
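For example, just to get a connected tree while testing, would a minimal static broadcaster like this be the right direction? It only publishes an identity map -> odom placeholder (the node name is my own choice, and a real SLAM correction between map and odom would of course not be static):

    #!/usr/bin/env python
    # Placeholder only: publishes an identity map -> odom transform so the tf tree is connected.
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('map_to_odom_placeholder')  # node name is an assumption

    broadcaster = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'map'
    t.child_frame_id = 'odom'
    t.transform.rotation.w = 1.0  # identity rotation, zero translation
    broadcaster.sendTransform(t)

    rospy.spin()

I guess the same could be done from the command line with the tf2_ros static_transform_publisher; either way RViz should then find a path from map down to camera_link through the URDF chain.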

Additional: my architecture is rather complex. On the bot I'm running ROS 1 Melodic because of the Kinect drivers. The remote system is running ROS 2 Dashing with a ros1_bridge. This part is running fine so far.

UPDATE 1

I fixed the first issue: I made a mistake publishing the robot model. The tree looks a lot better now, thanks @Dragonslayer for responding.

Here is a current picture of the tree. The remaining problem is the missing transform between this tree and map. How could this be done?
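From what I understand so far, the usual pattern would be to compute map -> odom from the SLAM pose, i.e. map -> odom = (map -> base_link) * (odom -> base_link)^-1, and broadcast that. Here is a rough sketch of what I have in mind; the /orb_slam2/pose topic name and the assumption that the pose is already expressed as base_link in map are mine, not confirmed:

    #!/usr/bin/env python
    # Sketch: turn a SLAM pose into the map -> odom transform expected by the tf tree.
    import rospy
    import tf2_ros
    import tf.transformations as tft
    import numpy as np
    from geometry_msgs.msg import PoseStamped, TransformStamped

    def to_matrix(p, q):
        # Build a 4x4 homogeneous matrix from a translation and a quaternion.
        m = tft.quaternion_matrix([q.x, q.y, q.z, q.w])
        m[0:3, 3] = [p.x, p.y, p.z]
        return m

    class MapToOdomPublisher(object):
        def __init__(self):
            self.buffer = tf2_ros.Buffer()
            self.listener = tf2_ros.TransformListener(self.buffer)
            self.broadcaster = tf2_ros.TransformBroadcaster()
            # Topic name and frame convention are assumptions.
            rospy.Subscriber('/orb_slam2/pose', PoseStamped, self.pose_cb)

        def pose_cb(self, msg):
            try:
                # Latest available odom -> base_link from wheel odometry.
                odom_to_base = self.buffer.lookup_transform('odom', 'base_link', rospy.Time(0))
            except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                    tf2_ros.ExtrapolationException):
                return
            T_map_base = to_matrix(msg.pose.position, msg.pose.orientation)
            T_odom_base = to_matrix(odom_to_base.transform.translation,
                                    odom_to_base.transform.rotation)
            # map -> odom = (map -> base_link) * (odom -> base_link)^-1
            T_map_odom = np.dot(T_map_base, np.linalg.inv(T_odom_base))
            q = tft.quaternion_from_matrix(T_map_odom)
            out = TransformStamped()
            out.header.stamp = msg.header.stamp
            out.header.frame_id = 'map'
            out.child_frame_id = 'odom'
            out.transform.translation.x = T_map_odom[0, 3]
            out.transform.translation.y = T_map_odom[1, 3]
            out.transform.translation.z = T_map_odom[2, 3]
            out.transform.rotation.x = q[0]
            out.transform.rotation.y = q[1]
            out.transform.rotation.z = q[2]
            out.transform.rotation.w = q[3]
            self.broadcaster.sendTransform(out)

    if __name__ == '__main__':
        rospy.init_node('slam_pose_to_map_odom')
        MapToOdomPublisher()
        rospy.spin()

Since ORB-SLAM2 actually tracks the camera rather than base_link, the camera offset from the URDF would probably still have to be folded in, so I'm not sure this is the intended way.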

Asked by Chris91 on 2020-05-01 11:32:12 UTC

Comments

Your URDF seems a little "special" to me, especially as the tf tree doesn't show a lot of the links. I define links before I define joints; I don't know if this makes a difference, but it doesn't make the URDF easier to read. It seems the tf tree is missing all links/joints that are written in this upside-down way.

Asked by Dragonslayer on 2020-05-03 10:30:52 UTC

Yeah, that's the basic configuration shipped with TurtleBot3; I just kicked out the laser_scan and added the camera_link.

Asked by Chris91 on 2020-05-03 11:01:16 UTC

With the laser_scan it all worked? Which nodes run on which computer (ROS 1 vs ROS 2)? Are both systems publishing tf2, or might Melodic be using tf? As in, could one system be building a tf1 tree and the other a tf2 tree?

Asked by Dragonslayer on 2020-05-04 10:12:19 UTC

I updated the issue :) Thanks Dragonslayer for your help so far.

Asked by Chris91 on 2020-05-04 11:36:44 UTC

You're welcome. ORB-SLAM2 is now publishing a transform from base_link to camera, but it should be from map to camera; I would also try publishing to odom and then maybe tf takes care of transforming up the tree to camera_link. ORB-SLAM2 takes in two frame_ids; did you change them?

Asked by Dragonslayer on 2020-05-04 13:05:42 UTC

I will take a closer look this evening, but I think ORB-SLAM2 is not publishing a transform on purpose. I took a look at the code of ORB-SLAM2; it seems it only publishes on camera_link, ignoring all other messages on /tf.

Asked by Chris91 on 2020-05-05 03:57:50 UTC

I read through the orb_slam2 package description and I'm not sure if this is all actually meant to work. ORB-SLAM2 doesn't seem to take in odometry, so I'm not confident that it "knows" how to handle the odom frame at all; it seems to be made more for handheld applications. tf has limitations, and I think it must transform from the bottom up: map -> odom -> base_link etc. In this case it seems odom would be corrected down from camera_link, but that would then also transform map. Did you ever consider rtabmap for the task? EDIT: Everything I saw with orb_slam was actually in camera_frame. The only thing that would "work" seems to be map -> camera_link -> base_link etc. How the navigation stack will handle this I don't know, but I think it will be more of a problem than just going with another SLAM package, in my opinion. All the best. Matt

Asked by Dragonslayer on 2020-05-05 09:31:32 UTC

Hey, thank you, this node looks much better than the one I'm using. I will try it out and give feedback on whether it works better than ORB-SLAM2. :)

Asked by Chris91 on 2020-05-06 03:13:36 UTC

Good link to start with rtabmap: link text. It's less of a plug-and-play package than ORB-SLAM2 seems to be, but there is a lot of information out there. Looking for a launch file dedicated to your RGB-D camera on GitHub might be a good start for parameter tuning. It's made for exactly your purpose. Looking forward to your feedback.

Asked by Dragonslayer on 2020-05-06 09:31:33 UTC

By the way, did you know there is a pointcloud_to_laserscan node (link text)? Using this node you might get the TurtleBot up and running with the laser-scan-based navigation that is already installed. Your post doesn't make it clear whether you actually want SLAM from a 3D source or you just don't want to buy a lidar. There is also a depthimage_to_laserscan node (link text) if that might come in handy on your setup. All the best.

Asked by Dragonslayer on 2020-05-06 09:37:00 UTC

Got it working with RTAB-Map. I will try optimizing it to fit my problem better. I also found a better approach for Cartographer by feeding the point cloud from the freenect driver directly into Cartographer; the freenect driver publishes a point cloud on /camera/depth_registered/points. Thanks again @Dragonslayer for your help.

Asked by Chris91 on 2020-05-12 06:50:55 UTC

You are welcome, glad I could help, all the best for the rest of your project.

Asked by Dragonslayer on 2020-05-15 14:40:34 UTC

Answers