Robot_localization + IMU + Camera Pose

asked 2015-09-03 03:52:28 -0600 by Juliuss, updated 2015-09-04 09:41:14 -0600

Hi all,

I'm trying to use this package with these two sensors: an IMU and a camera that gives back its pose with respect to a marker, using the ar_sys package.

In my opinion, the first problem is with the frames.

I have map_frame (fixed) -> marker_frame (fixed) and then camera_frame -> imu_frame.

Is it right that the map_frame is my map_frame, the odom_frame is my marker_frame, and the base_link is my imu_frame (which is at the centre of the robot)?

If I understand correctly, the transform between marker and camera is provided by robot_localization. But if I don't create the odom(marker)->base_link(imu) transform, the EKF returns an error (Could not obtain transform from marker1->imu).

The solution probably is in this sentence:

MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another instance of robot_localization! However, that instance should not fuse the global data.

But how is it possible that base_link has two parents?

Here is the launch file (fcu is my IMU frame):

<launch>
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true">
  <param name="frequency" value="25"/>
  <param name="sensor_timeout" value="0.1"/>
  <param name="two_d_mode" value="false"/>

  <param name="map_frame" value="map"/>
  <param name="odom_frame" value="marker"/>
  <param name="base_link_frame" value="fcu"/>
  <param name="world_frame" value="map"/>

  <param name="transform_time_offset" value="0.0"/>
  <param name="pose0" value="/poseCov"/>
  <param name="imu0" value="/mavros/imu/data"/>

  <rosparam param="pose0_config">[true,  true,  true,
                                  false, false, true,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>
  <rosparam param="imu0_config">[false, false, false,
                                 true,  true,  true,
                                 false, false, false,
                                 true,  true,  true,
                                 true,  true,  true]</rosparam>
  <param name="pose0_differential" value="false"/>
  <param name="imu0_differential" value="true"/>
  <param name="pose0_relative" value="false"/>
  <param name="imu0_relative" value="true"/>
  <param name="imu0_remove_gravitational_acceleration" value="true"/>
  <param name="print_diagnostics" value="true"/>
</node>

</launch>

Thx


Comments

+1 for this as I'm trying to set up the same configuration. FYI, you'll need to change the source to publish a PoseWithCovarianceStamped, and not a PoseStamped, as input for robot_localization. I'm not too sure what the covariances should be, but I'm in the process of trying to model the values dynamically.

l0g1x (2015-09-03 21:06:12 -0600)

Hi, I use a node that converts PoseStamped into PoseWithCovarianceStamped. I'm trying to estimate the covariance, and then I will keep that matrix constant.
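
A minimal sketch of such a converter, assuming the ar_sys pose arrives on /ar_single_board/pose (a placeholder topic name) and the EKF subscribes to /poseCov as in the launch file above; the covariance values are purely illustrative and would need tuning for a real camera/marker setup:

#!/usr/bin/env python
# Relay a PoseStamped into a PoseWithCovarianceStamped with a fixed covariance.
import rospy
from geometry_msgs.msg import PoseStamped, PoseWithCovarianceStamped

# Example diagonal covariance (x, y, z, roll, pitch, yaw); tune for your setup.
COVARIANCE = [0.05, 0, 0, 0, 0, 0,
              0, 0.05, 0, 0, 0, 0,
              0, 0, 0.05, 0, 0, 0,
              0, 0, 0, 0.1, 0, 0,
              0, 0, 0, 0, 0.1, 0,
              0, 0, 0, 0, 0, 0.1]

def callback(msg):
    out = PoseWithCovarianceStamped()
    out.header = msg.header
    out.pose.pose = msg.pose
    out.pose.covariance = COVARIANCE
    pub.publish(out)

if __name__ == '__main__':
    rospy.init_node('pose_to_pose_with_covariance')
    pub = rospy.Publisher('/poseCov', PoseWithCovarianceStamped, queue_size=10)
    rospy.Subscriber('/ar_single_board/pose', PoseStamped, callback)
    rospy.spin()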

Juliuss (2015-09-04 05:08:40 -0600)

I'll look into this soon. I don't know anything about ar_sys, so I'll have to investigate. Looking at the configuration, I'd say you need something generating a marker->fcu transform.

Tom Moore (2015-09-04 08:04:36 -0600)

Without using the Kalman filter, I can create the transform between camera and marker using the PoseStamped from ar_sys. So I have map->marker->camera->fcu (fcu is my IMU frame, I forgot to say it). But if I create that transform, how can robot_localization overwrite it?

Juliuss (2015-09-04 09:39:59 -0600)

Any update on this? I've tried a different solution where I just use the ar_sys pose as a pose input to robot_localization, but the pose data shows Z increasing when my robot gets farther away along the world x-axis. I'm not sure what to do at this point.

l0g1x (2015-09-09 20:32:05 -0600)

Just going to slip this reference in here in case you or someone else decides to go with ekf_localization_node as @tmoore's suggestion for Node 1, where you'll probably have the IMU as the only input to the EKF node.

l0g1x (2015-09-11 11:08:15 -0600)

2 Answers


answered 2015-09-10 19:10:26 -0600 by Tom Moore

MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another instance of robot_localization! However, that instance should not fuse the global data.

But how is it possible that base_link has two parents?

It doesn't have two parents in the tf tree. Read this and this (read the section called "Frame Authorities"). Here's how it works:

  • Node 1 (can be ekf_localization_node or something else, e.g. your robot's controller): produces odom->base_link transform, and publishes it to tf.
  • Node 2 (can be ekf_localization_node or something else, e.g., amcl): produces map->base_link transform, but does not publish it. Instead, it uses the inverse of the odom->base_link transform, along with its own map->base_link transform, to compute the map->odom transform, and then it publishes that.

In this way, you get map->odom->base_link.
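
To make this concrete, here is a minimal sketch of that two-node layout using two ekf_localization_node instances, with the frame and topic names taken from this thread; the sensor _config, covariance, and differential settings are omitted and would still need to be filled in as in the question's launch file:

<launch>
  <!-- Node 1: fuses only the IMU and publishes the odom->fcu transform. -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_local" clear_params="true">
    <param name="map_frame" value="map"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="fcu"/>
    <param name="world_frame" value="odom"/>
    <param name="imu0" value="/mavros/imu/data"/>
    <!-- imu0_config etc. go here, as in the launch file above -->
  </node>

  <!-- Node 2: additionally fuses the ar_sys pose, computes map->fcu,
       and publishes it as the map->odom transform. -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_global" clear_params="true">
    <param name="map_frame" value="map"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="fcu"/>
    <param name="world_frame" value="map"/>
    <param name="imu0" value="/mavros/imu/data"/>
    <param name="pose0" value="/poseCov"/>
    <!-- imu0_config, pose0_config, etc. go here -->
  </node>
</launch>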

Note, however, that you (a) don't have to use these frame_ids, and (b) you can define and publish all sorts of transforms from any of these, so long as you avoid re-parenting. For your IMU, you want to use static_transform_publisher (I'd use the tf2_ros version) and publish a transform from base_link->IMU. That transform should define how your IMU is mounted on your robot.
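
For example, a one-line sketch with the tf2_ros static_transform_publisher; the zero offsets and the base_link / imu_link frame names are placeholders for your real mounting pose and frame names (in this thread fcu itself is used as the base_link_frame, so this only applies if the IMU is not already your base frame):

<node pkg="tf2_ros" type="static_transform_publisher" name="base_to_imu"
      args="0 0 0 0 0 0 base_link imu_link"/>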

For ar_sys, it really depends on the coordinate frame in which the pose data is reported. As long as it adheres to ROS standards and is reported in a world-fixed frame (not attached to the robot), you can either (a) make your world_frame in ekf_localization_node the same as the frame_id in the ar_sys pose message, or (b) create a static transform from the world_frame in your ekf_localization_node config to the frame_id in your ar_sys message.
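
Option (b) might look like the following sketch, where the identity transform is a placeholder for the marker's actual pose in the map frame and "marker" is assumed to be the frame_id in the ar_sys message:

<node pkg="tf2_ros" type="static_transform_publisher" name="map_to_marker"
      args="0 0 0 0 0 0 map marker"/>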


Comments

Thanks, I'll try these suggestions out later tonight or tomorrow and get back to you with the results.

l0g1x (2015-09-10 19:45:47 -0600)

Looks like ar_sys publishes the /pose msg with the same frame_id as the image it's analyzing in the callback. Will this be a problem when trying to create a static transform from the EKF world_frame to this ar_sys frame_id if the camera publishing the image is a child of base_link?

l0g1x (2015-09-11 12:40:44 -0600)

Maybe this line from the ar_sys documentation points to what's wrong: it listens to the /camera_info and /image topics, detects the marker board set by its parameters, and then publishes the from-the-camera transform of the detected board. So I believe it assumes the marker moves, not the camera.

l0g1x (2015-09-11 12:53:07 -0600)

If the pose message is in the camera's frame_id, and that frame_id is a child of base_link, yes, that's a problem. You'll need to write a node that takes the transform from base_link to camera, applies it to the pose, and then re-issues the pose data with the world_frame as its frame_id.
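
A rough sketch of such a node, using the old tf Python API; the topic names, the base_link / map frame names, and the assumption that a straight transformPose into base_link followed by re-stamping is sufficient are all placeholders that may need adjusting (e.g. inverting) for what ar_sys actually reports:

#!/usr/bin/env python
# Re-express the ar_sys pose (stamped in the camera frame) relative to
# base_link, then republish it with the EKF world frame as its frame_id.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

def callback(msg):
    try:
        # Apply the static base_link->camera transform to the pose.
        pose_in_base = listener.transformPose('base_link', msg)
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        return
    # Re-issue the pose with the world frame as its frame_id.
    pose_in_base.header.frame_id = 'map'
    pub.publish(pose_in_base)

if __name__ == '__main__':
    rospy.init_node('ar_sys_pose_rebase')
    listener = tf.TransformListener()
    pub = rospy.Publisher('/ar_pose_in_map', PoseStamped, queue_size=10)
    rospy.Subscriber('/ar_single_board/pose', PoseStamped, callback)
    rospy.spin()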

Tom Moore (2015-09-11 13:05:29 -0600)

Okay, will try this now.

l0g1x (2015-09-11 13:21:51 -0600)

I think this works! So far I have only applied the translational component of the offset to the ar_pose msg, and I am stuck on converting between types and applying the rotational part of the tf. I'll post an answer with my setup later today. FYI, my map frame (marker) does discretely jump a bit though.

l0g1x (2015-09-12 13:11:00 -0600)

Sorry, what do you mean by the frame jumps? Does the pose produced by ar_sys jump? Does the transformed pose jump?

Tom Moore (2015-09-13 08:18:38 -0600)

Sorry for the vagueness; the pose produced by ar_sys is subject to discrete jumps. I applied the base_link -> camera static transform to the [new] pose msg like you told me to (only the translation). I don't publish any transforms, and let Node 2 take care of that lookup. I'll try to post a bag later tonight.

l0g1x (2015-09-13 14:27:23 -0600)

answered 2015-09-14 10:47:58 -0600 by l0g1x, updated 2015-12-15 20:58:15 -0600

Going off of the solution that @tmoore gave, I wanted to share my entire setup, which is too big to fit in a comment. FYI, the code changes I made are most likely not the proper way of doing things, but the launch file configurations are good.

Here is a pastebin link to my launch file for Node 1

And another link to my launch file for Node 2

I'm first trying to get this to work in a Gazebo simulation before trying it on my actual robot. There are a few things I changed that are worth noting:

  • I changed the ar_sys source to publish a geometry_msgs/PoseWithCovarianceStamped instead of a geometry_msgs/Pose.
  • I changed the ar_sys source to not broadcast the image frame (camera) -> marker transform, but instead look up the static transform between base_link and the camera frame on my robot and apply it to the pose estimate that ar_sys gives, as @tmoore suggested in one of his recent comments. This way the ar_sys pose data is now published relative to the base frame of the robot, and ar_sys does not publish any transform at all. You can reference @tmoore's description for Node 2.

EDIT:

My robot's static transforms are defined like this (I tried using tf2, but I couldn't get the proper rotation of the camera's optical frame unless I used the original tf static publisher).

I no longer have a blackfly_mount_link -> blackfly_optical_link static transform, as I instead created a static transform map -> board_marker with only a rotation, to put the ar_sys pose data in the orientation of the robot's base frame.

I only changed the second EKF launch file, which you can view here.

And you can view the code I use to apply the base_link->blackfly_mount_link transform to the ar_sys pose data here.

I deleted the old pictures as they were incorrect. Here are some new ones:

Rviz

Gazebo

TF Tree

EDIT 2:

Now that I have the two EKF nodes running and all my tf configurations correct, my "map" frame is the fusion of the EKF1 output (IMU only) and the ar_sys PoseWithCovarianceStamped. If I view the data in rviz with Fixed Frame set to "map", I see the odometry move mostly correctly (if need be I can explain why I say mostly, as it has to do with the code modifications I made to ar_sys): if I move the robot in the +x direction, my EKF2 output is almost dead on, and the orientation is as well (I only have orientation data coming from the IMU and not ar_sys). However, if I view the data in rviz with the Fixed Frame set to "odom", I do not see any translational motion, only the orientation changing (the tail of the vector always remains at the origin). I've been trying to figure this out for a while now, but this is where I am stuck, as I need translational ...


Comments

Thanks. Only one more question: in the ar_sys code the pose is the position of the marker in the camera frame (if we move the marker it is right), but with a fixed marker the translation and the rotation along the x axis are wrong. Did you also modify the code to resolve this problem?

Juliuss (2015-09-15 04:51:29 -0600)

So the translation that I added was only so that I could see which direction the axis is pointing in rviz, so assume there is only a rotation. I modified the code directly in ar_sys to look up the fixed static rotation tf and then applied that tf::Transform to the ar_pose data.

l0g1x (2015-09-15 13:16:33 -0600)

This makes the ar_pose vector component (+Z), representing the perpendicular distance between the camera and the marker plane, become a +/-X translation with respect to the robot's base frame. This then states how far base_link has moved away from the marker along the X axis.

l0g1x (2015-09-15 13:20:57 -0600)
