
robot_localization with mocap data as alternative to GPS/NavSatFix?

asked 2016-03-02 13:27:28 -0500

codenotes

If one has a mocap system that is giving reasonable x, y positioning coordinates, but one still wants/needs to fuse that with an IMU via robot_localization, it seems I would be forced to turn my X, Y coordinates from the mocap into NavSatFix messages (i.e., GPS coordinates) so that they can be fed to robot_localization. I could do this via UTM, but this seems really inefficient.

(While one might think the mocap itself would be enough to localize, some mocap systems still jitter somewhat and can produce large outlying readings, particularly when the robot gets close to the boundaries of the system. Fusing the mocap data with IMU data would be an ideal solution.)

I'd like to be able to feed robot_localization these mocap X, Y coordinates in lieu of NavSatFix messages.

Is there a recommended approach for this?

Best, Gregory


1 Answer


answered 2016-03-03 19:43:53 -0500

Tom Moore

updated 2016-03-04 09:49:07 -0500

The state estimation nodes in r_l don't care about GPS data; the tutorials just use GPS data as a means of showing how to work with two state estimation node instances (one each for the map and odom frames) and provide insight into how to use navsat_transform_node.

For your case, all you need to do is make sure that your IMU data and mocap data have the same coordinate frame, or that a transform exists between the two. The output of your mocap system just needs to be one of the supported message types for the state estimation nodes.
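For example, if the mocap poses are reported in a frame with a different name than your EKF's world frame, a static transform between them can be published from a launch file. This is only a sketch: the frame names `mocap` and `odom` and the identity offset are assumptions; substitute your actual frame_id and measured offset.

```xml
<!-- Sketch only: frame names and the zero offset are assumptions.
     args: x y z yaw pitch roll parent_frame child_frame period_ms -->
<node pkg="tf" type="static_transform_publisher" name="mocap_to_odom_tf"
      args="0 0 0 0 0 0 odom mocap 100"/>
```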

EDIT in response to comment

The EKF and UKF state estimation nodes in r_l are designed to fuse pose, velocity, and linear acceleration data from various sensor sources into a pose estimate. A typical use case involves fusing wheel encoder odometry data with IMU data. You can also fuse data from motion capture systems or localization packages. The coordinate frames in which said data should be reported are given in REP-105. You can change the names of those frames via the state estimation node parameters, but the principles remain the same. Additionally, you can define transforms from any coordinate frame into one of those principal frames if you need to.
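As a sketch of that typical wheel-odometry-plus-IMU case (topic names here are placeholders, and the exact set of fused variables depends on your sensors), an ekf_localization_node instance might be configured like this. The `_config` vectors follow robot_localization's order: [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az].

```xml
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se">
  <param name="frequency" value="30"/>
  <param name="two_d_mode" value="true"/>

  <!-- Wheel odometry (placeholder topic): fuse X/Y velocity and yaw velocity -->
  <param name="odom0" value="wheel_odometry/topic"/>
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <!-- IMU (placeholder topic): fuse yaw, yaw velocity, and X acceleration -->
  <param name="imu0" value="imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 true,  false, false]</rosparam>
</node>
```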

A separate node exists in r_l that converts actual latitude and longitude coordinates into the robot's world frame (i.e., one of the world frames in REP-105) so that it can be fused. This is navsat_transform_node. It is meant to work with devices that produce a NavSatFix message that contains actual latitude and longitude coordinates. This node is not required for operation of the state estimation nodes, and only serves to allow users to work with GPS devices. Furthermore, GPS is by no means required for the state estimation nodes.

In your case, your mocap system is producing X, Y coordinates. The message it generates likely has a header with a frame_id. So, assuming your mocap system generates a PoseWithCovarianceStamped message (note that it can also be an Odometry message), you can just do this:

<param name="pose0" value="your/mocap_data/topic"/>

<rosparam param="pose0_config">[true,  true,  false,  <!-- X and Y will be fused -->
                                false, false, false,
                                false, false, false,
                                false, false, false,
                                false, false, false]</rosparam>
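Putting it together, a full launch sketch for the mocap-plus-IMU setup might look like the following. The frame names, topics, and the choice of fused IMU variables are assumptions you would adapt to your system.

```xml
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se">
  <param name="frequency" value="30"/>
  <param name="two_d_mode" value="true"/>

  <!-- REP-105 frame names; adjust if yours differ -->
  <param name="odom_frame" value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame" value="odom"/>

  <!-- Mocap pose: fuse X and Y only -->
  <param name="pose0" value="your/mocap_data/topic"/>
  <rosparam param="pose0_config">[true,  true,  false,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>

  <!-- IMU (placeholder topic): fuse yaw and yaw velocity alongside -->
  <param name="imu0" value="imu/data"/>
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 false, false, false]</rosparam>
</node>
```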


OK, so navsat_transform_node's job is to take GPS data and turn it into a message the state estimators can consume, right? So I can bypass it then. Makes sense. Now that the mocap data is absolute position information, should I be thinking about ways to make sure it is weighted more heavily than other sensor data?

codenotes (2016-03-04 09:12:39 -0500)

Ah, your comment changed. You can control that weighting using the covariance matrices in the sensor data messages. Kalman filtering is just optimal weighted averaging, so it really just comes down to tuning those covariance matrices.

Tom Moore (2016-03-04 09:49:59 -0500)

Yeah, I did change my comment as I thought more about what you had written and understood it better. My previous comment, to which you responded, had to do with me not quite understanding where GPS fits into state estimation... I thought it would just be another sensor stream, but it isn't.

codenotes (2016-03-04 10:07:33 -0500)


Asked: 2016-03-02 13:27:28 -0500

Seen: 447 times

Last updated: Mar 04 '16