The state estimation nodes in r_l don't care about GPS data; the tutorials just use GPS data as a means of showing how to work with two state estimation node instances (one each for the map and odom frames) and providing insight into how to use navsat_transform_node.

For your case, all you need to do is make sure that your IMU data and mocap data have the same coordinate frame, or that a transform exists between the two. The output of your mocap system just needs to be one of the supported message types for the state estimation nodes.
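If the frames differ, a static transform is enough to relate them. For example (a sketch only; the frame names "mocap" and "base_link", the node name, and the zero offsets are placeholders, not taken from your setup):

<!-- Hypothetical: relate the mocap frame to the robot's base_link frame.
     args are: x y z yaw pitch roll frame_id child_frame_id period_in_ms -->
<node pkg="tf" type="static_transform_publisher" name="mocap_to_base_link"
      args="0 0 0 0 0 0 base_link mocap 100"/>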
EDIT in response to comment
The EKF and UKF state estimation nodes in r_l are designed to fuse pose, velocity, and linear acceleration data from various sensor sources into a single state estimate. A typical use case involves fusing wheel encoder odometry data with IMU data. You can also fuse data from motion capture systems or localization packages. The coordinate frames in which this data should be reported are given in REP-105. You can change the names of those frames via the state estimation node parameters, but the principles remain the same. Additionally, you can define transforms from any coordinate frame into one of those principal frames if you need to.
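As a sketch of that typical case (all topic names below are placeholders, and the particular variables fused are just one reasonable choice, not a prescription):

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se">
  <!-- REP-105 frame names; these are the node's defaults, shown explicitly -->
  <param name="odom_frame" value="odom"/>
  <param name="base_link_frame" value="base_link"/>
  <param name="world_frame" value="odom"/>

  <!-- Wheel encoder odometry: fuse X/Y velocity and yaw velocity -->
  <param name="odom0" value="your/wheel_odometry/topic"/>
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  true,  false,
                                  false, false, true,
                                  false, false, false]</rosparam>

  <!-- IMU: fuse yaw, yaw velocity, and linear acceleration -->
  <param name="imu0" value="your/imu/topic"/>
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 true,  true,  false]</rosparam>
</node>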
A separate node exists in r_l that converts actual latitude and longitude coordinates into the robot's world frame (i.e., one of the world frames in REP-105) so that they can be fused. This node is navsat_transform_node. It is meant to work with devices that produce a NavSatFix message containing actual latitude and longitude coordinates. This node is not required for operation of the state estimation nodes; it only serves to allow users to work with GPS devices. Furthermore, GPS is by no means required for the state estimation nodes.
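For contrast only (you do not need any of this for a mocap setup), a hypothetical navsat_transform_node instance with placeholder topic names might look like:

<node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform">
  <param name="magnetic_declination_radians" value="0.0"/>
  <param name="yaw_offset" value="0.0"/>
  <param name="zero_altitude" value="true"/>
  <!-- Inputs: a NavSatFix, an earth-referenced IMU, and the EKF output -->
  <remap from="gps/fix" to="your/navsatfix/topic"/>
  <remap from="imu/data" to="your/imu/topic"/>
  <remap from="odometry/filtered" to="your/ekf/output/topic"/>
</node>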
In your case, your mocap system is producing X, Y coordinates. The message it generates likely has a header with a frame_id. So, assuming your mocap system generates a PoseWithCovarianceStamped message (note that it can also be an Odometry message), you can just do this:
<param name="pose0" value="your/mocap_data/topic"/>
<rosparam param="pose0_config">[true, true, false, <!-- X and Y will be fused -->
false, false, false,
false, false, false,
false, false, false,
false, false, false]</rosparam>
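The 15 booleans are ordered X, Y, Z, roll, pitch, yaw, their respective velocities, then linear accelerations, so the two true values above fuse only the mocap X and Y position. Note that these parameters go inside the state estimation node's <node> element, e.g. (node name is a placeholder):

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_se">
  <param name="pose0" value="your/mocap_data/topic"/>
  <rosparam param="pose0_config">[true,  true,  false,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false,
                                  false, false, false]</rosparam>
</node>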