odom frame in navsat_transform_node in case of quasi mobile platform

asked 2019-06-14 11:59:23 -0500 by akosodry, updated 2019-06-21 07:37:39 -0500

Hi,

(I have read REP 105 multiple times, but in this particular case I could not figure out what the role of the frames is and how to set them up.)

I have a robotic arm that is going to be attached to a mobile platform, but this mobile platform is controlled independently, so 1) I can't influence its position and 2) I basically don't know the position of this mobile platform (there is no encoder feedback like mobile robots have).

I want to use GPS + IMU to determine the pose of the base_link frame of my robotic arm. I'm learning about the robot_localization package + navsat_transform_node, but I don't really understand what the odom frame should be in my case.

Currently I have only the following frames:

world (which is just a dummy/empty link)
  base_link
    link_1
      link_2, etc.

I don't have an /odom topic or an odom frame like mobile robots do.

Can you please explain how, in this particular case, I should extend the URDF in order to have the odom frame/topic that navsat_transform_node requires?

EDIT:

The mobile platform rarely moves; e.g., it moves 5 meters east and then stops. I'm not interested in the instantaneous maneuvers/spatial trajectories while it moves from point A to point B. I just want to determine the pose of the robot arm's base_link when the maneuver has finished, i.e., when the mobile platform has stopped/arrived at point B.

Thank you in advance.


Comments


I can't help you with the urdf, but the odom frame between the map (or world) and the base_link frame gives you both a continuous (but drifting) odom->base_link position (important for small local tasks, e.g. picking an object) and a non-drifting (but also jumping) map->base_link position (for long-distance navigation). I struggled with that too when I started using robot_localization, so I wrote a small introduction/tutorial on our company's website which explains these basics. If you understand these basics, all other robot_localization tutorials become clear.

Maarten ( 2019-06-20 02:39:49 -0500 )

@Maarten Thank you for the introduction; I'm going to read it after work. I had a feeling that my question was not well described, i.e., not clear, so I made a new question yesterday from a different perspective. Can you have a look at that question? Maybe it explains better what I am asking. Here is the question link. Thank you in advance.

akosodry ( 2019-06-20 03:17:03 -0500 )

If I understand it right, you have a mobile base that drives around randomly (well, not randomly, but you don't know anything about it) with a robotic arm that needs to do some operations? Which kind of operations do you need to do: do you need to know where you are (globally), or do the arm and the base work together in a local setting? You added a GPS and an IMU to the robotic arm to estimate the position of the mobile base, but if you want an accurate estimate, this is probably not sufficient: an IMU isn't accurate enough to integrate its acceleration for more than a fraction of a second, leaving you more or less with the GPS accuracy and the orientation measurement of your IMU. If you can't use the robot base's encoders, I would suggest using visual odometry (e.g. using ...(more)

Maarten ( 2019-06-20 06:23:48 -0500 )

If you want to estimate both the map->odom and the odom->base_link transformations, you need two robot_localization nodes: one which uses only the continuous sensors and gives a continuous (but drifting) odom->base_link transform, and one which uses all sensors and gives the map->odom transformation (so that map->base_link gives you an absolute (but sometimes non-continuous) estimate). Think of the odom frame as a frame in between map and base_link which absorbs the discontinuous jumps in the estimate (e.g. due to GPS updates). This way, the odom->base_link transformation is continuous, which is important for local operations (e.g. if you need to move something 2 cm in a given direction, you don't mind that your position could be drifting over longer periods or distances, but discontinuities (jumps) in the position estimate would make that simple task impossible). Depending on your application, you may or may not need both estimates.

Maarten ( 2019-06-20 06:32:57 -0500 )

Hi @Maarten, sorry for the late response. I read your tutorial, but I still find it difficult to adapt it to my particular case. To answer your questions: 1) Yes, you understood it right: a mobile base that drives around randomly, with a robotic arm that needs to do some maneuvers. These maneuvers are executed with MoveIt and work well in the static situation, i.e., so far it was assumed that there is no mobile base. 2) Yes, the arm and the base work together in a local setting, which means that I just want to know how much the mobile base has moved from its initial spatial position. 3) Yes, I added a GPS and an IMU to the robotic arm to estimate the position of the mobile base. I'm using the u-blox C94-M8P GPS, which can provide position values with up to cm accuracy. I would assume that should ...(more)

akosodry ( 2019-06-21 07:20:08 -0500 )

@Maarten "If you can't use the robotics base's encoder, I would suggest using visual odometry" How can visual odometry help in determining the pose of the mobile base? I assume RGBD provides some spatial information that can be used, but is it enough?

The only continuous sensor i have on the mobile base is the IMU. Yea it is quite modest setup. But it should be enough for orientation, i would guess. Since the mobile base rarely moves, ie., it moves lets say 5meters east and stops for 15 minutes, and for me its enough to determine just the static position of the mobile base. Im not interested in the instantaneous maneuvers/trajectories and so on, i just want to know, that when the mobile base stops then what is the pose relative from the initial pose.

So i think that GPS updates should be enough to determine ...(more)

akosodry ( 2019-06-21 07:29:47 -0500 )

I think you only need some filtering on the GPS position and the magnetometer orientation (I don't think the acceleration and gyroscope outputs of your IMU give that much useful information in your case). I would suggest using a robot_localization node for the map->odom transform (inputs: GPS and IMU; maybe disable the IMU's gyroscope and accelerometer). I've never tried it, but I suppose this should work if you launch a static_transform_publisher node (all zeros, no translation or rotation) for the odom to base_link transform (in general, this transform is published by another robot_localization node which gets only the continuous inputs such as accelerometer, gyroscope and encoder data, but an accelerometer without encoders is quite useless in practice, so I would try to replace this transform with an all-zero static transform).

Maarten ( 2019-06-22 14:40:01 -0500 )

@Maarten thanks for the help, I'm working on it. Could you post your comments as an answer? I would accept it, since all your comments were constructive and addressed multiple cases, which could help others.

akosodry ( 2019-06-23 03:04:43 -0500 )

1 Answer


answered 2019-06-23 06:35:49 -0500

As per @akosodry's request, a summary of our discussion in the comments as an answer:

I can't help you with the urdf, but the odom frame between the map (or world) and the base_link frame gives you both a continuous (but probably drifting) odom->base_link position (important for small local tasks, e.g. picking an object you're tracking) and a non-drifting (but probably jumping) map->base_link position (important for long-distance navigation). I struggled with that too when I started using robot_localization, so I wrote a small introduction/tutorial on our company's website which explains these basics. If you understand these basics, most other robot_localization tutorials become clear (at least for me).
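To illustrate (a sketch only; the link frames below base_link are the ones from the question's URDF, and the annotations just summarize the two transforms described above):

map
  odom          (map->odom: absolute, but may jump, e.g. on GPS updates)
    base_link   (odom->base_link: continuous, but may drift)
      link_1
        link_2, etc.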

If you want to estimate both the map->odom and the odom->base_link transformations, you need two robot_localization nodes: one which uses only the continuous sensors and gives a continuous (but probably drifting) odom->base_link transform, and one which uses all sensors (including the non-continuous ones, e.g. GPS) and gives the map->odom transformation (so that map->base_link gives you an absolute (but sometimes non-continuous) estimate). Think of the odom frame as a frame in between map and base_link which absorbs the discontinuous jumps in the estimate (e.g. due to GPS updates). This way, the odom->base_link transformation is continuous, which is important for local operations (e.g. if you need to move something 2 cm in a given direction, you don't mind that your position could be drifting over longer periods or distances, but discontinuities (jumps) in the position estimate would make that simple task impossible). Depending on your application, you may or may not need both estimates.
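A minimal, untested sketch of such a dual-node launch file (the topic names imu/data and odometry/gps and the fusion config vectors below are assumptions, not something taken from the question; odometry/gps would be the output of navsat_transform_node, configured as in the sketch at the end of this answer):

<launch>
  <!-- Untested sketch only: topic names and config vectors are assumptions. -->

  <!-- Node 1: continuous sensors only, publishes the (drifting) odom->base_link -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="map_frame"       value="map"/>
    <param name="odom_frame"      value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame"     value="odom"/>

    <param name="imu0" value="imu/data"/>
    <!-- order: [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az] -->
    <rosparam param="imu0_config">
      [false, false, false,  true,  true,  true,
       false, false, false,  true,  true,  true,
       false, false, false]
    </rosparam>
  </node>

  <!-- Node 2: all sensors, including the discontinuous GPS (converted to an
       odometry message by navsat_transform_node), publishes map->odom -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_map">
    <param name="map_frame"       value="map"/>
    <param name="odom_frame"      value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame"     value="map"/>

    <param name="imu0" value="imu/data"/>
    <rosparam param="imu0_config">
      [false, false, false,  true,  true,  true,
       false, false, false,  true,  true,  true,
       false, false, false]
    </rosparam>

    <param name="odom0" value="odometry/gps"/>
    <!-- fuse only the absolute x/y position from the GPS -->
    <rosparam param="odom0_config">
      [true,  true,  false,  false, false, false,
       false, false, false,  false, false, false,
       false, false, false]
    </rosparam>

    <remap from="odometry/filtered" to="odometry/filtered_map"/>
  </node>
</launch>

The remap on the second node only keeps the two filters from publishing their state estimates on the same topic.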

In your case (you only need the different static positions, no exact positioning while moving, and no encoder information), I think you only need some filtering on the GPS position and the magnetometer orientation (I don't think the acceleration and gyroscope outputs of your IMU give that much useful information in your case). I would suggest using a robot_localization node for the map->odom transform (inputs: GPS and IMU; maybe disable the IMU's gyroscope and accelerometer). I've never tried it, but I suppose this should work if you launch a static_transform_publisher node (all zeros, no translation or rotation) for the odom to base_link transform (in general, this transform is published by another robot_localization node which gets only the continuous inputs such as accelerometer, gyroscope and encoder data, but an accelerometer without encoders is quite useless in practice, so I would try to replace this transform with an all-zero static transform).
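A minimal, untested sketch of that simplified setup (assuming the IMU publishes sensor_msgs/Imu on imu/data and the GPS driver publishes sensor_msgs/NavSatFix on gps/fix; all topic names and config vectors here are assumptions):

<launch>
  <!-- Untested sketch: topic names and config vectors are assumptions. -->

  <!-- No continuous odometry available, so publish an all-zero (identity)
       odom->base_link transform instead of a second robot_localization node -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="odom_to_base_link"
        args="0 0 0 0 0 0 odom base_link"/>

  <!-- Converts gps/fix (+ imu/data) into an odometry message (odometry/gps) -->
  <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform">
    <param name="zero_altitude" value="true"/>
    <!-- set to your local declination if the IMU reports magnetic north -->
    <param name="magnetic_declination_radians" value="0.0"/>
    <remap from="odometry/filtered" to="odometry/filtered_map"/>
  </node>

  <!-- Single EKF: fuses GPS position and IMU orientation, publishes map->odom
       (which here equals map->base_link because odom->base_link is identity) -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_map">
    <param name="map_frame"       value="map"/>
    <param name="odom_frame"      value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame"     value="map"/>

    <param name="odom0" value="odometry/gps"/>
    <rosparam param="odom0_config">
      [true,  true,  false,  false, false, false,
       false, false, false,  false, false, false,
       false, false, false]
    </rosparam>

    <param name="imu0" value="imu/data"/>
    <!-- orientation only; accelerometer and gyroscope disabled, as suggested -->
    <rosparam param="imu0_config">
      [false, false, false,  true,  true,  true,
       false, false, false,  false, false, false,
       false, false, false]
    </rosparam>

    <remap from="odometry/filtered" to="odometry/filtered_map"/>
  </node>
</launch>

With something like this, the pose of the arm's base_link after each stop can be read from the map->base_link transform in tf or from the odometry/filtered_map topic.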

