Fusing 2 sonars / pingers as continuous sensors or 'GPS'
Picking up where I left off in ekf_localization-node-not-responding, I am trying to fuse x and y position from 2 sonars (Tritech Micron MK III) with (double-integrated accelerometer) position from an IMU (Xsens MTi 30).
This happens inside a rectangular pool, so each sonar
- will track a side wall and measure the absolute distance to it (one sonar looking at the short wall, the other at the long wall), so we have a distance with a heading attached to it,
- will report an x and/or y position to be used either as
  - an absolute position, with the x axis parallel to the long side of the pool and the y axis parallel to the short side,
  - or a position relative to the starting point (`differential_integration == true`).

(The point being to get a quicker update rate than doing machine vision on a full scan.)
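To make the wall-tracking idea concrete, here is a minimal sketch (function and variable names are my own assumptions, not from the thread) of how a single wall-tracking return could be turned into one pool-axis coordinate: the slant range to the wall, projected onto the wall's normal using the beam's heading in the pool frame.

```python
import math

def wall_range_to_coordinate(range_m, beam_heading_rad, wall_normal_rad):
    """Project a slant range to a wall onto the wall's normal axis.

    range_m          -- raw sonar range to the tracked wall (meters)
    beam_heading_rad -- heading of the sonar beam in the pool frame
    wall_normal_rad  -- direction of the wall's inward normal in the pool frame

    Returns the perpendicular distance to the wall, i.e. the robot's
    coordinate along that pool axis (origin at the wall).
    """
    return range_m * math.cos(beam_heading_rad - wall_normal_rad)

# Beam pointed 10 degrees off the wall normal, 4 m slant range:
d = wall_range_to_coordinate(4.0, math.radians(10.0), 0.0)
```

The same projection works for the second sonar with the other wall's normal direction, which is why each sonar only contributes one axis (x or y).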
The problem now is: how can I account for the sonar's offset from `base_link`, since the measurements aren't really in the body frame?
I could either
- report the data with frame_id `odom` or `map`, but how could I account for the mounting offset, especially when the sub is not aligned with the side walls? Would that be possible with a single instance of robot_localization, or should I then treat them as 'GPS'?
- report the data in a `Sonar1` and a `Sonar2` frame and use the current heading of the sonar head (plus the fixed translation) to publish a transform from `Sonar1` to `base_link`. This means that I will have to use the attitude solution to track the wall, but that's outside the scope of robot_localization.
I believe the latter is the better approach. Does this make sense, or how could it be improved?
Thanks,
Rapha
Wrap up on how to fuse the sonars
I am still working through implementing some of these steps, but I will document the process here.
What are the frames and transforms needed for fusing the sonars:
- Frames:
  - `odom`
  - `pool`
  - `sonar1` (will be fused for x only)
  - `sonar2` (will be fused for y only)
  - `base_link`
  - `imu`
- Transforms:
  - `odom->base_link` (generated by ekf_localization)
  - `odom->pool` (static transform, accounting for the rotational offset of the pool walls/axes with respect to ENU)
  - `pool->sonar1` (broadcasts the translation of the sonar's mounting position relative to `base_link`, rotated by the yaw angle of the `pool->base_link` transform)
  - `pool->sonar2` (analogous, for the second sonar)
  - `base_link->imu`
- Data:
  - `/sonar/position/x` in frame `sonar1`
  - `/sonar/position/y` in frame `sonar2`
  - `/imu/data` in frame `imu`
  - `/svs/depth` in frame `svs`
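The `pool->sonar1` translation described above (the fixed mounting offset, rotated by the vehicle's yaw in the pool frame) can be sketched in plain Python; the function name and example numbers are my own, not from the thread.

```python
import math

def rotate_offset_by_yaw(offset_x, offset_y, yaw_rad):
    """Rotate the sonar's fixed mounting offset (expressed in base_link)
    by the vehicle's yaw so it is expressed in the pool frame.

    This is the translation part of the pool->sonar1 transform; the same
    function serves pool->sonar2 with that sonar's offset.
    """
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * offset_x - s * offset_y,
            s * offset_x + c * offset_y)

# Example: sonar mounted 0.3 m forward of base_link, vehicle yawed 90 degrees:
tx, ty = rotate_offset_by_yaw(0.3, 0.0, math.pi / 2)
# tx is ~0.0, ty is ~0.3
```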
WORK IN PROGRESS
EDIT (regarding 1st edit in Answer):
I have set it up the way you recommended, and I publish a transform from `odom->sonar` every time I publish a new sonar reading (just before it). The rotational part of that transform uses the current attitude as published by the robot_localization node.
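For that rotational part, the yaw from the attitude solution has to be expressed as a quaternion when broadcasting the transform. A minimal, tf-free sketch of the conversion (equivalent in effect to `tf.transformations.quaternion_from_euler(0, 0, yaw)` for a planar rotation):

```python
import math

def quaternion_from_yaw(yaw_rad):
    """Quaternion (x, y, z, w) for a pure rotation about the z axis,
    as used for the rotational part of a planar odom->sonar transform."""
    half = 0.5 * yaw_rad
    return (0.0, 0.0, math.sin(half), math.cos(half))

qx, qy, qz, qw = quaternion_from_yaw(math.pi / 2)  # 90 degree yaw
```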
Robot_localization sees the following input at the moment:
- attitude from the IMU (imu/data)
- position from the sonars (sonar/position0 and sonar/position1)
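Those inputs might map onto an `ekf_localization_node` parameter file roughly like the sketch below (topic names from this thread; the frequency, timeout, and the assumption that the sonar positions arrive as `geometry_msgs/PoseWithCovarianceStamped` are illustrative, not confirmed):

```yaml
frequency: 30
sensor_timeout: 0.5

# Sonar reporting x: fuse x only
pose0: /sonar/position0
pose0_config: [true,  false, false,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]

# Sonar reporting y: fuse y only
pose1: /sonar/position1
pose1_config: [false, true,  false,
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]

# IMU attitude
imu0: /imu/data
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              false, false, false,
              false, false, false]
```

The 15 booleans follow robot_localization's variable order: x, y, z, roll, pitch, yaw, then their velocities, then linear accelerations.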
However, after a few seconds the robot_localization node slows down to about 1 update every 5 seconds... EDIT: By 'slows down' I mean ...
Any chance you can record a bag file without running robot_localization? I need the `tf` tree to not have the `odom->base_link` transform in it.

No problem: 2015-01-20-19-10-36.bag
Most of the dynamic and static transforms (intentionally) have a translation of (0, 0, 0) for the time being.
Re: edit 4, it sounds like it's going to be a covariance issue. I'll check it out. Any reason you're using the differential setting?
Sorry, when you say that X should not have changed when surfacing, do you mean the raw data should not have changed, or that you think the EKF jumped? In the bag file, the `/sonar/positionBMT` topic changes from ~4 meters to ~2 meters at around 34 seconds into the bag file.

Sorry, that was a little unclear. It is a sonar processing artifact. The sonar has a vertical beamwidth of 30°, and unless the robot is perfectly horizontal it will get a reflection off the water surface if the sub is not fully submerged.