EKF with ZED
I want to fuse the ZED pose with an external GPS. As a first step, I am simply passing the ZED signal through an EKF (robot_localization) node, where I expect to see the same behavior as without the EKF. To do so, I disable `publish_map_tf` (the `map -> odom` TF) and run an EKF node that subscribes to the pose topic. Although I expect no change in behavior, my `base_link` eventually drifts, as if it had a residual velocity even while the robot stays in place.
My ZED2 parameters are:
# params/common.yaml
# Common parameters to Stereolabs ZED and ZED mini cameras
---
# Dynamic parameters cannot have a namespace
brightness: 4 # Dynamic
contrast: 4 # Dynamic
hue: 0 # Dynamic
saturation: 4 # Dynamic
sharpness: 4 # Dynamic
gamma: 8 # Dynamic - Requires SDK >=v3.1
auto_exposure_gain: true # Dynamic
gain: 100 # Dynamic - works only if `auto_exposure_gain` is false
exposure: 100 # Dynamic - works only if `auto_exposure_gain` is false
auto_whitebalance: true # Dynamic
whitebalance_temperature: 42 # Dynamic - works only if `auto_whitebalance` is false
depth_confidence: 50 # Dynamic
depth_texture_conf: 100 # Dynamic
pub_frame_rate: 15.0 # Dynamic - frequency of publishing of video and depth data
point_cloud_freq: 30.0 #15 # Dynamic - frequency of the pointcloud publishing (equal or less to `grab_frame_rate` value)
general:
    camera_name: zed # A name for the camera (can be different from camera model and node name and can be overwritten by the launch file)
    zed_id: 0
    serial_number: 0
    resolution: 3 # '0': HD2K, '1': HD1080, '2': HD720, '3': VGA
    grab_frame_rate: 30 # Frequency of frame grabbing for internal SDK operations
    gpu_id: -1
    base_frame: 'base_link' # must be equal to the frame_id used in the URDF file
    verbose: false # Enable info messages from the ZED SDK
    svo_compression: 2 # `0`: LOSSLESS, `1`: AVCHD, `2`: HEVC
    self_calib: true # enable/disable self-calibration at startup
    camera_flip: false

video:
    img_downsample_factor: 1.0 # Resample factor for images [0.01,1.0]. The SDK works with native image sizes, but publishes rescaled images.
    extrinsic_in_camera_frame: true # if `false`, extrinsic parameters in `camera_info` will use the ROS native frame (X FORWARD, Z UP) instead of the camera frame (Z FORWARD, Y DOWN) [`true` uses the old behavior as for versions < v3.1]

depth:
    quality: 1 # '0': NONE, '1': PERFORMANCE, '2': QUALITY, '3': ULTRA
    sensing_mode: 0 # '0': STANDARD, '1': FILL (do not use FILL for robotic applications)
    depth_stabilization: 0 # `0`: disabled, `1`: enabled
    openni_depth_mode: false # 'false': 32-bit float meters, 'true': 16-bit uchar millimeters
    depth_downsample_factor: 1.0 # Resample factor for depth data matrices [0.01,1.0]. The SDK works with native data sizes, but publishes rescaled matrices (depth map, point cloud, ...)

pos_tracking:
    pos_tracking_enabled: true # True to enable positional tracking from start
    publish_tf: true # publish `odom -> base_link` TF
    publish_map_tf: false # publish `map -> odom` TF
    map_frame: 'map'
    odometry_frame: 'odom'
    area_memory_db_path: ''
    area_memory: true # Enable to detect loop closure
    floor_alignment: false # Enable to automatically calculate camera/floor offset
    initial_base_pose: [0.0,0.0,0.0, 0.0,0.0,0.0] # Initial position of the `base_frame` -> [X, Y, Z, R, P, Y]
    init_odom_with_first_valid_pose: true # Enable to initialize the odometry with the first valid pose
    path_pub_rate: 2.0 # Camera trajectory ...
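
For reference, the robot_localization side of this pass-through test looks roughly like the sketch below. It is only a sketch: the pose topic name (`/zed/zed_node/pose`) and the exact set of fused fields are assumptions here, so adapt them to your remappings.

# params/ekf.yaml -- sketch for robot_localization's ekf_localization_node
frequency: 30
two_d_mode: false

map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom                    # fuse continuous data in the odom frame

pose0: /zed/zed_node/pose            # assumed ZED pose topic name
pose0_config: [true,  true,  true,   # x, y, z
               true,  true,  true,   # roll, pitch, yaw
               false, false, false,  # vx, vy, vz
               false, false, false,  # vroll, vpitch, vyaw
               false, false, false]  # ax, ay, az
pose0_differential: false
pose0_relative: false

publish_tf: true

One thing worth noting: with `publish_tf: true` in both the ZED wrapper config above and the EKF node, two nodes publish the `odom -> base_link` transform, and the two publishers fighting over the same TF can look exactly like a slow drift. Only one node should publish that transform.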