Use kinect for generating static maps in addition to the laser.
I have both a kinect and a lidar as observation sources in the common costmap params, separated into different layers, and I use both for mapping. My problem is that the lidar data is added to the generated map, but the kinect data is not, even though it is inflated correctly for obstacle avoidance. After several tests I've concluded that the map is updated by the laser because the laser publishes on the scan topic (I'm not sure that's right). I'd like the kinect observations to be added to the map as well, so I can apply an algorithm to it later. Can someone help me achieve this? Thanks in advance.
map_type: voxel
origin_z: 0.0
z_resolution: 1
z_voxels: 2
obstacle_range: 2.5
raytrace_range: 3.0
publish_voxel_map: true #false
transform_tolerance: 0.5
meter_scoring: true
#footprint: [[0.15, 0.15], [0.15, -0.15], [-0.15, -0.15], [-0.15, 0.15]]
robot_radius: 0.3
footprint_padding: 0.05

plugins:
  - { name: static_layer,    type: "costmap_2d::StaticLayer" }
  - { name: obstacles_layer, type: "costmap_2d::ObstacleLayer" }
  - { name: voxel_layer,     type: "costmap_2d::VoxelLayer" }
  - { name: inflater_layer,  type: "costmap_2d::InflationLayer" }

#inflation_radius: 0.55 #0.1
cost_scaling_factor: 5.0 #2.2
track_unknown_space: true

obstacles_layer:
  observation_sources: laser_scan_sensor
  laser_scan_sensor:
    sensor_frame: laser_range_sensor
    data_type: LaserScan
    topic: scan
    marking: true
    clearing: true  # defaults to true

voxel_layer:
  enabled: true
  origin_z: 0
  z_resolution: 0.05
  publish_voxel_map: true
  combination_method: 1
  z_voxels: 10
  unknown_threshold: 2
  observation_sources: kinect_camera
  kinect_camera:
    sensor_frame: gz_kinect_link
    data_type: PointCloud2
    topic: cloud_in
    marking: true
    clearing: true
    min_obstacle_height: 0.1
    max_obstacle_height: 2.0
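For context, the only workaround I'm aware of is to flatten the kinect cloud into a sensor_msgs/LaserScan with the pointcloud_to_laserscan package, since (if my guess above is right) the mapper only consumes LaserScan topics. This is just a rough sketch, assuming that package is installed; the cloud_in topic name comes from my config above, and kinect_scan is a name I made up:

```shell
# Flatten the 3D point cloud on cloud_in into a 2D scan on kinect_scan.
# pointcloud_to_laserscan_node subscribes to cloud_in and publishes scan,
# so both topics are remapped on the command line.
rosrun pointcloud_to_laserscan pointcloud_to_laserscan_node \
    cloud_in:=cloud_in scan:=kinect_scan
```

I haven't verified this on my setup; I'm mentioning it only in case it clarifies what I'm trying to do.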
Edit: here is the relevant fragment of my TF tree (Graphviz .dot):
kinect_link [height=0.5,
label=kinect_link,
pos="5600.3,780",
shape=ellipse,
width=1.3902];
base_link -> kinect_link [label="Broadcaster: /robot_state_publisher\nAverage rate: 10000.0\nBuffer length: 0.0\nMost recent transform: 0.0\nOldest transform: 0.0",
lp="5709.8,843.5",
penwidth=1,
pos="e,5596.5,797.98 5988,905.75 5883,902.6 5626.8,893.65 5613.3,881 5593.9,862.85 5592.8,831.23 5595.2,808.19"];
gz_kinect_link [height=0.5,
label=gz_kinect_link,
pos="5600.3,653",
shape=ellipse,
width=1.7512];
kinect_link -> gz_kinect_link [label="Broadcaster: /kinect_broadcaster\nAverage rate: 10000.0\nBuffer length: 0.0\nMost recent transform: 0.0\nOldest transform: 0.0",
lp="5689.3,716.5",
penwidth=1,
pos="e,5600.3,671.16 5600.3,761.8 5600.3,740.77 5600.3,705.91 5600.3,681.32"];
Which frame is the pointcloud message from the kinect in, and what does your TF tree look like? Could you post your tree here? (You can use ros-blessed to get an ASCII representation of your TF tree.)
I'm not sure I know what you mean by the frame of the pointcloud message. I couldn't upload a picture of the TF tree because I need 5 points to upload files, so I pasted the .dot into the question; I don't know if that helps either. I've been doing some research and have seen that mapping with this arrangement is usually done with rtabmap, but the robot is going to run on a Pi 4, so I'm guessing gmapping is less memory-intensive, and I really only want a 2D map.
Do a

rostopic echo -n 1 cloud_in

(assuming cloud_in is really your pointcloud topic). The header in that message will tell you the frame. The .dot is OK, but hard to read. If you use ros-blessed you get a text/ASCII representation of the tree that is easy for humans to read.
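To make that check concrete (the topic name is taken from the question's config; whether it carries a leading slash depends on the setup):

```shell
# Print only the header of a single PointCloud2 message;
# the sensor's frame is reported in the frame_id field.
rostopic echo -n 1 /cloud_in/header
```

The frame_id printed there is what the costmap uses to transform the cloud, so it should match a frame in the TF tree above.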