# Cannot transform laser scans using laser_assembler

I've successfully used laser_assembler to convert laser scans to point clouds. However, when viewing the concatenated clouds in rviz, they are not de-rotated (projected) according to the fixed frame I've chosen. Pictures:

Initially (laser just started):

After moving the LiDAR around:

Even though the octomap progresses correctly in terms of absolute position (x, y, z), it fails when it comes to orientation (the angles).

Launch file:

    <launch>

      <node pkg="laser_assembler" type="laser_scan_assembler" output="screen" name="laser_scan_assembler">
        <param name="tf_cache_time_secs" type="double" value="10.0" />
        <param name="max_scans" type="int" value="400" />
        <param name="ignore_laser_skew" type="bool" value="true" />
      </node>

      <node pkg="laser_assembler" type="periodic_snapshotter" output="screen" name="periodic_snapshotter"/>

      <node pkg="IMUEstimate" type="IMUEstimate" name="IMUEstimate" />

      <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
        <param name="resolution" value="0.05" />
        <param name="latch" value="false" />

        <!-- fixed map frame (set to 'map' if SLAM or localization running!) -->
        <param name="frame_id" type="string" value="map" />

        <!-- maximum range to integrate (speedup!) -->
        <param name="sensor_model/max_range" value="10.0" />

        <!-- data source to integrate (PointCloud2) -->
        <remap from="cloud_in" to="/assembled_cloud2_out" />
      </node>

    </launch>
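For reference, the assembler's behavior depends on its `~fixed_frame` parameter, which is not set in the launch file above. A sketch of what setting it explicitly might look like (the frame name `map` is an assumption; use whichever frame is actually fixed in your setup, other parameters as in the launch file above):

    <node pkg="laser_assembler" type="laser_scan_assembler" output="screen" name="laser_scan_assembler">
      <!-- frame in which the assembler accumulates scans; it must not move in the world -->
      <param name="fixed_frame" type="string" value="map" />
    </node>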


I've tried every possible combination of fixed_frame and frame_id for laser_assembler and octomap respectively, but the result is always the same: either I get a 2D octomap, or a 3D octomap that gets rotated by the laser's unprojected frame. I've even pulled in pcl_ros and modified the periodic_snapshotter to transform the resulting point cloud, but no cigar. Here is the code for your review. I tried all possible combinations for the transforms, so it's not about that (the rest of periodic_snapshotter.cpp is unchanged):

    // Make the service call
    if (client_.call(srv))
    {
      tf::StampedTransform localTransform;

      try
      {
        // lookupTransform(target, source, ...): ask for the transform that
        // takes data from the "map" frame into "scanmatcher_frame"
        listener_.lookupTransform("scanmatcher_frame", "map",
                                  ros::Time(0), localTransform);
      }
      catch (tf::TransformException &ex)
      {
        // If the tf listener cannot find the transform, print an error and continue
        ROS_ERROR("Error: %s", ex.what());
      }

      // Convert the assembled cloud to PointCloud2, then apply the transform,
      // targeting the "map" frame
      sensor_msgs::convertPointCloudToPointCloud2(srv.response.cloud, initialCloud2);
      pcl_ros::transformPointCloud("map", localTransform, initialCloud2,
                                   transformedCloud2);
      ROS_INFO("Published Cloud with %u points",
               (uint32_t)(srv.response.cloud.points.size()));
      pub_.publish(transformedCloud2);
    }
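As an aside, pcl_ros also provides an overload of `transformPointCloud` that takes the `tf::TransformListener` directly and performs the frame lookup internally, which sidesteps the manual `lookupTransform` argument-order question (target frame first, then source). A minimal sketch, assuming the same member variables (`client_`, `listener_`, `pub_`, `srv`, `initialCloud2`, `transformedCloud2`) as in the snippet above:

    // Sketch only: relies on the cloud's own header.frame_id being correct.
    sensor_msgs::convertPointCloudToPointCloud2(srv.response.cloud, initialCloud2);

    // Let pcl_ros look up the transform itself and move the cloud into "map".
    if (pcl_ros::transformPointCloud("map", initialCloud2, transformedCloud2, listener_))
    {
      pub_.publish(transformedCloud2);
    }
    else
    {
      ROS_ERROR("Could not transform assembled cloud into the map frame");
    }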


Any ideas? I feel like I'm super close, but I just don't know what's going on.

UPDATE: here is my tf tree:



Do you have tf set up correctly? That is, the laser moves with respect to your fixed_frame (which is base_link), and you know and publish via tf how it moves. Otherwise, the laser_assembler does not know how to put the individual readings together.

(2018-01-11 01:17:16 -0600)

Yes, I have a static tf publisher from base_link to laser. I'll post the tf tree in a bit so you can take a look for yourself. I would assume this is correct, right?

(2018-01-11 11:43:47 -0600)

Just to clarify, some more questions:

1. The laser is not moving w.r.t. the robot/base_link?
2. You are moving the robot around?
3. Is base_footprint moving w.r.t. base_link? Typically, this should be fixed as well.
4. Does scanmatcher_frame move correctly w.r.t. base_footprint?

(2018-01-12 00:59:46 -0600)

Yup, the laser is not moving w.r.t. base_link, and I am moving the robot around. base_footprint is fixed w.r.t. base_link, but it does not have any orientation transforms applied to it. scanmatcher_frame moves correctly w.r.t. base_footprint.
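For context, a static base_link→laser publisher of the kind described here is typically started with tf's `static_transform_publisher` (the offsets below are placeholders, not the asker's actual values):

    # x y z yaw pitch roll parent_frame child_frame period_ms
    rosrun tf static_transform_publisher 0.1 0 0.2 0 0 0 base_link laser 100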

(2018-01-12 01:20:45 -0600)

I feel like the community could have a use for this, but if you don't know, it's OK; I'll probably just try something else.

(2018-01-12 01:21:47 -0600)


Alright, so this is the issue: the fixed_frame is the frame in which all readings are put together by the laser_assembler. So if you move your robot, collect data, and merge it into a single(!) point cloud, the fixed_frame needs to be fixed in the map. That is, you could use either the odometry frame or the localization frame. But note that in both cases you'll end up with accumulated errors.
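Concretely, following this answer, that means pointing the assembler's `~fixed_frame` parameter at a map-fixed frame. A sketch, assuming an odometry frame named `odom` (adjust to whatever your odometry or localization actually publishes):

    <node pkg="laser_assembler" type="laser_scan_assembler" output="screen" name="laser_scan_assembler">
      <!-- accumulate scans in the odometry frame, which is fixed in the world -->
      <param name="fixed_frame" type="string" value="odom" />
    </node>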

The typical use case for the laser_assembler is, for example, tilting a laser on the robot platform (while the platform is not moving) and assembling the scans into a 3D sweep of the environment.


Ah, I guess it's just not possible with an integrated IMU. Thank you for sticking with me, I appreciate it :)

(2018-01-12 22:53:02 -0600)