# How do I get the transformed point cloud in my callback? I'm not getting a rotated cloud in my callback

Hello everyone,

I have a hardware setup consisting of a Keyence SZ-V LiDAR and a Razor 9-DOF IMU. There is no real robot; I just have the two sensors attached together so that I can move them both with one motion. I wrote my first laser driver node, which polls the laser and produces a sensor_msgs::LaserScan with frame_id "base_link". No issue there. A second node uses message_filters::Synchronizer to tf-transform the LaserScan into a 3D sensor_msgs::PointCloud2 rotated by the IMU orientation, with frame_id "laser_link". No issue there either. A third node then listens for these sensor_msgs::PointCloud2 messages, converts them to pcl::PointCloud2, adds 20 of them together, converts the result back to sensor_msgs::PointCloud2, and publishes that larger point cloud.

Here is the issue: because the rotation comes from a tf transform, it is not seen in the combined cloud when I move the IMU. If I turn the laser alone, the combined cloud is as expected and shows some old points as well as the changed ones (not rotated). If I turn the IMU alone, the rotation is seen in the second node (laser + IMU), but in the third node I just see something that looks like my 20 scans stacked on top of each other, rotated as a whole by the IMU orientation. I would expect to see several clouds at intermediate positions between the start and end rotation positions.

I have tried changing the frame_id of this combined cloud to both "base_link" and "laser_link" to see if I could grab the rotated cloud when combining the clouds, but neither worked. This makes sense, because the rotated values never actually reach my third node. How do I correctly get these transformed clouds into my third node so that I can collect several clouds together, whether the change comes from the LaserScan or the IMU?

EDIT: Better late than never: my objective with this combined 3D point cloud is to get 3D SLAM going. I'm aiming for ethzasl_icp_mapper as my SLAM algorithm, but it seems to need more than single scans as input; that's why I'm trying to produce a larger cloud, to give it more points to extract data from. I've also looked at loam_back_and_forth, but since it's now commercial that's out. I also looked at BLAM!, but there were so many issues trying to get it and its dependencies installed that I gave up. I forget the exact errors.



Although I appreciate the suggestions from locasw, I solved this issue with just one line of code:

pcl_ros::transformPointCloud(...)


This allows me to perform the transformations and build the combined cloud live. I then keep a counter and publish the combined cloud every 20 iterations of my callback. The resulting combined cloud is as expected and mimics Kinect-style clouds.
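For reference, the accumulate-and-publish-every-20 counter looks roughly like this ROS-free sketch. The `Point3`/`Cloud` types and the `CloudAccumulator` class here are illustrative stand-ins, not the actual node code; in the real callback each incoming cloud would first be transformed with `pcl_ros::transformPointCloud()` before being added:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal stand-in for a transformed cloud; the real node would use
// pcl::PointCloud types produced by pcl_ros::transformPointCloud().
struct Point3 { double x, y, z; };
using Cloud = std::vector<Point3>;

// Accumulates incoming (already-transformed) clouds and hands back a
// combined cloud on every `window`-th callback.
class CloudAccumulator {
public:
  explicit CloudAccumulator(std::size_t window) : window_(window) {}

  // Returns true (and fills `combined`) once `window` clouds have arrived.
  bool add(const Cloud& cloud, Cloud& combined) {
    buffer_.insert(buffer_.end(), cloud.begin(), cloud.end());
    if (++count_ < window_) return false;
    combined = buffer_;   // this is what the node would publish
    buffer_.clear();
    count_ = 0;
    return true;
  }

private:
  std::size_t window_;
  std::size_t count_ = 0;
  Cloud buffer_;
};
```

The point of keeping the buffer inside the node is that every cloud has already been rotated into a common frame when it arrives, so simple concatenation is enough.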


Hey! Can you share what you did to transform the point cloud correctly? I'm modifying the periodic_snapshotter node from laser_assembler, but calling pcl_ros::transformPointCloud does not seem to have any effect. Any ideas? More details: https://answers.ros.org/question/2795...

( 2018-01-11 01:28:19 -0600 )

Are you trying to rotate a laser sensor which produces a single line/plane of 3D points to build up a cloud that will appear to downstream nodes as if it were captured instantaneously (because those nodes are meant for Kinect-like data)?

> I then have a third node which listens for these sensor_msgs::PointCloud2, converts them to pcl::PointCloud2, adds 20 of them, converts them back to sensor_msgs::PointCloud2 and then publishes that larger point cloud.

What you want this node to do is iterate through each point cloud and transform it to be relative to where base_link is currently, given that each cloud was captured a different amount of time in the past. This works as long as you have good tf history of where base_link used to be (probably from odometry, since the SLAM output isn't available yet).
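The composition behind "transform it to be relative to where base_link is currently" can be sketched without ROS: if tf gives you the map→base_link pose at the capture time and at the current time, the correction applied to each old cloud is inverse(now) * past. Here is a 2D toy version of that; `Pose2` and `correction` are illustrative names, and the real node would use the full 3D tf transform types:

```cpp
#include <cassert>
#include <cmath>

// 2D rigid transform (rotation theta, then translation), standing in for a
// full 3D tf transform; the composition logic is identical.
struct Pose2 {
  double theta, x, y;

  // Apply this transform to a point (px, py), writing the result to (ox, oy).
  void apply(double px, double py, double& ox, double& oy) const {
    ox = std::cos(theta) * px - std::sin(theta) * py + x;
    oy = std::sin(theta) * px + std::cos(theta) * py + y;
  }

  Pose2 inverse() const {
    double c = std::cos(theta), s = std::sin(theta);
    // Inverse rotation is -theta; inverse translation is -R(-theta) * t.
    return { -theta, -(c * x + s * y), s * x - c * y };
  }

  // Composition: (a * b) applies b first, then a.
  Pose2 operator*(const Pose2& b) const {
    double ox, oy;
    apply(b.x, b.y, ox, oy);
    return { theta + b.theta, ox, oy };
  }
};

// Re-express a point captured in base_link at a past time in base_link now:
//   T_correction = inverse(T_map_from_base_now) * T_map_from_base_past
Pose2 correction(const Pose2& map_from_base_now,
                 const Pose2& map_from_base_past) {
  return map_from_base_now.inverse() * map_from_base_past;
}
```

For example, if the robot drove one meter forward between the two stamps, a point at the old base_link origin ends up one meter behind the current base_link origin, which is exactly what stitching the scans requires.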

It would be nice if there were a tf lookup like lookupPastTransform("map", "base_link", time1, time2) which returned the transform of base_link at time1 to base_link at time2 relative to the map frame. (TODO: make one if there isn't.) Otherwise you would have to look up the base_link transform for the past and present timestamps and then compute the transform between those two.

A side note: you can subscribe to sensor_msgs::PointCloud2 with a pcl::PointCloud callback, and publish a pcl::PointCloud that is automatically converted back, so you don't have to do the conversion yourself: http://wiki.ros.org/pcl/Overview#Subs...


Thank you for your answer. To answer your first question: yes. Should I just do the cloud accumulation in my second node and only publish after 20 or so rotated clouds are captured in real time with IMU data? The whole tf thing still isn't entirely clear to me, although I have completed all the tutorials.

( 2017-06-09 11:54:05 -0600 )

Either way, your answer sounds like the "correct" way to get it done, so that I can have several independently operating nodes instead of just cramming stuff together; I will investigate your suggestion more.

( 2017-06-09 11:55:31 -0600 )

http://wiki.ros.org/tf/Tutorials/Time... Check Part 2. Is this what you are suggesting?

( 2017-06-09 13:48:00 -0600 )

Cool, I must have overlooked that before

listener.lookupTransform("base_link", now,
                         "base_link", past,
                         "map", transform);


(that is the six-argument "time travel" overload from the tutorial: target frame and time, source frame and time, then the fixed frame "map")

( 2017-06-09 14:01:42 -0600 )

This issue is now resolved. Thank you for your assistance; your suggestions were extremely helpful in reaching the solution.

( 2017-06-12 15:52:18 -0600 )