
pointcloud_to_laserscan indigo setup problem

asked 2015-05-27 04:05:18 -0500

Willi gravatar image


I am running ROS Indigo on Ubuntu 14.04. My setup uses one Asus Xtion Pro with OpenNI2. I am trying to use the sample_node.launch file that ships with the pointcloud_to_laserscan package, with the PointCloud2 input set to "/camera/depth_registered/points" and the LaserScan output set to "/camera/scan".

In RViz, the LaserScan output does not follow the outline of the point cloud; it shows a seemingly random, irrelevant scan instead.

I tried reading the debug messages in rqt, and they are all about points being out of range/height; accordingly, "rostopic echo" shows lots of "inf" values.

I suspect the problem is related to tf, but I have no clue how to set it up correctly. Below is the launch file I use. Any help would be appreciated.


<launch>

    <arg name="camera" default="camera" />

    <!-- start sensor -->
    <include file="$(find openni2_launch)/launch/openni2.launch">
        <arg name="camera" value="$(arg camera)"/>
    </include>

    <!-- run pointcloud_to_laserscan node -->
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">

        <remap from="cloud_in" to="$(arg camera)/depth_registered/points"/>
        <remap from="scan" to="$(arg camera)/scan"/>
        <rosparam>
            #target_frame: # Leave disabled to output scan in pointcloud frame
            transform_tolerance: 0.01
            min_height: 0.0
            max_height: 1.0

            angle_min: -1.5708 # -M_PI/2
            angle_max: 1.5708 # M_PI/2
            angle_increment: 0.087 # ~M_PI/36, about 5 degrees per beam
            scan_time: 0.3333
            range_min: 0.45
            range_max: 4.0
            use_inf: true
            concurrency_level: 0
        </rosparam>
    </node>

</launch>

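If the conversion turns out to need a transform (for example once target_frame is enabled), a static publisher between the robot base frame and the camera frame can be added to the launch file. The frame names and offsets below are assumptions for illustration, not values from this setup:

```xml
<!-- Hypothetical static transform: base_link -> camera_link, camera mounted
     0.1 m forward and 0.3 m up, no rotation.
     args are: x y z yaw pitch roll frame_id child_frame_id period_in_ms -->
<node pkg="tf" type="static_transform_publisher" name="base_to_camera"
      args="0.1 0 0.3 0 0 0 base_link camera_link 100" />
```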




Yeah, you can try depthimage_to_laserscan and post here whether it works or not. Also, are you using rgbd_launch along with openni2 to generate the point cloud? The openni2 launch file refers to rgbd_launch.

Raman gravatar image Raman  ( 2015-05-27 06:37:51 -0500 )edit

depthimage_to_laserscan works, but I will ultimately be using a merged point cloud from 2 cameras, so that package isn't useful. rgbd_launch is launched indirectly from within openni2.launch, which is launched by my launcher.

Willi gravatar image Willi  ( 2015-05-27 07:32:34 -0500 )edit

3 Answers


answered 2015-06-08 07:41:45 -0500

paulbovbel gravatar image

updated 2015-06-08 07:47:27 -0500

I have a feeling this is because the output frame of the pointcloud is an _optical frame, which is a little funky (optical frames use z-forward, x-right, y-down per REP 103, rather than the x-forward convention a laserscan expects).

I should really change the sample launch file to convert to camera_link, which is ENU oriented, so that the laserscan conversions work correctly.

Either way, you can also compile from source and turn on debug messages to see specifically why your points are being filtered out.
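Following that suggestion, a minimal sketch of the node configuration with the target frame enabled (assuming the default openni2_launch frame and topic names) would look like this:

```xml
<!-- Sketch: transform the cloud into camera_link (x-forward) before slicing
     it into a scan; the tf between the optical frame and camera_link is
     published by openni2_launch -->
<node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
    <remap from="cloud_in" to="camera/depth_registered/points"/>
    <remap from="scan" to="camera/scan"/>
    <rosparam>
        target_frame: camera_link
        min_height: 0.0
        max_height: 1.0
    </rosparam>
</node>
```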


answered 2016-08-01 21:27:25 -0500

chengwei gravatar image

Hi Willi, I am a ROS beginner, and now I want to convert a pointcloud to a laserscan too. I followed this page and changed "target_frame: camera_link", but only a small number of laser scan points transformed from the cloud show up in RViz. I don't know why. Can you give me some advice? Best regards!



Actually, I could visualize the laserscan points in RViz after modifying the "angle_increment" parameter. Thanks!

chengwei gravatar image chengwei  ( 2016-08-01 21:54:04 -0500 )edit

answered 2015-05-27 06:11:26 -0500

KDROS gravatar image

The flow is correct. I assume you are getting a proper point cloud from the Asus Xtion. Try the same flow with the depthimage_to_laserscan package.

If you're trying to create a virtual laserscan from your RGBD device and your sensor is forward-facing, you'll find depthimage_to_laserscan much more straightforward and efficient, since it operates on image data instead of bulky pointclouds.
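A minimal depthimage_to_laserscan launch sketch, assuming the default openni2_launch topic and frame names (adjust them to your setup):

```xml
<!-- Sketch: convert the depth image directly to a scan; this subscribes to
     the image topic, so no pointcloud is needed -->
<node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan">
    <remap from="image" to="camera/depth/image_raw"/>
    <param name="output_frame_id" value="camera_depth_frame"/>
</node>
```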



At the moment I am only trying to get one camera to work with laser scan. My goal is a laser scan built from two merged point clouds from two Asus cameras (the merging of the 2 clouds is already done). But I can't seem to get a proper laser scan output, even from one camera.

Willi gravatar image Willi  ( 2015-05-27 06:47:36 -0500 )edit

The Asus/Kinect just converts its FOV to a pointcloud using OpenNI/OpenNI2/freenect. If you provide proper /tf (odometry) along with these pointclouds, you will get good laserscan data. How are you merging your pointclouds: using KinFu or PCL?

KDROS gravatar image KDROS  ( 2015-05-28 00:46:02 -0500 )edit

How are you publishing the whole .ply file (merged point cloud)? You have to write a proper publisher for your generated merged point cloud, or publish it on the fly.

KDROS gravatar image KDROS  ( 2015-05-28 00:48:26 -0500 )edit

The merged clouds are published correctly. But first I am trying to get one camera's laserscan to work with this package. I have noticed that this problem only occurs on my Indigo install, not on Hydro.

Willi gravatar image Willi  ( 2015-05-28 04:02:59 -0500 )edit

I requested access, or you can mail me. Are you using ply_publisher? It was working fine for me. Check the topic and the remapping when using pointcloud_to_laserscan.

KDROS gravatar image KDROS  ( 2015-05-28 04:25:06 -0500 )edit

Nope, not using ply_publisher. The topic input into pointcloud_to_laserscan seems to be a valid pointcloud. The topic output is mainly "inf", plus a bunch of numbers when the camera is tilted.

Willi gravatar image Willi  ( 2015-05-29 02:01:29 -0500 )edit
