
Trouble viewing Lidar data in rviz

asked 2012-12-08 08:46:22 -0500 by metal

updated 2014-01-28 17:14:31 -0500 by ngrennan

Hello everyone,

I am working on a project that requires me to fuse lidar data with data from a stereo camera (a Bumblebee).

The main issue for now is that the lidar data viewed in rviz shows no coordinates along the z-axis. Please have a look at this video for a more detailed picture of my project: . As you can see towards the end of the video, the point clouds are rendered flat on the screen. I want to be able to do 3D rendering like the PR2 does.

Also, I am using a Hokuyo lidar (URG-04LX). Can it do the 3D rendering I expect? I think an on-board IMU may be needed for this. Please share your views. Thank you.



2 Answers


answered 2012-12-08 09:07:03 -0500 by Ryan

Hi Karthik,

What you'll need to do is make sure that the frame that the LIDAR data is broadcast in (set in the Header) is referenced against the world frame with an appropriate transform. This transform needs to incorporate the roll, pitch, and yaw from the powered mounting you've built. I presume that you have a node which controls the LIDAR mount. In there, you could use the tf library (Tutorials) to broadcast a transform which relates the stationary part of the LIDAR mount to the LIDAR itself.

If you use the de-facto naming convention, you would wind up with a /base_link -> /lidar transformation. Then, set rviz to use /base_link as the fixed frame and make sure that /lidar is set as the frame_id in the header of your LIDAR data. rviz will now be able to show the LIDAR scans in 3D. Without the transform, rviz doesn't know anything about how the LIDAR is moving with respect to the world.
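A minimal sketch of such a broadcaster, assuming a tilt-only (pitch) mount and hypothetical frame names and offsets. The quaternion helper mirrors what `tf.transformations.quaternion_from_euler(0, pitch, 0)` would return; the `rospy`/`tf` calls are shown in comments since they need a running ROS system:

```python
import math

def tilt_to_quaternion(pitch):
    """Quaternion (x, y, z, w) for a pure pitch rotation, equivalent to
    tf.transformations.quaternion_from_euler(0, pitch, 0)."""
    return (0.0, math.sin(pitch / 2.0), 0.0, math.cos(pitch / 2.0))

# Inside the mount-control node (sketch; frame names and the 0.1 m offset
# are examples, not your actual geometry):
#
#   import rospy, tf
#   br = tf.TransformBroadcaster()
#   br.sendTransform((0.0, 0.0, 0.1),              # /lidar origin in /base_link
#                    tilt_to_quaternion(tilt_angle),
#                    rospy.Time.now(),
#                    "/lidar",                      # child frame
#                    "/base_link")                  # parent frame
```

Call `sendTransform` every time you read a new tilt angle so the transform in rviz stays current.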



Yes, there is a node which controls the roll, pitch, and yaw of the whole sensor module. The lidar tilt (as you can see) is done by a Dynamixel, and I will read its current position. Likewise, the current positions of the other Dynamixels can be read too. Can this be used for tf publishing?

— metal (2012-12-08 17:44:31 -0500)

Also, I am controlling the sensor head through a joystick (whose commands are sent from a node).

— metal (2012-12-08 18:55:03 -0500)

Yes, you should be able to modify that node to publish a suitable tf.
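If the Dynamixels are AX-12-class servos, the present-position register is a raw tick count that has to be converted to radians before it can feed a transform. A sketch of that conversion, assuming the common 10-bit range of 0–1023 ticks over 300° with the zero angle at mid-scale (check your servo's datasheet, these constants are assumptions):

```python
import math

TICKS_FULL_SCALE = 1023.0       # assumption: AX-12-style 10-bit position register
RANGE_RAD = math.radians(300)   # assumption: 300-degree mechanical range
CENTER_TICKS = 512              # assumption: straight-ahead at mid-scale

def ticks_to_radians(ticks):
    """Convert a raw Dynamixel position reading to a signed angle in radians."""
    return (ticks - CENTER_TICKS) / TICKS_FULL_SCALE * RANGE_RAD
```

The resulting angle is what you would pass into the tf broadcast for the tilt joint.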

— Ryan (2012-12-08 20:06:22 -0500)

I came across this; I think it would be helpful for my issue. Any thoughts?

— metal (2012-12-09 08:43:39 -0500)

answered 2012-12-08 09:08:35 -0500

You probably have the fixed_frame field in rviz set to your laser frame. The laser scans are always 2D in their own frame. You will need to publish a transform from some fixed frame (for example, the base of the arm) to the laser frame. The transform can be computed using the forward kinematics of the arm. There might be a package that already does that, depending on what arm you're using.
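To illustrate why the transform matters: a point from the planar scan only acquires a z-component once the tilt rotation is applied. A sketch of the underlying math, assuming a pure pitch rotation about the y-axis (the sign convention here is one common choice, not necessarily your mount's):

```python
import math

def scan_point_in_base(x, y, pitch):
    """Rotate a point (x, y, 0) from the laser's scan plane into the base
    frame by a pitch angle about the y-axis. With zero pitch the point
    stays in the plane; a non-zero pitch gives it a z-component."""
    return (x * math.cos(pitch), y, -x * math.sin(pitch))
```

This per-point rotation is exactly what tf performs for you once the transform is published, so you never need to write it by hand in practice.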



