
Custom-generated PointCloud2 is displayed as a line in RViz

asked 2022-08-29 09:06:47 -0500 by HossamAlzomor

updated 2022-08-30 00:36:24 -0500

I am new to ROS. We are developing a LiDAR camera, and I am trying to interface our LiDAR camera with ROS. I am using ROS 1 Noetic on Windows 10. I wrote a Python script to convert the depth image generated by our camera to the PointCloud2 format and then publish the resulting PointCloud2 message.

When I try to display the PointCloud2 in RViz, the point cloud is displayed as a line on the screen: Snapshot of PointCloud2

Here is a single PointCloud2 sample:

header: 
  seq: 8
  stamp: 
    secs: 1661780280
    nsecs: 110538482
  frame_id: "LiDAR1"
height: 80
width: 260
fields: 
  - 
    name: "x"
    offset: 0
    datatype: 7
    count: 20800
  - 
    name: "y"
    offset: 0
    datatype: 7
    count: 20800
  - 
    name: "z"
    offset: 0
    datatype: 7
    count: 20800
is_bigendian: True
point_step: 12
row_step: 3120
data: [130, 115, 134, 191, 118, 171, 231, 190, 23, 217, 78, 64, 103 ....]
is_dense: True
---

I have since corrected the offsets to 4 for y and 8 for z.

I generate the point cloud by converting my depth image using the camera parameters, applying the following code, and then converting the result to a string:

    # cx: shift of the camera center from the sensor center in the x direction, in pixels
    # cy: shift of the camera center from the sensor center in the y direction, in pixels
    # fx = fy: focal length in meters
    # rows: number of rows per frame
    # cols: number of columns per frame

    c, r = np.meshgrid(np.arange(-cols/2, cols/2), np.arange(-rows/2, rows/2), sparse=True)
    z = np.float32(depth / 1000)
    x = np.float32(z * (c - cx) * pixel_width / fx)
    y = np.float32(z * (r - cy) * pixel_height / fy)
    ptcloud = np.dstack((x, y, z)).tostring()
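
A quick way to sanity-check the buffer against the declared message layout (a minimal sketch using the variables above; not part of the original script):

    # Three float32 fields (x, y, z) per point -> point_step = 12 bytes
    point_step = 3 * 4
    row_step = cols * point_step            # 260 * 12 = 3120, as reported in the message
    assert len(ptcloud) == rows * row_step  # 80 * 3120 = 249600 bytes expected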

My question is: where is my mistake? Is it in the PointCloud2 format, in the data, or in the RViz configuration?


Comments

  1. > I corrected the offset to be 4 for Y and 8 for z. But you have reported all-zero offsets. How did you correct it?
  2. ptcloud = np.dstack((x, y, z)).tostring(). What is tostring() doing here?
  3. > convert the depth image generated from our camera to PointCloud2. Why don't you use depth_image_proc?
ravijoshi (2022-08-30 02:29:22 -0500)

1. The reported point cloud was captured before the correction; when I echo the PointCloud2 message now, I get offset 4 for y and 8 for z.
2. I use tostring() to convert the numpy 32-bit values into a sequence of little-endian 8-bit bytes (see the sketch below).
3. I tried it, but I am facing some problems with it as well.

HossamAlzomor (2022-08-31 00:10:01 -0500)
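
To illustrate what that byte conversion produces (a standalone sketch, not code from the thread):

    import numpy as np

    # One point (x, y, z) as 32-bit floats.
    pt = np.array([1.0, 2.0, 3.0], dtype=np.float32)

    # tostring() is a deprecated alias of tobytes(): both return the raw bytes
    # of the array, which are little-endian on a little-endian machine.
    raw = pt.tobytes()
    print(len(raw))       # 12 bytes = 3 floats * 4 bytes
    print(list(raw[:4]))  # [0, 0, 128, 63] -> float32 1.0 in little-endian order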

I suspect the way you are composing the point cloud. BTW, I have used the depth_image_proc package, and it works smoothly. Would you please share the problems you are facing with it? Then you can open a new question.

ravijoshi (2022-08-31 02:24:14 -0500)

2 Answers


answered 2022-08-30 16:38:48 -0500 by Mike Scheutzow

If depth_image_proc does not work for your use case, you can create a PointCloud2 yourself. You need to define the fields that your new PointCloud2 will have, create a list of objects (individual points) with corresponding values, then convert the raw data into a message.

There is a python example here: https://gist.github.com/lucasw/ea04dc...

You should use the library function point_cloud2.create_cloud(header, fields, points) to create the actual message.
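
A minimal sketch of that approach (the field layout and frame_id here are assumptions based on the question, not the asker's actual code):

    import rospy
    from std_msgs.msg import Header
    from sensor_msgs.msg import PointField
    from sensor_msgs import point_cloud2

    # One float32 per field at offsets 0 / 4 / 8 -> point_step = 12 bytes.
    fields = [
        PointField('x', 0, PointField.FLOAT32, 1),
        PointField('y', 4, PointField.FLOAT32, 1),
        PointField('z', 8, PointField.FLOAT32, 1),
    ]

    header = Header()
    header.stamp = rospy.Time.now()   # requires an initialized node
    header.frame_id = 'LiDAR1'        # frame_id used in the question

    # One (x, y, z) tuple per point; create_cloud takes care of packing the
    # bytes and filling in point_step, row_step and endianness.
    points = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.0, 0.1, 1.0)]
    cloud_msg = point_cloud2.create_cloud(header, fields, points)

Because create_cloud derives point_step and row_step from the field definitions, the count/offset inconsistencies shown in the original message cannot occur.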


Comments

I am not using depth_image_proc. I am following http://docs.ros.org/en/api/sensor_msg... and creating all the fields, but it seems that the formatting of the points is not correct. I'll check the example mentioned in the link.

HossamAlzomor (2022-08-31 00:16:04 -0500)

answered 2022-09-01 08:24:28 -0500 by HossamAlzomor

updated 2022-09-01 08:24:57 -0500

Comparing my code to the link posted by @Mike Scheutzow above and to other links, I found the mistake. Here is the corrected code:

    c, r = np.meshgrid(np.arange(-cols/2, cols/2), np.arange(-rows/2, rows/2), sparse=True)
    z = np.float32(depth)  # depth/1000 ==> float point cloud represents points in meters
    x = np.float32(z * (c - cx) * pixel_width / fx)
    y = np.float32(z * (r - cy) * pixel_height / fy)
    z = z.reshape(1, -1)
    x = x.reshape(1, -1)
    y = y.reshape(1, -1)
    brightness = brightness.reshape(1, -1)
    ptcloud = np.dstack([x, y, z, np.float32(brightness)])
    return ptcloud.reshape(-1, 4).tolist()

and changed the count in msg.fields to 1 instead of 20800:

    msg.fields = [
        PointField('x', 0, PointField.FLOAT32, 1),
        PointField('y', 4, PointField.FLOAT32, 1),
        PointField('z', 8, PointField.FLOAT32, 1),
        PointField('intensity', 12, PointField.FLOAT32, 1)]

and created the PointCloud2 using:

    pc2 = point_cloud2.create_cloud(msg.header, msg.fields, data)
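
For completeness, a sketch of how this could be wired into a publisher (the node and topic names here are placeholders I chose, not from the original post):

    import rospy
    from sensor_msgs.msg import PointCloud2

    rospy.init_node('lidar_cloud_publisher')                    # placeholder node name
    pub = rospy.Publisher('points', PointCloud2, queue_size=1)  # placeholder topic name

    pc2.header.stamp = rospy.Time.now()
    pc2.header.frame_id = 'LiDAR1'  # must be resolvable from the Fixed Frame set in RViz
    pub.publish(pc2)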

(screenshot of the result attached in the original answer)

