
Speedup publisher having huge amount of data

asked 2018-11-13 04:23:50 -0500 by ravijoshi

I am trying to publish a point cloud and visualize it inside RViz using C++. The point cloud contains approximately 8,000,000 points.

Below are the details of the environment:

  • Intel® Xeon(R) CPU E5-2640 v2 @ 2.00GHz × 17
  • GeForce GTX 1050 Ti/PCIe/SSE2
  • 32 GB Memory
  • Ubuntu 14.04 LTS 64 Bit OS

I noticed that publishing this point cloud at 5 FPS takes approximately 100 ms. I monitored the rate at which messages are being published using rostopic hz topic and found the following:

average rate: 0.445
    min: 0.595s max: 4.001s std dev: 0.82609s window: 113

After changing the publishing frequency to 1 FPS, below is the output returned by rostopic hz topic:

average rate: 0.552
    min: 0.200s max: 134.727s std dev: 11.34498s window: 204

Now, the command rostopic bw topic returns the following:

average: 230.46MB/s
    mean: 199.07MB min: 199.07MB max: 199.07MB window: 3

Later, I used a nodelet and got slightly better results. rostopic hz topic now returns the following:

average rate: 2.481
    min: 0.265s max: 0.608s std dev: 0.03269s window: 630

rostopic bw topic returns the following:

average: 494.49MB/s
    mean: 199.07MB min: 199.07MB max: 199.07MB window: 100
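
For reference, below is a minimal sketch of what such a nodelet-based publisher can look like (the cloud_nodelet::CloudPublisher name and the 5 Hz timer are illustrative placeholders, not the exact code used):

// Minimal sketch of a nodelet-based point cloud publisher.
// The cloud_nodelet::CloudPublisher name is an illustrative placeholder.
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

namespace cloud_nodelet
{
class CloudPublisher : public nodelet::Nodelet
{
public:
  virtual void onInit()
  {
    ros::NodeHandle& nh = getNodeHandle();
    pub_ = nh.advertise<sensor_msgs::PointCloud2>("points", 1);
    // publish at 5 Hz
    timer_ = nh.createTimer(ros::Duration(0.2), &CloudPublisher::publish, this);
  }

private:
  void publish(const ros::TimerEvent&)
  {
    // Allocate a fresh message and publish it as a shared pointer.
    // Subscribers loaded into the same nodelet manager (with ConstPtr
    // callbacks) receive this pointer directly, without serialisation;
    // the message must not be modified after publishing.
    sensor_msgs::PointCloud2Ptr cloud(new sensor_msgs::PointCloud2);
    cloud->header.frame_id = "base";
    cloud->header.stamp = ros::Time::now();
    // ... fill width/height/fields/data here ...
    pub_.publish(cloud);
  }

  ros::Publisher pub_;
  ros::Timer timer_;
};
}  // namespace cloud_nodelet

PLUGINLIB_EXPORT_CLASS(cloud_nodelet::CloudPublisher, nodelet::Nodelet)

Note that RViz runs as a separate process, so it still receives the cloud over a socket; the zero-copy path only applies between nodelets loaded into the same manager.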

Below is the code snippet:

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

#include <boost/make_shared.hpp>
#include <cstdlib>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "cloud_publisher");
    ros::NodeHandle nh;

    ros::Publisher pub_output = nh.advertise<sensor_msgs::PointCloud2>("points", 1);

    // number of points in the point cloud
    const int n = 8000000;

    // create a point cloud with random data
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    cloud.width = n;
    cloud.height = 1;
    cloud.is_dense = false;
    cloud.points.resize(cloud.width * cloud.height);

    for (size_t i = 0; i < cloud.points.size(); ++i) {
        cloud.points[i].x = 1024 * rand() / (RAND_MAX + 1.0f);
        cloud.points[i].y = 1024 * rand() / (RAND_MAX + 1.0f);
        cloud.points[i].z = 1024 * rand() / (RAND_MAX + 1.0f);
        cloud.points[i].r = rand() % 256;  // random 8-bit colour
        cloud.points[i].g = rand() % 256;
        cloud.points[i].b = rand() % 256;
    }

    // convert to PointCloud2
    sensor_msgs::PointCloud2 temp_cloud;
    pcl::toROSMsg(cloud, temp_cloud);
    // attach a frame
    temp_cloud.header.frame_id = "base";

    boost::shared_ptr<sensor_msgs::PointCloud2> cloud2 =
        boost::make_shared<sensor_msgs::PointCloud2>(temp_cloud);

    ros::Rate loop_rate(5);
    while (ros::ok()) {
        // update the timestamp
        cloud2->header.stamp = ros::Time::now();
        pub_output.publish(cloud2);

        ros::spinOnce();
        loop_rate.sleep();
    }

    return 0;
}

This clearly indicates that, despite having enough resources, ROS is not able to publish this huge amount of data at high speed.

Is there any way to speed up the publisher while simultaneously visualizing the data inside RViz? Please note that the publisher and RViz are running on the same machine.


1 Answer


answered 2018-11-13 04:45:07 -0500

ROS passes messages between different nodes using sockets, even if they're on the same machine. This serialisation -> socket -> deserialisation path appears to be the bottleneck you're hitting. There is no way around this in ROS that I know of.
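
If you want to confirm this, you can time the serialisation step in isolation using roscpp's serialization API. A rough sketch (msg stands in for the ~199 MB PointCloud2 from your question):

#include <ros/serialization.h>
#include <ros/time.h>
#include <sensor_msgs/PointCloud2.h>

#include <boost/shared_array.hpp>

// Rough sketch: measure how long serialising one PointCloud2 takes.
// 'msg' stands in for the ~199 MB cloud from the question.
double time_serialisation(const sensor_msgs::PointCloud2& msg)
{
  namespace ser = ros::serialization;

  const uint32_t serial_size = ser::serializationLength(msg);
  boost::shared_array<uint8_t> buffer(new uint8_t[serial_size]);

  const ros::WallTime start = ros::WallTime::now();
  ser::OStream stream(buffer.get(), serial_size);
  ser::serialize(stream, msg);
  return (ros::WallTime::now() - start).toSec();
}

Deserialisation on the subscriber side costs roughly the same again, and the data additionally has to go through the (loopback) socket, so the per-message overhead adds up quickly at this message size.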

If you're only interested in visualising this point cloud in RVIZ and nothing else, there is one messy option that completely breaks the design ethos of ROS! You could generate and publish this point cloud from an RVIZ plugin; this would mean that the intra-process method of message passing would be used. This method passes shared pointers, so it doesn't involve copying any data and is significantly faster for large messages.
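
To make the shared-pointer part concrete: intra-process zero-copy only happens when the message is published as a shared pointer, the subscriber callback takes a ConstPtr, and both ends live in the same process. A minimal sketch (node and topic names are just examples, not tied to your code):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Subscriber callback takes a ConstPtr: for an intra-process connection this
// should be the very object the publisher created (no serialisation, no copy).
void callback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  ROS_INFO("received cloud %p with %u data bytes",
           (const void*)msg.get(), (unsigned)msg->data.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "zero_copy_demo");
  ros::NodeHandle nh;

  // Publisher and subscriber live in the same process (same node here,
  // or the same nodelet manager / the RViz process in the workaround above).
  ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("points", 1);
  ros::Subscriber sub = nh.subscribe("points", 1, callback);

  ros::Rate rate(1.0);
  while (ros::ok())
  {
    sensor_msgs::PointCloud2Ptr cloud(new sensor_msgs::PointCloud2);
    cloud->header.frame_id = "base";
    cloud->header.stamp = ros::Time::now();
    pub.publish(cloud);  // publish the shared pointer; do not modify it afterwards
    ros::spinOnce();     // dispatches the intra-process callback
    rate.sleep();
  }
  return 0;
}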

Obviously this is horrific from a design point of view, and it will break completely if any node outside of RVIZ subscribes to the topic. But it is the only thing I can think of that may solve your problem; maybe someone who knows more about the inner workings of ROS can suggest a better solution.

The RVIZ plugin tutorials can be found here, if you want to go down this route.


Comments

Thanks for suggesting the workaround of designing an RVIZ plugin. I need to think about it. Anyway, regarding the real issue: the serialization-deserialization seems to be the real culprit. I am wondering, since it is a CPU-intensive job and my workstation has 32 cores with enough memory, what is the issue then?

ravijoshi (2018-11-13 06:56:53 -0500)

The issue is the fact that you have a very large message. From my own experiments (from a few years ago, but the basic (de)serialisation algorithm hasn't changed), (de)serialisation is CPU bound, and then mostly limited by the cache sizes of your CPU. There is a very clear break in the ..

gvdhoorn (2018-11-13 07:22:18 -0500)

.. trend lines of (de)serialisation performance vs message size at the sizes of the L1, L2 and L3 caches. Messages become memory-bandwidth bound at that point.

This is not a ROS problem. It's a generic issue for any process that sends or receives data over a network (or even between multiple ..

gvdhoorn (2018-11-13 07:24:32 -0500)

.. processes on the same machine). (De)serialisation is always needed when crossing memory isolation barriers, whether artificial (ie: OS process isolation) or physical (ie: different machines on a network).

Using a fully in-memory, zero-copy transport can help (as you noticed yourself ..

gvdhoorn (2018-11-13 07:26:10 -0500)

.. when trying out nodelets), but only so much. At some point the data will have to be (de)serialised, and you run into the same issues again.

gvdhoorn (2018-11-13 07:27:03 -0500)

As to writing an RViz plugin for visualising point clouds: that is of course a work-around, but please keep in mind that RViz is not a generic "data type X viewer". It's a debugging tool / programmer's interface that lets you snoop on datastreams in your ROS application.

gvdhoorn (2018-11-13 07:31:47 -0500)

gvdhoorn (2018-11-13 09:14:54 -0500)

Thanks, @gvdhoorn, for explaining the core issue. Also, thank you very much for citing the latest research work; I am having a look at it. If needed, I will ask a separate question later on regarding this research work.

ravijoshi (2018-11-14 07:05:24 -0500)
