
jonfink's profile - activity

2018-01-30 04:26:06 -0600 received badge  Notable Question (source)
2016-11-08 13:50:53 -0600 received badge  Enthusiast
2016-10-27 09:15:34 -0600 commented question Kinetic Kame build very slow

That might be a good idea - I'm a little worried this isn't a ROS issue, per se, but a compiler issue (of sorts). Clang goes from version 3.4 to 3.8 (Trusty -> Xenial) and GCC goes from 4.8 to 5.

Though I am surprised more people haven't noticed similar effects...

2016-10-27 08:50:53 -0600 commented question Kinetic Kame build very slow

Yes, sorry (finding the character limit here quite burdensome...)

Builds are happening in Docker containers on the same system, with the same source code. I should note that the above results were with the default gcc. Switching to clang, there is a consistent factor-of-2 slowdown from Jade -> Kinetic.

2016-10-27 08:21:59 -0600 commented question Kinetic Kame build very slow

Trying to standardize my tests, costmap_2d from source (+ map_server, voxel_grid)

  1. Trusty + Jade (catkin-tools v0.4.2, catkin v0.6.18)
  2. Xenial + Kinetic (catkin-tools v0.4.2, catkin v0.7.4)

  1 + catkin build = 33s
  2 + catkin build = 2m57s
  1 + catkin_make = 37s
  2 + catkin_make = (seems to stall)

2016-10-27 05:59:09 -0600 commented question Kinetic Kame build very slow

Catkin tools for me. Same version of catkin tools on both systems. Going to try with 'catkin_make' today.

2016-10-26 14:57:17 -0600 commented question Kinetic Kame build very slow

After some further testing, I should add that I am comparing the performance of (Ubuntu 16.04 + ROS Kinetic + Clang-3.8) with (Ubuntu 14.04 + ROS Jade + Clang-3.4) and seeing an individual package build time go from 40 seconds with Jade/etc to 65 seconds with Kinetic/etc.

2016-10-26 12:40:10 -0600 received badge  Supporter (source)
2016-10-26 12:40:07 -0600 commented question Kinetic Kame build very slow

I've just noticed the same thing! My first guess is that there are a lot of new warnings when building against ROS Kinetic...

2016-09-07 16:23:44 -0600 received badge  Popular Question (source)
2016-07-27 11:38:50 -0600 received badge  Teacher (source)
2016-07-27 11:38:50 -0600 received badge  Self-Learner (source)
2016-07-27 10:18:56 -0600 received badge  Scholar (source)
2016-07-27 09:33:45 -0600 answered a question Problems launching many nodes on OSX

Ok, almost 24 hours later and I have a solution (but not quite a full explanation).

It comes down to "something" that changed in the OSX build of Python 2.7 on El Capitan with respect to handling sockets. The ROS ThreadingXMLRPCServer Python class inherits from SimpleXMLRPCServer, which in turn inherits from the SocketServer.TCPServer class. The TCPServer class sets a default parameter request_queue_size=5, which controls how large the incoming connection queue is allowed to grow (that is, the queue of _new_ connections - this value gets passed down to the socket.listen() function).

I don't know what changed in OSX 10.11, maybe the rate at which the kernel allows connections or maybe how socket.listen() is implemented. Either way, I found an easy fix to be changing the ROS ThreadingXMLRPCServer class:

class ThreadingXMLRPCServer(socketserver.ThreadingMixIn, SimpleXMLRPCServer):
    """
    Adds ThreadingMixin to SimpleXMLRPCServer to support multiple concurrent
    requests via threading. Also makes logging toggleable.
    """
    def __init__(self, addr, log_requests=1):
        """
        Overrides SimpleXMLRPCServer to set option to allow_reuse_address.
        """
        # allow_reuse_address defaults to False in Python 2.4.  We set it
        # to True to allow quick restart on the same port.  This is equivalent
        # to calling setsockopt(SOL_SOCKET,SO_REUSEADDR,1)
        self.allow_reuse_address = True
        self.request_queue_size = 128
        # (remainder of __init__ unchanged)

This increases the request_queue_size variable. I'll open an issue/pull request against the ros_comm repo on GitHub.
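As a sanity check, the default backlog that TCPServer hands to socket.listen() can be inspected directly. A minimal sketch, using Python 3's socketserver (the renamed SocketServer module); the BigQueueServer name is just illustrative:

```python
import socketserver

# request_queue_size is the class attribute that TCPServer passes to
# socket.listen() as the backlog of pending, not-yet-accepted connections.
print(socketserver.TCPServer.request_queue_size)  # default is 5

class BigQueueServer(socketserver.TCPServer):
    # Raising the backlog lets a burst of clients (e.g. 100 nodes all
    # contacting the master at startup) queue instead of being dropped.
    request_queue_size = 128

print(BigQueueServer.request_queue_size)
```

Overriding the class attribute in a subclass is equivalent in effect to setting it in __init__ before the base constructor binds and listens on the socket.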

2016-07-26 11:53:28 -0600 received badge  Student (source)
2016-07-26 10:38:20 -0600 received badge  Organizer (source)
2016-07-26 10:36:43 -0600 asked a question Problems launching many nodes on OSX

I recently did a fresh install of ROS-jade on OSX El Capitan and for the most part everything is working well. However, I have noticed that when launching large launch files (e.g., 100 nodes), there seems to be problems accessing the ROS master (registering pub/sub, reading params, etc).

In an effort to test things, I wrote a simple test node:

#include <ros/ros.h>
#include <std_msgs/Float64.h>

void onFloatSub(const std_msgs::Float64::ConstPtr& msg)
{
  // No-op: the subscription exists only to exercise master registration.
}

int main(int argc, char *argv[])
{
  ros::init(argc, argv, "test_node");

  ros::NodeHandle pnh("~");

  ros::Subscriber sub = pnh.subscribe<std_msgs::Float64>("float_sub", 1, &onFloatSub);
  ros::Publisher pub = pnh.advertise<std_msgs::Float64>("float_pub", 1);

  ros::Rate r(10.0);
  while(ros::ok()) {

    std::string test_param_value;
    if(!pnh.getParam("test_param", test_param_value)) {
      ROS_ERROR("Unable to read local 'test_param'");
    }

    if(test_param_value != std::string("test_param_value_set"))
      ROS_ERROR("Incorrect test_param_value!");

    ros::spinOnce();
    r.sleep();
  }

  return 0;
}

And ran 100 versions of it in a launch file

<node pkg="test_many_node" type="test_many_node" name="test_node_1" output="screen">
    <param name="test_param" value="test_param_value_set"/>
</node>

(with nodes 1 through 100).
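Writing 100 near-identical blocks by hand is tedious; a small sketch can generate the launch file instead (pkg/type names are taken from the snippet above, everything else is illustrative):

```python
# Generate <node> entries test_node_1 .. test_node_100 for a launch file.
# pkg/type names match the snippet above; adjust for your own package.
entries = "\n".join(
    '  <node pkg="test_many_node" type="test_many_node" '
    'name="test_node_{0}" output="screen">\n'
    '    <param name="test_param" value="test_param_value_set"/>\n'
    '  </node>'.format(i)
    for i in range(1, 101)
)
launch = "<launch>\n" + entries + "\n</launch>"
print(launch.count("<node "))  # number of generated node entries -> 100
```

Redirecting the output to a .launch file gives the same structure as the hand-written version.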

Some nodes seem to run ok, but then I see errors like these (which are similar to what I saw in more complicated, real-world examples):

process[test_node_53-54]: started with pid [25907]
process[test_node_54-55]: started with pid [25908]
process[test_node_55-56]: started with pid [25909]
process[test_node_56-57]: started with pid [25910]
[ERROR] ros.test_many_node> Unable to read local 'test_param'
[ERROR] ros.test_many_node> Incorrect test_param_value!
[ERROR] ros.test_many_node> Unable to read local 'test_param'
[ERROR] ros.test_many_node> Incorrect test_param_value!
[ERROR] ros.roscpp> [registerService] Failed to contact master at [localhost:11311].  Retrying...

I don't think that I experienced this problem before upgrading to El Capitan - has anyone else seen similar problems or have thoughts on things to try? I did a quick check on an Ubuntu machine (in a Docker container) and everything worked fine...

2015-11-25 10:03:11 -0600 commented answer Rosbridge only one way on my OSX

Just posted something about this on the rosbridge_suite issues page. Turns out (for me) that the fix is to pip install pymongo rather than bson. Same module name, different API.