2018-01-30 04:26:06 -0500 | received badge | ● Notable Question (source) |
2016-11-08 13:50:53 -0500 | received badge | ● Enthusiast |
2016-10-27 09:15:34 -0500 | commented question | Kinetic Kame build very slow That might be a good idea - I'm a little worried this isn't a ROS issue per se, but a compiler issue (of sorts): Clang goes from version 3.4 to 3.8 (Trusty -> Xenial) and GCC from 4.8 to 5. Though I am surprised more people haven't noticed similar effects... |
2016-10-27 08:50:53 -0500 | commented question | Kinetic Kame build very slow Yes, sorry (finding the character limit here quite burdensome...) Builds are happening in docker containers on the same system, same source code. I should note that the above results were with default gcc. Switching to clang, there is a consistent factor-of-2 slowdown from Jade to Kinetic. |
2016-10-27 08:21:59 -0500 | commented question | Kinetic Kame build very slow Trying to standardize my tests: building costmap_2d from source (+ map_server, voxel_grid). System 1 + catkin build = 33s; system 2 + catkin build = 2m57s; system 1 + catkin_make = 37s; system 2 + catkin_make = (seems to stall) |
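The timing numbers above can be reproduced with a simple wall-clock wrapper. This is a hypothetical sketch, not the commenter's actual harness: `build_cmd` here is a placeholder (`sleep 1`), and in a real test it would be replaced with the relevant build invocation, e.g. `catkin build costmap_2d` after a `catkin clean -y` so both systems start from a clean workspace.

```shell
#!/bin/sh
# Hypothetical timing wrapper for comparing build times across systems.
# Substitute the real command, e.g. build_cmd="catkin build costmap_2d"
# (run "catkin clean -y" first so each run starts from a clean workspace).
build_cmd="sleep 1"                  # placeholder stand-in for the build
start=$(date +%s)
sh -c "$build_cmd"
end=$(date +%s)
echo "build took $((end - start))s"
```

Running the same wrapper in each docker container keeps the measurement method identical on both sides of the comparison.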
2016-10-27 05:59:09 -0500 | commented question | Kinetic Kame build very slow Catkin tools for me. Same version of catkin tools on both systems. Going to try with 'catkin_make' today. |
2016-10-26 14:57:17 -0500 | commented question | Kinetic Kame build very slow After some further testing, I should add that I am comparing the performance of (Ubuntu 16.04 + ROS Kinetic + Clang-3.8) with (Ubuntu 14.04 + ROS Jade + Clang-3.4) and seeing an individual package build time go from 40 seconds with Jade/etc to 65 seconds with Kinetic/etc. |
2016-10-26 12:40:10 -0500 | received badge | ● Supporter (source) |
2016-10-26 12:40:07 -0500 | commented question | Kinetic Kame build very slow I've just noticed the same thing! My first guess is that there are a lot of new warnings when building against ROS Kinetic... |
2016-09-07 16:23:44 -0500 | received badge | ● Popular Question (source) |
2016-07-27 11:38:50 -0500 | received badge | ● Teacher (source) |
2016-07-27 11:38:50 -0500 | received badge | ● Self-Learner (source) |
2016-07-27 10:18:56 -0500 | received badge | ● Scholar (source) |
2016-07-27 09:33:45 -0500 | answered a question | Problems launching many nodes on OSX Ok, almost 24 hours later and I have a solution (but not quite a full explanation). It comes down to "something" that changed in the OSX build of Python 2.7 on El Capitan with respect to handling sockets. The ROS... I don't know what changed in OSX 10.11, maybe the rate at which the kernel allows connections or maybe how... To increase the... |
2016-07-26 11:53:28 -0500 | received badge | ● Student (source) |
2016-07-26 10:38:20 -0500 | received badge | ● Organizer (source) |
2016-07-26 10:36:43 -0500 | asked a question | Problems launching many nodes on OSX I recently did a fresh install of ROS-jade on OSX El Capitan and for the most part everything is working well. However, I have noticed that when launching large launch files (e.g., 100 nodes), there seem to be problems accessing the ROS master (registering pub/sub, reading params, etc). In an effort to test things, I wrote a simple test node, and ran 100 copies of it in a launch file (with nodes numbered 1 through 100). Some nodes seem to run ok, but then I see errors like these (which are similar to what I saw in more complicated, real-world examples). I don't think I experienced this problem before upgrading to El Capitan - has anyone else seen similar problems, or have thoughts on things to try? I did a quick check on an Ubuntu machine (in a Docker container) and everything worked fine... |
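The test node and launch file themselves were stripped from this activity feed. As a hedged sketch of the setup described ("100 copies of it in a launch file, with nodes numbered 1 through 100"), roslaunch XML has no loop construct, so such a file is typically generated. The package and script names below (`my_tests`, `test_node.py`) are placeholders, not from the original post:

```python
# Hypothetical generator for a roslaunch stress-test file: n copies of a
# trivial node, mirroring the "nodes 1 through 100" setup in the question.
# Package name "my_tests" and script "test_node.py" are placeholders.

def make_launch(n_nodes=100):
    lines = ["<launch>"]
    for i in range(1, n_nodes + 1):
        lines.append(
            '  <node pkg="my_tests" type="test_node.py" '
            'name="test_node_{0}" output="screen"/>'.format(i)
        )
    lines.append("</launch>")
    return "\n".join(lines)

if __name__ == "__main__":
    # Write the generated file, then run it with: roslaunch stress_test.launch
    with open("stress_test.launch", "w") as f:
        f.write(make_launch())
```

Each node gets a unique `name` attribute, since roslaunch requires node names within a launch file to be distinct.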
2015-11-25 10:03:11 -0500 | commented answer | Rosbridge only one way on my OSX Just posted something about this on the rosbridge_suite issues page. Turns out (for me) that the fix is to pip install pymongo rather than bson. Same module name, different API. |