
ipso's profile - activity

2020-12-16 12:59:18 -0500 received badge  Great Answer (source)
2020-12-16 12:59:18 -0500 received badge  Guru (source)
2018-12-10 08:37:18 -0500 received badge  Great Question (source)
2018-09-10 23:26:05 -0500 received badge  Nice Answer (source)
2018-03-15 04:03:32 -0500 received badge  Good Question (source)
2018-02-22 21:14:50 -0500 received badge  Great Question (source)
2017-08-25 07:49:53 -0500 received badge  Good Question (source)
2017-02-24 08:56:36 -0500 received badge  Good Question (source)
2016-07-06 15:38:11 -0500 received badge  Good Answer (source)
2016-02-14 16:15:31 -0500 received badge  Good Question (source)
2015-05-31 06:01:18 -0500 received badge  Self-Learner (source)
2015-03-23 13:18:30 -0500 commented question Quaternion to Radians

Have you tried Google? Something like "ros quaternion to euler"? We do expect a little effort of your own here ..

2014-08-21 05:29:16 -0500 received badge  Good Question (source)
2014-07-28 11:21:41 -0500 received badge  Nice Question (source)
2014-06-10 15:30:21 -0500 answered a question How to get the number of subscribers on a topic?

uint32_t ros::Publisher::getNumSubscribers() const

Returns the number of subscribers that are currently connected to this Publisher.

2014-01-29 03:14:54 -0500 received badge  Famous Question (source)
2014-01-29 03:13:30 -0500 commented answer Communicating between ROS and VB

@daveayyyy: I haven't used this myself. @mstacho needed some options, so I listed some.

2014-01-28 17:30:17 -0500 marked best answer What determines size of interactive markers in RViz?

As per the title: how does the size of interactive markers get calculated in RViz? In particular the markers for revolute joints and the large interactive marker in the planning_components_visualiser or planning_scene_warehouse_viewer instances of RViz.

Some of my robots get large diameter markers (for their joints), while others seem to have significantly smaller markers. I haven't been able to find any real pattern to this, or relation to something like min/max joint angles or speeds, but my sampling space could just be too small.

2014-01-28 17:27:01 -0500 marked best answer Why does 'top' show three roslaunch processes?

Whenever I use roslaunch to launch a launchfile (ha :)), the output of top (or htop) shows three roslaunch processes. One that is the parent of all nodes, and two more that have no children.

I could probably try to read the source of roslaunch, but does anyone here have any idea what those extra two are doing there? I've never seen more than two of them, so their number does not seem related to the number of nodes in the graph.

2014-01-28 17:26:47 -0500 marked best answer How to avoid/manage process/thread starvation?

Short version: how to keep one (cpu bound) node from starving all other nodes without explicit sleep()?


I have a system in which one node (in particular) is a cpu hog and also publishes a lot of messages. This node is a source node, with the rest of the nodes waiting and only acting on messages they receive (from the cpu hog, or other nodes).

What I'd like to do is give this source node a lower priority, so that it only gets the cpu if the other nodes are idle (or at least, not very busy). The idea is that the source will still publish ('as fast as possible'), but without getting in the way of the processing nodes.

AFAIK, nice should do this on Linux, and can be used as a launch-prefix to run a node at a specific nice level, but it "doesn't seem to work" in my case. I've verified the nice level with ps axl | grep -i nodename, and it is at 19 (with PRIO at 39), but the other nodes (at NI 0 and PRIO 20) are still starved of CPU and are losing many messages (note that higher PRIO values are less likely to be given CPU time).

I'm wondering if nice is the proper way to achieve the described behaviour, or should I use something else?

PS: there is no sleep(..) anywhere in the source node; it's essentially an infinite 'busy' while loop. Sleeping 'solves' the described issue, but there is no explicit need for periodicity, and it also seems to incur overhead (i.e. fewer messages published than possible). For obvious reasons a request-reply (or even a pub-sub acknowledgement) scheme is undesirable.

2014-01-28 17:26:45 -0500 marked best answer Nodelets: pure virtual method called, process died (-6)

I get the error in the question title every time one of my Nodelet classes calls ros::requestShutdown(). I've tried debugging it in gdb, but besides the fact that the stack trace is five pages (!) long, it doesn't tell me much, apart from a lot of references to boost::shared_ptr and checked_delete.

Is calling ros::requestShutdown() from a Nodelet unsupported or is this a bug somewhere?

The problem is that the crash prevents my Nodelet destructors from being called, which makes it hard to do proper cleanup of resources.

Breaking with Ctrl+C does cleanly shut down everything and calls the destructors.


Edit: @Lorenz, you're right, I should've included more info.

System: Ubuntu Lucid (10.04.3), ROS Electric from debs.

The class that calls ros::requestShutdown() does so in a callback, whether that callback is called by a subscription or as a service doesn't matter.

On second thought it seems like the Bond dtor (frame 10) tries to stop/delete some timer (frame 9) which seems to try to use the ros::TimerManager (frame 7) which might have already been stopped/destructed.

Stacktrace:

pure virtual method called
terminate called without an active exception

Program received signal SIGABRT, Aborted.
0xb7fe2424 in __kernel_vsyscall ()
(gdb) bt
#0  0xb7fe2424 in __kernel_vsyscall ()
#1  0xb785d651 in *__GI_raise (sig=6)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#2  0xb7860a82 in *__GI_abort () at abort.c:92
#3  0xb7aaa52f in __gnu_cxx::__verbose_terminate_handler() ()
   from /usr/lib/libstdc++.so.6
#4  0xb7aa8465 in ?? () from /usr/lib/libstdc++.so.6
#5  0xb7aa84a2 in std::terminate() () from /usr/lib/libstdc++.so.6
#6  0xb7aa9155 in __cxa_pure_virtual () from /usr/lib/libstdc++.so.6
#7  0xb7e883fa in ros::TimerManager<ros::WallTime, ros::WallDuration, ros::WallTimerEvent>::remove(int) ()
   from /opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/lib/libros.so
#8  0xb7ed9779 in ros::WallTimer::Impl::stop (this=0x8076730)
    at /tmp/buildd/ros-electric-ros-comm-1.6.7/debian/ros-electric-ros-comm/opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/src/libros/wall_timer.cpp:64
#9  0xb7ed9813 in ros::WallTimer::stop (this=0x806f690)
    at /tmp/buildd/ros-electric-ros-comm-1.6.7/debian/ros-electric-ros-comm/opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/src/libros/wall_timer.cpp:123
#10 0xb7f30cc3 in ~Bond (this=0x806f420, __in_chrg=<value optimised out>)
    at /tmp/buildd/ros-electric-bond-core-1.6.1/debian/ros-electric-bond-core/opt/ros/electric/stacks/bond_core/bondcpp/src/bond.cpp:91
#11 0x0804f019 in checked_delete<bond::Bond> (this=0x0)
    at /usr/include/boost/checked_delete.hpp:34
#12 boost::detail::sp_counted_impl_p<bond::Bond>::dispose (this=0x0)
    at /usr/include/boost/smart_ptr/detail/sp_counted_impl.hpp:78
#13 0xb7b17779 in boost::detail::sp_counted_base::release (this=0x80791d0, 
    __in_chrg=<value optimised out>)
    at /usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp:145
#14 ~shared_count (this=0x80791d0, __in_chrg=<value optimised out>)
    at /usr/include/boost/smart_ptr/detail/shared_count.hpp:217
#15 ~shared_ptr (this=0x80791d0, __in_chrg=<value optimised out>)
    at /usr/include/boost/smart_ptr/shared_ptr.hpp:169
#16 boost::shared_ptr<bond::Bond>::reset (this=0x80791d0, 
    __in_chrg=<value optimised out>)
    at /usr/include/boost/smart_ptr/shared_ptr.hpp:386
#17 ~Nodelet (this=0x80791d0, __in_chrg=<value optimised ...
(more)
2014-01-28 17:26:23 -0500 marked best answer Calling ros::spinOnce() in nodelets?

Is it possible to integrate a while(true){ros::spinOnce(); ..} loop into a nodelet? I've based mine on the velodyne_driver nodelet, which runs a separate thread which polls the device object.

That while loop never sleeps though, and as the camera driver is a publisher, it doesn't have any callbacks to process, so there is no ros::spinOnce() either. I was wondering if that could be added there (to be able to handle callbacks) or if that would clash with the nodelet's manager?


EDIT: I know the manager handles the spin logic, it actually calls spinOnce() after a 1e6 us usleep (which is equal to what ros::spin() does). What I'm really after is a way to reduce that sleep time. I'm guessing that is not possible without changing and recompiling the nodelet manager, but hoping I'm wrong.

EDIT2: ^ all wrong. See @Lorenz's answer.

2014-01-28 17:26:17 -0500 marked best answer How to initialize a UInt8MultiArray message

Not too much info on these (or any of the variants) on the wiki: what is the proper way to initialize these? I've tried:

const unsigned int data_sz = 10;
std_msgs::UInt8MultiArray m;

m.layout.dim.push_back(std_msgs::MultiArrayDimension());
m.layout.dim[0].size = data_sz;
m.layout.dim[0].stride = 1;
m.layout.dim[0].label = "bla";

// only needed if you don't want to use push_back
m.data.resize(data_sz);

for an array of ten 1-byte elements, but I keep getting segfaults in the subscriber callback when accessing the data member (FIXED: see edit).

I couldn't find any examples anywhere, apart from http://alexsleat.co.uk/tag/multiarray/, but those snippets do not initialize any dimensions.

Is it the programmer's responsibility to keep the dimension objects in sync with the actual size of the data member, or is ROS doing some magic behind the scenes at serialization time?


edit: as @Lorenz commented: reserve -> resize. I've left the bit about the segfaults in the question, even though that was fixed.

2014-01-28 17:25:54 -0500 marked best answer How to perform an action at nodelet unload / shutdown?

I'm trying to print / publish some statistics / diagnostic info at the time of nodelet unloading / exit. According to this answer on ros-users by Josh Faust the destructor of the nodelet should be used for this.

The problem I'm experiencing is that the text is sometimes printed, sometimes not at all and sometimes I get:

...
[my_nodelet_mgr-2] killing on exit
pure virtual method called
terminate called without an active exception
[rosout-1] killing on exit
[master] killing on exit
...

The nodelet class basically wraps another class with one publisher and one subscriber that does all the work, which is instantiated in MyNodelet::onInit(). Nothing sophisticated. ~MyNodelet() then prints the statistics using the NODELET_* / ROS_* macros after retrieving them from the worker class.

Is the nodelet destructor the proper place to be doing things like this (lacking a nodelet::onExit() or similar)? Any other ideas on how I could implement something like this?

how-to-catch-nodelet-shutdown-signal is related, but it explicitly deals with threading, which I'm not doing.

PS: Could it be that rosout doesn't get the chance to flush its buffers (or something similar)? Even a single NODELET_INFO(..) is printed only some of the time.

2014-01-28 17:25:54 -0500 marked best answer How to debug nodelet (manager) crashes?

Are there best practices for debugging crashes of nodelets and/or the manager? I'm trying to load ~30 nodelets into one manager and am experiencing seemingly random crashes. Is there an (implicit) limit to the number of nodelets?

A sampling of output on the console:

...
[FATAL] [1338308399.389687172]: Service call failed!
[FATAL] [1338308399.389832093]: Service call failed!
[my_nodelet_mgr-2] process has died [pid 23322, exit code -11].
log files: /home/user/.ros/log/1ebdc96e-a9aa-11e1-abdc-d8d385994de6/my_nodelet_mgr-2*.log
[MyNode06-4] process has died [pid 23327, exit code 255].
log files: /home/user/.ros/log/1ebdc96e-a9aa-11e1-abdc-d8d385994de6/MyNode06-4*.log
[MyNode03-11] process has died [pid 23420, exit code 255].
log files: /home/user/.ros/log/1ebdc96e-a9aa-11e1-abdc-d8d385994de6/MyNode03-11*.log
...

Sometimes only ~4 service calls 'fail', sometimes they all seem to fail (always exit code 255, except the manager, which gets a -11).

The log mentioned for the nodelet manager does not exist, but master.log shows a lot of

[Errno 111] Connection refused

and

Fault: <Fault -1: 'publisherUpdate: unknown method name'>

lines.

I must admit I'm rather new to nodelets and their infrastructure, so any guidance would be appreciated.