
arennuit's profile - activity

2019-09-06 21:08:49 -0500 received badge  Favorite Question (source)
2019-07-12 08:47:30 -0500 received badge  Notable Question (source)
2019-07-12 08:47:30 -0500 received badge  Famous Question (source)
2019-05-30 01:20:18 -0500 marked best answer DWAPlannerROS vs TrajectoryPlannerROS

Hi all,

I am currently looking to implement my own local planner plugin for the navigation stack. I was looking at DWAPlannerROS and TrajectoryPlannerROS to help me get started, though I am not sure I understand the relation between these 2 planners. Are they just 2 different local planners, or is one part of the other?

I am asking because, as far as I understand from this wiki and the source, the DWA algorithm is implemented twice. Once in TrajectoryPlannerROS and another time in DWAPlannerROS.

Am I correct in saying so? And if so, is there a reason why the DWA is coded twice?

Now, if I understand this section, I believe the recommended way to implement a new local planner plugin is to inherit from DWAPlanner. Can you confirm?

I am asking because there is a clean tutorial on writing custom global planners, but I could not find an equivalent for local planners.
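For reference, here is how I currently picture the core of the DWA loop (a toy sketch only, with made-up function names, a unicycle model, and a pure goal-distance score; the real planners also score obstacle and path-following costs):

```python
import math

def simulate(x, y, th, v, w, dt=0.1, steps=10):
    """Forward-simulate a constant (v, w) command with a unicycle model."""
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y

def dwa_pick(x, y, th, goal, v_samples, w_samples):
    """Score each sampled (v, w) by end-point distance to the goal; keep the best."""
    best, best_cost = None, float("inf")
    for v in v_samples:
        for w in w_samples:
            ex, ey = simulate(x, y, th, v, w)
            cost = math.hypot(goal[0] - ex, goal[1] - ey)
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```

With the goal straight ahead, the sketch unsurprisingly picks driving straight at full sampled speed.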



2019-04-27 00:46:33 -0500 marked best answer tf2 example?

Hi all,

While learning ROS, I understand tf is deprecated and tf2 is the currently maintained version. The tf2 tutorials, however, are rather broken, as stated here and here (and as I just experienced myself). Is there a simple example somewhere I can rely on to learn how to listen for and broadcast frames?
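For context, the operation I ultimately need from the listener is just composing and applying frame transforms; a toy 2D version of that operation (plain Python, hypothetical helper names, nothing to do with the actual tf2 API) would be:

```python
import math

def compose(a, b):
    """Compose two 2D transforms (x, y, theta): the result maps a point
    through b first, then through a."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def apply(t, p):
    """Apply transform t = (x, y, theta) to point p = (x, y)."""
    tx, ty, th = t
    px, py = p
    return (tx + px * math.cos(th) - py * math.sin(th),
            ty + px * math.sin(th) + py * math.cos(th))
```

E.g. chaining a map→odom translation with an odom→base_link translation gives the map→base_link transform.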



2019-04-05 11:32:12 -0500 received badge  Famous Question (source)
2019-03-16 03:02:56 -0500 marked best answer UR5/UR10 typical use cases

Hi all,

We are currently reviewing the UR5 arm specifications to make sure it meets all our requirements before we invest in an arm. One criterion we focus on is the possibility for the arm to follow a trajectory planned on an external PC. Here is our desired setup:

  • We have a ROS external PC

    • This external PC generates trajectories for the arm
  • The planned trajectory is sampled on the ROS PC

    • Each sample is individually sent to the UR controller (via Ethernet): we want no interpolation between samples

As far as I know the communication between the ROS PC and the UR controller is rather limited in terms of frequency (50Hz for UR script and 120Hz for the C-API). This limits the complexity and speed of the trajectory the ROS PC can specify to the UR controller - as you can only send 50 samples per second (complex and quick trajectories may require more samples than this).

In order to get a better understanding of the applications and use cases possible, I have a few questions:

  • Are there people around here who have the same setup?

  • What kind of application and use case do you use this setup for?

  • Are you somehow limited in what you do, by the communication frequency?

  • Do you use UR script (ur_bringup) or the C-API (ur_c_api_bringup)?

  • Do you have the UR controller interpolate between the trajectory samples you provide?

--- EDIT ---

What I mean by "we want no interpolation between samples" relates to the 2 control modes usually possible when periodically streaming desired configuration samples to a robot controller (a sample = position + velocity + acceleration):

  1. Either the controller interpolates between the samples, so these samples are seen as the 2 extremities of a new (sub)trajectory

    • This mode allows specifying consecutive samples which are rather far apart (as they are interpolated)

    • But the mode is not compatible with all applications as the samples may themselves come from a higher level planned trajectory (and you do not really want a "black box" interpolation in this case)

  2. Or the UR controller uses an incoming sample "as is", as a new set point provided directly to the low-level control of the arm. The low-level joint PID (or whatever type of low-level control is used) makes sure the set point is reached as fast as possible

    • This requires consecutive samples to be close in space

    • This mode allows an external industrial PC to generate its own trajectory and have it executed by the UR5 arm. This is the mode I am interested in

Now "we want no interpolation between samples" means we do not want behavior 1: what we look for is behavior 2.
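To make the two modes concrete, here is a toy sketch (1D positions only, with linear interpolation standing in for whatever the controller really does internally):

```python
def interpolate_stream(samples, substeps):
    """Mode 1: the controller fills in intermediate set points between
    consecutive samples (linear interpolation, for illustration only)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(substeps):
            out.append(a + (b - a) * k / substeps)
    out.append(samples[-1])
    return out

def passthrough_stream(samples):
    """Mode 2: every incoming sample is used 'as is' as a set point."""
    return list(samples)
```

In mode 1 two sparse samples become a dense sequence of set points; in mode 2 the controller executes exactly the samples we planned, which is the behavior we want.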

Kind regards,


2019-03-13 17:23:34 -0500 received badge  Popular Question (source)
2019-03-13 07:25:40 -0500 edited question LIDAR based localizer

LIDAR based localizer Dear all, We have experience in working on 3D slam and we now would like to use the environment's

2019-03-13 07:24:11 -0500 asked a question LIDAR based localizer

LIDAR based localizer Dear all, We have experience in working on 3D slam and we now would like to use the environment's

2019-03-13 07:22:52 -0500 asked a question LIDAR localization

LIDAR localization Dear all, We have experience in working on 3D slam and we now would like to use the environment's 3d

2019-03-01 05:16:35 -0500 received badge  Famous Question (source)
2019-01-13 22:15:22 -0500 received badge  Popular Question (source)
2018-11-13 20:43:01 -0500 received badge  Famous Question (source)
2018-11-09 00:44:26 -0500 received badge  Nice Question (source)
2018-11-08 03:09:01 -0500 marked best answer ros_control controller modes

Dear all,

I am currently checking the ros_controllers and I am not sure I understand how the different modes are organized. From the ros_control wiki I read that the controllers can be:

  1. effort_controllers
    1. joint_effort_controller
    2. joint_position_controller
    3. joint_velocity_controller
  2. position_controllers
    1. joint_position_controller
  3. velocity_controllers
    1. joint_velocity_controller

From this I understand there are effort, position and velocity controllers which respectively take a desired effort, position or velocity as input and do their best to get the system state to this desired input (these controllers correspond to entries 1., 2. and 3.).

Now, what I do not understand is the meaning of sub-categories 1.1., 1.2., 1.3., 2.1 ... If I choose 1.2. for example, what's this controller? It takes a desired effort as input and probably does something related to a position as its name implies... But what?

Also controller 1.2. == 2.1. and controller 1.3. == 3.1., how is that possible? I guess it is related to my first question...
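To illustrate my (possibly wrong) reading of 1.2.: a controller that is commanded a position but writes an effort to the joint, e.g. through a PID. Toy sketch with made-up names:

```python
class PositionToEffortPid:
    """Toy version of what I imagine effort_controllers/joint_position_controller
    to be: the commanded quantity is a position, the output quantity is an effort."""
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_err = 0.0

    def update(self, desired_pos, measured_pos, dt):
        # PD law: effort proportional to position error and its derivative.
        err = desired_pos - measured_pos
        d_err = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.kd * d_err  # effort sent to the joint
```

Under that reading, 1.2. and 2.1. would share the commanded quantity (a position) but differ in what they write to the hardware, which might explain the apparent duplication.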

Anyone with a better understanding than me?

Thanks guys,

Antoine Rennuit.

2018-11-04 05:45:09 -0500 commented question Asus Xtion optical center

This looks very promising. And do you have any idea where the people from the hector team got their information? Is t

2018-11-03 03:35:28 -0500 edited question Asus Xtion optical center

Asus Xtion optical center Hi all, I am doing SLAM with the Asus Xtion. Is there some information somewhere on the Inter

2018-11-03 03:35:00 -0500 edited question Asus Xtion optical center

Asus Xtion camera frame Hi all, I am doing SLAM with the Asus Xtion. Is there some information somewhere on the Interne

2018-11-03 03:12:25 -0500 asked a question Asus Xtion optical center

Asus Xtion camera frame Hi all, I am doing SLAM with the Asus Xtion. Is there some information somewhere on the Interne

2018-10-26 05:11:52 -0500 received badge  Famous Question (source)
2018-10-02 10:29:38 -0500 marked best answer Communication with UR5

Hi all,

I plan to use the ROS-Industrial universal-robot package (found here) with a real UR5. Though as I do not have the arm at hand currently, I am not sure how the communication is intended to be handled between the ROS machine and the UR controller. So I have a few questions related to this:

  • How is the information physically transported from the ROS machine to the UR controller?
    • Is this Ethernet? Modbus? GPIO? USB?
  • What type of information is sent to the controller?
    • Is this a desired FollowJointTrajectoryGoal? It is not clear to me from these slides...
    • So does that mean there is a ROS node on the UR controller?
    • If not, what kind of information is sent? And how is it received by the UR controller?
  • What happens if the robot cannot reach its desired trajectory? (e.g. it collides with something unexpected or so...)
    • Is the ROS machine notified?

I am mainly interested in the C-API version found in branch hydro-devel-c-api as it looks the most advanced, though I would also like to know whether things work the same way when using UR script.

Anyone has a better understanding than me on the above questions?



2018-08-01 03:56:51 -0500 received badge  Notable Question (source)
2018-07-23 01:52:59 -0500 commented question Message-generation package not found

The problem was totally unlinked to ROS, sorry.

2018-07-20 04:09:00 -0500 received badge  Popular Question (source)
2018-07-17 00:02:42 -0500 edited question Message-generation package not found

Message-generation package not found Hi all, Since a few days I have had the following error message: CMake Warning at

2018-07-17 00:02:23 -0500 asked a question Message-generation package not found

Message-generation package not found Hi all, Since a few days I have had the following error message: CMake Warning at

2018-07-15 16:49:26 -0500 asked a question Service server coded by hand

Service server coded by hand Hi all, We use a control library (called orocos). Associated to this library there is a me

2018-07-15 00:27:35 -0500 received badge  Popular Question (source)
2018-07-13 10:43:02 -0500 received badge  Famous Question (source)
2018-07-13 08:25:00 -0500 asked a question Migrating a custom library package used outside of ROS

Migrating a custom library package used outside of ROS Hi all, We have a custom ROS package which exposes ROS services

2018-07-03 21:00:39 -0500 marked best answer CMakeLists.txt vs package.xml

Hi all,

I have been using ROS for a bit more than a year now and there is still something I cannot understand about the relation between the CMakeLists.txt and the extra catkin-specific information added by package.xml.

For a package, all the link-time and run-time dependencies are already described inside the CMakeLists.txt, so why do we need to re-state these dependencies inside the package.xml? Firstly, the information seems redundant, and secondly, when is the information inside package.xml actually used? What is it used for?

I have looked for the information on the net but no explanation was clear to me.


And thanks for your thorough answer. Things are getting a bit clearer.

Now there is still something I am missing: the package.xml page states that

If they are missing or incorrect, you may be able to build from source and run tests on your own machine, but your package will not work correctly when released to the ROS community.

Now I am not sure I understand how this mechanism works. Does it mean that the dependencies normally dealt with by the CMakeLists.txt when I am on my local machine are handled by ROS when I deploy my packages to the outside world?
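For concreteness, here is the kind of redundancy I mean (hypothetical package: the same roscpp dependency already appears in its CMakeLists.txt via find_package(catkin REQUIRED COMPONENTS roscpp), yet it must be restated here so that tools like rosdep can resolve it):

```xml
<package format="2">
  <name>my_pkg</name>
  <version>0.1.0</version>
  <description>Hypothetical example package</description>
  <maintainer email="dev@example.com">dev</maintainer>
  <license>BSD</license>

  <buildtool_depend>catkin</buildtool_depend>
  <!-- Re-stated dependency: CMake already links against roscpp,
       but this entry is what the packaging tools read. -->
  <depend>roscpp</depend>
</package>
```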

2018-05-14 05:13:05 -0500 marked best answer Callback on transform available

Dear all,

I would like to put in place a callback each time a new value of a given source frame in a given target frame is available.

You can do this with a listener (waitForTransform() + lookupTransform()) but I find this a rather dirty solution. So what I would like to do is use something like a tf::MessageFilter. The problem is tf::MessageFilter does not work for tf2_msgs::TFMessage (the message type of topic /tf).

  1. Any idea of the best practice to do this?
  2. Also, why isn't this the default solution to work with transforms? This is what most people most likely want to do, no?
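What I have in mind is essentially this pattern (a generic toy sketch of a message filter, not the actual tf API): queue incoming messages and only fire the callback once the transform for their frame is available.

```python
class TransformGatedQueue:
    """Toy message filter: hold messages until the transform for their
    frame is available, then deliver them to the callback in order."""
    def __init__(self, callback):
        self.callback = callback
        self.pending = []
        self.available = set()  # frames for which a transform is known

    def add_message(self, frame, msg):
        self.pending.append((frame, msg))
        self._drain()

    def add_transform(self, frame):
        self.available.add(frame)
        self._drain()

    def _drain(self):
        # Deliver every queued message whose frame is now resolvable.
        still_pending = []
        for frame, msg in self.pending:
            if frame in self.available:
                self.callback(frame, msg)
            else:
                still_pending.append((frame, msg))
        self.pending = still_pending
```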



2018-04-20 13:53:40 -0500 marked best answer Task scheduler recommendation/advice

Hi all,

I am looking for some advice on choosing a way/tool to achieve high-level control (or say task scheduling) with ROS. I am naturally more inclined to use scripting (e.g. in Python, as it integrates naturally with ROS) because I have been programming for years and I am convinced of the versatility of languages.

Though I was wondering whether there are other solutions that you would recommend based on experience (e.g. solutions that improve workflow, are more something-friendly... and do not limit the expressive power).



2018-03-30 01:54:59 -0500 marked best answer Check master ros node is started C++


Is there a way in C++ to check whether the ROS master is up? I initially believed ros::init() would return false, but its return type is void...

I have quickly looked on the internet and could find nothing...

Thanks ;)

2018-01-06 05:17:36 -0500 received badge  Good Question (source)
2017-12-28 00:31:54 -0500 received badge  Famous Question (source)
2017-12-28 00:31:54 -0500 received badge  Notable Question (source)
2017-11-26 17:05:09 -0500 marked best answer MoveIt: get current desired end effector pose

Hello there,

I am trying to get access to the current desired end effector's transform output by MoveIt (actually my goal is to compute the tracking error). My feeling is that I need to do the following steps:

  • Get the desired joint angles from MoveIt
  • Parse the URDF to build a direct geometric model (forward kinematics), or is there a package which computes the robot's direct geometric model out of its URDF?
  • Compute the end effector's desired transform from the geometric model and the desired joint angles

Am I right?

Now a question more related to code:

I guess the desired joint angles are provided by MoveIt in topic /arm_controller/follow_joint_trajectory/goal, no? This topic is of type control_msgs/FollowJointTrajectoryActionGoal (API in ROS Kinetic). I also guess the trajectory is in field goal.trajectory.points.positions. Fields points and positions are arrays: how should I interpret them? Also, does the fact that these are arrays mean that the whole desired trajectory is sent at once and not in small time steps?
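In case it helps, here is how I currently interpret the arrays (this is my assumption, not documented semantics): points is a time-ordered list of waypoints, and each waypoint's positions array holds one angle per joint. A toy lookup of the waypoint active at time t:

```python
def desired_positions(points, t):
    """points: list of (time_from_start, positions) tuples, sorted by time;
    positions: one angle per joint. Return the positions of the last
    waypoint whose stamp is <= t."""
    current = points[0][1]
    for stamp, positions in points:
        if stamp <= t:
            current = positions
        else:
            break
    return current
```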



2017-11-17 05:38:54 -0500 received badge  Notable Question (source)
2017-10-30 11:57:37 -0500 received badge  Famous Question (source)
2017-10-26 16:58:33 -0500 marked best answer Navigation stack, no laser


I have put in place a (custom) mobile robot with a drive controller (taking a desired twist and outputting required wheel velocities) and joint controllers which actually achieve the required wheel velocities. This is all done in ros_control.

Now I would like to drive my robot by providing it with a desired trajectory (or desired target) and not with a twist (i.e. specify my trajectory in 6D position space, not in twist space). From what I understand, this is the aim of the navigation stack. But the navigation stack's wiki actually states the stack

requires a planar laser mounted somewhere on the mobile base. This laser is used for map building and localization.

Unfortunately my robot is not planning to integrate such a device. Does that mean the stack is unusable to me?

I am not sure I understand why there is such a hard requirement. There are plenty of use cases for other localization techniques (e.g. triangulation, or video...), or even no localization at all (thus accepting drift), while still expecting to transform a desired positional trajectory into desired velocities (to feed the controllers). Actually I am pretty sure some low layer of the navigation stack does just that. So isn't it accessible to robots that have no laser?

Is it me not understanding things, or is there something else? As you understood, I am looking for a tool that takes a desired trajectory or position as input and outputs corresponding twists...
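To be concrete about what I mean by "pose in, twist out": even something as simple as this proportional controller (a toy sketch with made-up gains, planar only) would already fit the bill.

```python
import math

def pose_to_twist(pose, target, k_lin=1.0, k_ang=2.0):
    """pose/target: (x, y, theta). Return (v, w): drive toward the target
    point, turning to face it (purely proportional, for illustration)."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    heading = math.atan2(dy, dx)
    # Wrap the heading error to [-pi, pi].
    ang_err = math.atan2(math.sin(heading - pose[2]),
                         math.cos(heading - pose[2]))
    v = k_lin * math.hypot(dx, dy) * math.cos(ang_err)
    w = k_ang * ang_err
    return v, w
```

No laser is involved: only a pose estimate (from whatever localization source, or none) and a target.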