
We're going to need to know what sort of hardwareInterface you've added to your transmission.

If you have (let's say) a velocity interface and would like to layer a position controller on top, you'd use velocity_controllers/JointPositionController. This takes in position setpoints (i.e. radians) and outputs velocities (i.e. rad/s).

Similarly: if you have an effort interface in your transmission, you'd use effort_controllers/JointPositionController (which, as you may have guessed, takes in a position and outputs an effort).

I believe this would result in what you describe as an "encoder-motor joint".
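For concreteness, a velocity-interface transmission might look like this in the URDF (the joint and actuator names here are placeholders, not from your setup):

```xml
<transmission name="lidar_trans">
  <type>transmission_interface/SimpleTransmission</type>
  <joint name="lidar_joint">
    <hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
  </joint>
  <actuator name="lidar_motor">
    <mechanicalReduction>1</mechanicalReduction>
  </actuator>
</transmission>
```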

See velocity_controllers/velocity_controllers_plugins.xml for a (very) terse description of the various velocity-outputting controllers in ros_control.


Something to clear up for me now: you mention Twist, but then ask about a position controlled system. geometry_msgs/Twist encodes velocities, not positions. Could you clarify how this works in your system?


Edit:

By encoder-motor joint I mean that the encoder data (current position) has a direct effect on the motion of the motor (the joint). [..] Give the position to the joint, and have the joint move from where it is to the new position (which would allow the lidar to collect data during the motion).

This sounds like a regular, closed-loop position controller.

The Twist data is used for the diff_drive controller to drive the robot around. At the same time one of my lidar nodes subscribes to the topic on which the Twist message arrives, and based on the type of motion (drive straight, slight turn, rotate on the spot) the code determines the motion of the lidar. If robot is going straight the lidar sweeps directly in front, if robot is turning the lidar sweeps a bit of the front and the side, if robot is rotating on the spot the lidar rotates. The purpose here is to use the lidar to get the data that is most useful (ex: don't sweep behind the robot when we drive straight). I am trying to replicate this behavior (that works in the physical robot) in Gazebo.

Is this already a separate node (i.e. the algorithm that controls the lidar based on vehicle ego motion)? If so, provided you use the same control interfaces and data streams with your Gazebo simulation, this should work (ignoring tuning to accommodate the slightly different behaviour between Gazebo and the real world, of course).
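To illustrate the kind of logic involved (a minimal, ROS-free sketch; the thresholds and mode names are my assumptions, not taken from your node): classifying the incoming Twist into a sweep mode could be as simple as:

```python
def select_sweep_mode(linear_x, angular_z, turn_threshold=0.1):
    """Classify vehicle motion into a lidar sweep mode.

    linear_x:  forward velocity in m/s (Twist.linear.x)
    angular_z: yaw rate in rad/s (Twist.angular.z)
    """
    if abs(linear_x) < 1e-3 and abs(angular_z) > 1e-3:
        # rotating on the spot: rotate the lidar as well
        return "rotate"
    if abs(angular_z) > turn_threshold:
        # turning while driving: bias the sweep toward the turn direction
        return "sweep_left" if angular_z > 0.0 else "sweep_right"
    # driving straight: sweep directly in front
    return "sweep_front"
```

In a real node this function would sit in the Twist subscriber callback, and its output would determine the setpoints sent to the lidar joint controller.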

I put <hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface> in the urdf, and type: velocity_controllers/JointPositionController in the xacro file and everything seems to work.

Do you mean "the .yaml controller configuration file"? Because you don't typically configure controllers in a .xacro file.
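A minimal example of such a .yaml configuration (the controller name, joint name and PID gains are placeholders you'd adapt to your setup):

```yaml
lidar_position_controller:
  type: velocity_controllers/JointPositionController
  joint: lidar_joint
  pid: {p: 10.0, i: 0.01, d: 0.0}
```

This would then be loaded onto the parameter server and started with the controller_manager spawner.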

The only issue is that you cannot specify the direction. Whether you give a positive or negative radian value it moves to that point from its current position in the shortest path around the joint,

Yes, that would be the way the (closed-loop) position controllers are implemented.

and this doesn't work for my application as sometimes I need to sweep in an opposite direction. Is it possible to configure the direction of motion somehow?

Unfortunately I don't believe this is configurable. See here for the code which determines the shortest distance. For the average case this behaviour makes sense, I would say.
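The computation essentially normalises the position error into (-pi, pi], which is why the commanded motion always takes the short way around the joint. A sketch of the idea (not the actual ros_control code):

```python
import math

def normalize_angle(angle):
    """Wrap an angle to the interval (-pi, pi]."""
    a = math.fmod(angle, 2.0 * math.pi)
    if a <= -math.pi:
        a += 2.0 * math.pi
    elif a > math.pi:
        a -= 2.0 * math.pi
    return a

def shortest_angular_distance(from_angle, to_angle):
    """Signed error the controller acts on: always the short way around."""
    return normalize_angle(to_angle - from_angle)
```

So a setpoint of 3*pi/2 from a current position of 0 produces an error of -pi/2: the controller moves a quarter turn backwards rather than three quarters forwards.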

Could you clarify what sort of behaviour you are looking for exactly?

Your lidar is mounted on a single joint (a continuous one?), which you want to sweep along an arc (i.e. part of the circle described by a full rotation), with the "center" of the arc aligned with the direction in which the vehicle is moving. Is that correct?


Edit 2:

I have a feeling the best solution is to use a velocity control on the joint, and monitor the position of the joint. Then when the joint is in position, send a stop command. What do you think?

Yes, that does sound like a good approach.

I'd implement this in a separate node, which takes in the current heading of the vehicle and the incoming Twist, and then calculates the desired arc lengths to achieve the sweep. A velocity controller would indeed be a good thing to have. To make sure it's actually closed-loop (and not just a forwarding controller), be sure to give your Gazebo vehicle an effort interface for that joint.
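A sketch of that monitor-and-stop loop (pure Python, no ROS; the tolerance and speed are placeholders — in a real node the current position would come from /joint_states and the command would be published to the velocity controller's command topic):

```python
def sweep_command(current_pos, target_pos, direction, speed=1.0, tol=0.01):
    """Return the velocity command for one control cycle.

    direction: +1 or -1, chosen by the caller. This is what lets you
    force the sweep to go the 'long way' around, unlike the built-in
    position controller. Returns 0.0 (stop) once the target is reached.
    """
    if abs(target_pos - current_pos) < tol:
        return 0.0
    return direction * speed

# Simulated loop: drive the joint from 0.0 to -1.0 rad in the negative
# direction, integrating the command at 100 Hz.
pos, dt = 0.0, 0.01
for _ in range(2000):
    cmd = sweep_command(pos, -1.0, direction=-1)
    if cmd == 0.0:
        break
    pos += cmd * dt
```

Note that the caller is responsible for picking a direction that actually reaches the target; a real implementation would also want a timeout or stall detection in case the joint gets stuck.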

If you want to make things nice, you could implement a custom ros_control controller. It could take in the same data and interact with (i.e. command) your joint directly, instead of via topics.