
Custom inverse kinematics applied to a robot

asked 2020-12-22 11:22:14 -0500

lucasrebra

Hi developers,

I am working on a project where I need to implement my own inverse and forward kinematics for a robot model. I have the robot simulated in Gazebo, and I want to send commands that are filtered through my inverse kinematics before being forwarded to the simulation. Does anyone have an idea how to do this? I can't find any documentation on the internet about it and I'm starting to get worried. I also want to know how I could test the kinematics, because I think they are the foundation of robotics and that would help me understand what I'm doing better.

Thanks for reading, and I hope you can answer my question.


1 Answer


answered 2020-12-22 13:06:58 -0500

RobertWilbrandt

Hey, the basis of all your robot movements will be ros_control, a very generic ROS framework for controlling actuators. If you already have your robot working in Gazebo, you are probably aware of how ros_control interacts with Gazebo (see this tutorial for more information). Building on this, there are several levels at which you could start integrating custom FK/IK:

  1. The simplest case: you just want to set a Cartesian waypoint and move the robot to the corresponding joint configuration given by your IK. In this case, you want to load a forward position controller (e.g. an effort_controllers/JointGroupPositionController), which just listens for an array of target joint positions on a command topic and tries to reach them with simple PID logic. You would then wrap your IK in a custom node that listens for Cartesian targets and sends joint configuration targets to the controller.
  2. If you want to execute complete Cartesian trajectories, you might want to load a joint_trajectory_controller and again implement your IK in a separate node. This node would convert full Cartesian trajectories into series of joint configurations and send them to the controller, which would take care of interpolation between the points. Caution: you still need to do time parametrization yourself here (i.e. determine how long moving from one point to the next will take).
  3. If you are at some point really comfortable with your ros_control knowledge, you could use both your IK and your FK to implement a custom ros_control controller, which would then run directly in the robot control loop. Examples of this approach can be found in the cartesian_controllers package.
  4. If you don't really want to dive into the actual control side, but instead want to do real manipulation using your new tools and even compare them to state-of-the-art IK solvers, you might want to take a look at MoveIt and specifically its plugin system for inverse kinematics. With this, you would take a huge step forward in capabilities (and complexity) and get everything around the basic IK (i.e. collision-free planning, time parametrization, and many advanced manipulation features) "for free". Of course, getting into MoveIt is a bit of a learning curve all by itself, and it also uses ros_control "under the hood" to send commands to actuators (and get feedback from them).
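To make option 1 concrete, here is a minimal, hypothetical sketch of the IK/FK math such a node would wrap (a planar 2-link arm with an analytic elbow-down solution; the link lengths are illustrative assumptions, and in a real node you would subscribe to a Cartesian target topic and publish the resulting joint positions as the controller's command message):

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Analytic IK for a planar 2-link arm (elbow-down solution).
    In the node from option 1, the returned angles would be published
    to the controller's command topic."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of the same arm; useful to sanity-check the IK."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y
```

A quick self-test is to run the IK result back through the FK and check that it reproduces the Cartesian target; that kind of round-trip test is also a good way to validate a custom solver before wiring it into Gazebo.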

Some general remarks: I think if your main goal is to learn about FK/IK implementation, stick with 1 and 2. You won't get around learning a bit of how ros_control works, as it is the basis of most actuation in ROS (and the control methodology used in Gazebo), but this should be manageable. There are some great tutorials out there; in addition, I would recommend taking a look at rqt_controller_manager when you get to fiddling around with your simulation.

Your IK will be much more useful than your FK in most cases, as ROS typically uses robot_state_publisher for all its FK needs. If you can ...



Of course, getting into MoveIt is a bit of a learning curve all by itself, and it also uses ros_control "under the hood" to send commands to actuators (and get feedback from them).

Pedantic, but MoveIt does not use ros_control directly. It can be configured to do so with special "controllers" on the MoveIt side, but typical configurations will use an Action client to talk to a JointTrajectoryAction server.

And also, please be precise when talking about ros_control: ros_control itself does not run any controllers, nor does it talk to robots or do FK/IK. Nodes hosting ros_control-based hardware_interfaces and ControllerManagers can do these things, but only when written and configured as such.

gvdhoorn (2020-12-23 13:57:32 -0500)

If you want to execute complete cartesian trajectories, you might want to load a joint_trajectory_controller and again implement your IK in a different node. This node would now convert full cartesian trajectories into series of joint configurations and send them to the controller

Unfortunately the joint_trajectory_controller does not support Cartesian trajectories.

As its name implies, it executes joint trajectories, ie: trajectories in joint space.

gvdhoorn (2020-12-23 14:00:35 -0500)

@RobertWilbrandt thank you a lot for your response, it's really complete and I will try some of these approaches in my project over the next few days. You opened my mind a bit about it. My robot has a closed kinematic chain, which is why I want to build it with my own kinematics. I have theoretical knowledge of kinematics, but working with ROS is still a little difficult for me.

The problem with using MoveIt is that working with closed chains is difficult and there isn't much information about it. I thought about using a plugin, but I don't know where I can learn about the plugins MoveIt uses for inverse kinematics and how I can change them to implement my case. Do you think it's a good idea?

I will take your words into account and I will ...

lucasrebra (2020-12-23 15:56:39 -0500)

@gvdhoorn Oh right, I should have been a bit more precise about which node actually hosts the controllers. As for joint_trajectory_controller: what I meant was the same basic procedure as in option 1, meaning: a custom IK node receives a Cartesian trajectory, converts it to a full joint trajectory, and sends this to the joint_trajectory_controller for execution (this is also why I mentioned time parametrization, which this custom IK node would have to do by itself). Maybe I should have been more explicit about this.
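A hedged sketch of what that time parametrization could look like (a constant-velocity heuristic; the waypoint format and the max_vel value are assumptions for illustration — a real node would copy these times into the time_from_start field of each trajectory point):

```python
def time_parametrize(joint_waypoints, max_vel=0.5):
    """Assign a cumulative time (in seconds) to each joint-space waypoint,
    assuming every joint moves at no more than max_vel rad/s."""
    times = [0.0]
    for prev, cur in zip(joint_waypoints, joint_waypoints[1:]):
        # The joint that has to travel farthest dictates the segment duration.
        dt = max(abs(c - p) for p, c in zip(prev, cur)) / max_vel
        times.append(times[-1] + dt)
    return times

# For example, with two joints and max_vel = 0.5 rad/s:
# time_parametrize([[0.0, 0.0], [1.0, 0.5], [1.0, 1.5]], 0.5) → [0.0, 2.0, 4.0]
```

This is deliberately crude (no acceleration limits, no blending); trapezoidal or spline-based profiles would be the next refinement, but a constant-velocity estimate is enough to get the joint_trajectory_controller moving.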

@lucasrebra If you need some inspiration for messages for these approaches, take a look at this design document, which reviews the commonly used types and seems like something that could become the basis for future ROS/ROS 2 Cartesian interfaces (see here and the associated issues for some discussion).

RobertWilbrandt (2021-01-06 09:02:29 -0500)

@lucasrebra ah, I see, closed chains can be... tricky. A first step I would suggest: think really hard about whether you can convert this into a serial chain using the URDF mimic tag. I just saw your previous question on this topic, where you said this was not possible, but I cannot quite see what would not work here; I have previously worked with a parallel linkage mechanism (in the torso of a humanoid robot) that worked just fine. I'll try to write something up over there once I have some time.

RobertWilbrandt (2021-01-06 09:20:00 -0500)

Looking back at MoveIt plugins: it seems I actually forgot about one perhaps easier solution. There already is a srv_kinematics_plugin, which allows you to implement your IK in a separate node (it seems to specifically target "difficult" kinematics). For this you would just have to provide a service of type moveit_msgs/GetPositionIK. This could make your task much easier, as most of the boilerplate is already implemented for you. On the MoveIt side, you would just have to select this plugin for your move group (either in the setup assistant or via config).
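As a rough sketch of the solver side (not MoveIt's actual API: the joint names, the toy 2-link solver, and the dict standing in for the moveit_msgs/GetPositionIK response are all illustrative assumptions — in a real node the callback would be registered with rospy.Service and fill in a proper response message):

```python
import math

def solve_ik(x, y, l1=0.4, l2=0.3):
    """Toy analytic IK for a planar 2-link arm, standing in for your
    closed-chain solver."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def handle_ik(x, y):
    """Shape of a GetPositionIK-style reply: an error code plus a joint state.
    Error codes follow moveit_msgs/MoveItErrorCodes
    (1 = SUCCESS, -31 = NO_IK_SOLUTION)."""
    try:
        q1, q2 = solve_ik(x, y)
        return {"error_code": 1,
                "joint_names": ["shoulder", "elbow"],
                "positions": [q1, q2]}
    except ValueError:
        return {"error_code": -31, "joint_names": [], "positions": []}
```

The point is that MoveIt only ever sees the service interface, so the solver behind it can be as unconventional as a closed-chain mechanism requires.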

RobertWilbrandt (2021-01-06 09:26:16 -0500)

If you decide to actually implement a custom plugin: I have only ever implemented "simpler" plugins (like MoveItSensorManager), but my main advice is: look at other implementations. Specifically, srv_kinematics_plugin should help a lot when cross-referencing, combined of course with the interface specification. This goes for both the pure source code and the associated build system and integration, e.g. the proper exporting via pluginlib. At least you will have some different reference implementations, but this will still be the most boilerplate- and work-intensive path to choose, with the least chance of producing easily reusable code (this is just personal experience...). This is also why I would look really hard for whether any other way works.

RobertWilbrandt (2021-01-06 09:34:08 -0500)

@RobertWilbrandt Ok, I see that working from the srv_kinematics_plugin example would be much easier, and I think it's a good way to start working with plugins. I know there are examples, but sometimes it's a little difficult for me to understand some ROS mechanisms (and plugins are a little different from the rest, I think).

Regarding the mimic tag: I tried to implement it and found it impossible to express the relation between the elbow and shoulder joints as in the real robot's movement. Probably I'm not doing it properly because, as you say, lots of people work with four-bar linkages. My robot is a SainSmart 6-DOF; could you give me some advice just for the URDF and the RViz representation?

Thank you so much.

lucasrebra (2021-01-06 10:02:28 -0500)


Last updated: Dec 22 '20