
Changing the joint position on a robot

asked 2021-04-13 09:41:37 -0500

psilva

Hi there,

I have a robot description in xacro files which I spawn in a Gazebo world. I use this robot to explore and map the world with a given SLAM algorithm, which uses the data coming from an RPLidar A3 mounted on my robot.

What I would like to do is to change the position/orientation of this sensor and compare the performance of the SLAM given each different position/orientation of the sensor. To achieve this I plan on using some kind of optimization algorithm to change the position/orientation and compare the results. However, to do this, I would need to write some kind of node that is able to access the robot description and change it.

I understand this problem is very similar to previous questions asked regarding the changing of tools on industrial-type robots:

https://answers.ros.org/question/1973...

https://answers.ros.org/question/4981...

Since these are quite old posts, I would like to know if there has been a definitive solution developed for this type of problem.

Thanks in advance.


1 Answer


answered 2021-04-13 14:10:25 -0500

gvdhoorn

I understand this problem is very similar to previous questions asked regarding the changing of tools on industrial-type robots:

Not entirely, I believe.

The main issue with tool changing is that tools have geometry (i.e. they have a shape) which must be represented accurately for things like motion planning while avoiding collisions.

You don't appear to have that requirement, as you're only moving a sensor frame.

What I would like to do is to change the position/orientation of this sensor and compare the performance of the SLAM given each different position/orientation of the sensor. To achieve this I plan on using some kind of optimization algorithm to change the position/orientation and compare the results. However, to do this, I would need to write some kind of node that is able to access the robot description and change it.

I'd suggest not changing the URDF at all, but instead writing a node which broadcasts the transform that connects your sensor to the rest of the robot.

As TF transforms can be updated in real time, you could have your node update the position of your sensor frame relative to the rest of your robot in real time as well.

If I understand your description correctly, you'd:

  1. choose an orientation and location for your sensor frame
  2. broadcast that as a TF frame with your node
  3. run your experiment
  4. go to 1

Note: you're publishing a TF relative to some piece of fixed geometry on the robot. You wouldn't have to keep updating the location of your sensor frame in "the world". That's taken care of for you by the TF2 system.
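A minimal sketch of such a broadcaster node, assuming rospy and tf2_ros; the frame names (`base_link`, `laser`), the `~x`/`~y`/`~z`/`~yaw` parameters, and the node name are illustrative placeholders, not from the question:

```python
import math


def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about Z."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))


def main():
    # ROS imports are deferred so the helper above stays importable without ROS.
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node("sensor_pose_broadcaster")

    # Hypothetical private parameters: where to mount the sensor for this run.
    x = rospy.get_param("~x", 0.0)
    y = rospy.get_param("~y", 0.0)
    z = rospy.get_param("~z", 0.3)
    yaw = rospy.get_param("~yaw", 0.0)

    br = tf2_ros.TransformBroadcaster()
    t = TransformStamped()
    t.header.frame_id = "base_link"  # fixed geometry on the robot (assumption)
    t.child_frame_id = "laser"       # the sensor frame (assumption)
    t.transform.translation.x = x
    t.transform.translation.y = y
    t.transform.translation.z = z
    qx, qy, qz, qw = yaw_to_quaternion(yaw)
    t.transform.rotation.x = qx
    t.transform.rotation.y = qy
    t.transform.rotation.z = qz
    t.transform.rotation.w = qw

    # Re-broadcast continuously; changing the parameters between runs
    # (or re-reading them here) moves the sensor frame.
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        t.header.stamp = rospy.Time.now()
        br.sendTransform(t)
        rate.sleep()


# When used as a ROS node, call main() from the script entry point.
```

Each iteration of the loop above then amounts to restarting this node with different parameter values.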


Comments

Hmm, I think you're right! I'll give it a shot for sure, thanks!

psilva ( 2021-04-14 03:29:12 -0500 )

I'm guessing the only downside would be that I won't see the sensor model itself anymore, right?

psilva ( 2021-04-14 03:44:26 -0500 )

If you include the sensor model in the URDF, but then attach it using a TF instead of a joint, you can still visualise the model.

So you wouldn't publish just the sensor frame, but the TF between your robot and the model of the sensor (which would itself define the sensor frame, i.e. the frame to which measurements are relative).

RViz (and every other consumer of the URDF) probably won't be happy until you start publishing the TF, but it should work.

gvdhoorn ( 2021-04-14 03:50:38 -0500 )

Hmm, so if I comment out the joint and leave the link declared, I get the following error: `Failed to find root link: Two root links found: [base_footprint] and [left_front_rplidar_link]`. Furthermore, I don't believe Gazebo will care if I change the TF, right? I think Gazebo only looks at the SDF that is created with spawner.py once and then never looks at it again. I think a solution would be to just do a test run -> change the URDF -> delete the robot model -> spawn a new model.

psilva ( 2021-04-14 05:28:57 -0500 )

I'd sort-of missed the part where you're using Gazebo.

That complicates things a bit.

You could perhaps go into the direction of a Gazebo plugin which is responsible for updating the model, but I'm not sure how that would work.

If you're reusing the URDF for Gazebo as well, perhaps going the way of editing the .urdf/.xacro itself might be more scalable/less involved.
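Editing the .xacro can be made less painful by parameterising the sensor pose with xacro arguments, so each iteration only changes command-line values. A sketch of the idea; the argument, link, and joint names below are illustrative, not from the question:

```xml
<!-- In the robot .xacro: expose the sensor pose as arguments. -->
<xacro:arg name="lidar_x" default="0.0"/>
<xacro:arg name="lidar_z" default="0.3"/>
<xacro:arg name="lidar_yaw" default="0.0"/>

<joint name="lidar_joint" type="fixed">
  <parent link="base_link"/>
  <child link="laser"/>
  <origin xyz="$(arg lidar_x) 0 $(arg lidar_z)" rpy="0 0 $(arg lidar_yaw)"/>
</joint>
```

A per-run URDF can then be generated with, e.g., `xacro robot.urdf.xacro lidar_x:=0.1 lidar_yaw:=1.57 > robot.urdf`, and that file used both for the `robot_description` parameter and for spawning in Gazebo.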

If you end up doing this, post it as an answer and accept your own answer.

I'll leave my answer but prefix it with a comment clarifying how Gazebo makes this more complex.

gvdhoorn ( 2021-04-14 05:42:12 -0500 )

Will do. Following on from this, do you think it would be possible to spawn the robot and the sensor separately (but keep them connected)? That way I could just delete the sensor model and spawn another one instead of spawning the whole robot all over again.

psilva ( 2021-04-14 05:48:12 -0500 )

I would personally not do that, as I'm not sure it's going to work correctly (there are many parts of the infrastructure which may not like you doing that).

gvdhoorn ( 2021-04-14 05:53:53 -0500 )

So respawning the whole robot would probably be safer?

psilva ( 2021-04-14 05:57:27 -0500 )

Yes, I'd think so. But I haven't checked.

I would probably restart everything, just to make sure. Especially if you're going to use this setup for experiments: identical circumstances for each iteration would seem important/required, and you can't guarantee that unless you shut down and restart.
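Scripted, such a restart-per-iteration loop could look roughly like the following CLI sketch; the launch file, package name, and `urdf` argument are placeholders, not from the thread:

```sh
#!/bin/sh
# For each candidate yaw, regenerate the URDF, run one trial, shut down.
for yaw in 0.0 0.785 1.57; do
  xacro robot.urdf.xacro lidar_yaw:="$yaw" > /tmp/robot.urdf
  # roslaunch blocks until the launch is shut down
  # (e.g. by a node that terminates after the trial).
  roslaunch my_experiments run_trial.launch urdf:=/tmp/robot.urdf
done
```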

gvdhoorn ( 2021-04-14 06:18:43 -0500 )

Okay, thank you! By the way, a little off topic, but to keep the same path for the robot with each run I was trying to record the /cmd_vel topic and just replay it every time I run a new test trial. However, the robot is not following the same path. Is this a known problem when replaying a /cmd_vel bag, or could I be doing something wrong?

psilva ( 2021-04-14 08:38:53 -0500 )

cmd_vel topics carry geometry_msgs/Twist messages in most cases.

Those encode velocities. Not positions.

So you're not recording a path, but the first time-derivative of one.

As friction is also simulated by Gazebo, and there are numerical inaccuracies (which also accumulate over time), it's very likely replaying velocities will not result in the exact same positions.

If you're only interested in comparing the output of SLAM algorithms, and not using them to navigate or explore each time (i.e. you want to provide each algorithm with the exact same input), then I would not use Gazebo at all for your experiments.

Record a rosbag while navigating your simulation world. The .bag contains the sensor data.

Now for each algorithm, play back the .bag.

Each algorithm will get the exact same input, greatly increasing the repeatability of your experiments, avoiding simulation overhead, and also making my suggestion (using TF) possible again.
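Concretely, with the standard rosbag CLI (the topic list depends on your sensors and is only an example):

```sh
# Record the sensor data once, while driving/navigating in simulation.
rosbag record -O run1.bag /scan /tf /tf_static /odom

# Later, for each SLAM configuration, replay the exact same input,
# with nodes using simulated time driven by the bag's clock.
rosparam set use_sim_time true
rosbag play --clock run1.bag
```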

gvdhoorn ( 2021-04-14 09:13:06 -0500 )

Yes, I've done just that in my previous experiments, but now the situation is different. With the results I have obtained from the comparison of the different SLAM algorithms, I will now pick just one and try to improve its performance by changing the position/orientation of the laser, which will in turn change the /scan messages that I receive. Therefore, with every test loop (same SLAM, different position/orientation of the sensor) I still require Gazebo to run. Since /cmd_vel is changed, I see that one option to keep a "similar" path with each run would be to record a number of goals and just replay them with some kind of planner. Also, maybe instead of replaying /cmd_vel I can record and play back the position of the robot over time?

psilva ( 2021-04-14 10:05:23 -0500 )
