Accuracy of UR5 Gazebo simulation compared to real robot hardware

asked 2020-09-08 00:56:21 -0500

roboticist10498

Does anyone have data, or just insight, on the transferability of developing movement algorithms for a simulated UR5 and then transitioning to the real hardware? Apologies if this has been asked before or a research paper has already been published; I did check but couldn't find anything.

I am planning to use MoveIt! + the Gazebo simulation of the UR5 with a joint state controller. The robot will have a tool as the end effector, but it won't interact with anything; it will just move the tool around objects.

I would imagine that the more you can get MoveIt!, or another inverse kinematics library outside the UR control box, to generate the low-level details of the trajectory (joint states rather than Cartesian positions), the more accurate the simulated behaviour will be, because there are fewer points of divergence between the two setups. Still, simulation properties such as inertia or gravity compensation would not fully represent reality.
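To make the comparison concrete, here is a minimal sketch of how I would quantify that divergence, assuming joint states are logged from both setups at matching timestamps and paired by index (the helper name and the pairing assumption are mine, not part of MoveIt):

```python
import math

def per_joint_rms_error(sim_log, real_log):
    """RMS error per joint between two equally-sampled joint-state logs.

    Each log is a list of joint-angle vectors (radians). Pairing
    samples by index assumes both logs share the same timestamps,
    which is a simplification of this sketch.
    """
    n_joints = len(sim_log[0])
    n_samples = min(len(sim_log), len(real_log))
    sums = [0.0] * n_joints
    for sim, real in zip(sim_log, real_log):
        for j in range(n_joints):
            sums[j] += (sim[j] - real[j]) ** 2
    return [math.sqrt(s / n_samples) for s in sums]
```

A per-joint number like this would at least show which joints diverge most between Gazebo and the real controller.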

My motivation for using simulation is to check that the robot's movements will not cause harm or collide with objects in the workspace. I have seen in past experience that requesting the end effector to move just 1 cm to the right can make the arm rotate in strange ways through space in order to reach it, and that is the main thing I want to filter out with simulation.
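The kind of filter I have in mind could be sketched like this, assuming the planned trajectory is available as a list of joint-angle vectors (e.g. extracted from a MoveIt plan) and using an arbitrary threshold of my own choosing:

```python
def max_joint_excursion(trajectory):
    """Largest absolute change of any single joint over the whole path.

    `trajectory` is a list of joint-angle vectors in radians, one per
    waypoint -- an assumed representation, not a fixed MoveIt type.
    """
    start = trajectory[0]
    return max(
        (abs(angle - start_angle)
         for waypoint in trajectory[1:]
         for angle, start_angle in zip(waypoint, start)),
        default=0.0,
    )

def looks_suspicious(trajectory, limit_rad=1.0):
    """Flag plans where some joint swings more than `limit_rad`.

    The 1.0 rad default is an arbitrary placeholder; a small Cartesian
    request that triggers a swing this large is the "strange rotation"
    case I want to catch before execution.
    """
    return max_joint_excursion(trajectory) > limit_rad
```

Plans flagged this way would go back for re-planning (or human review) instead of being executed.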

I considered whether the RViz plugin that visualises proposed trajectories would be enough for that purpose, since I don't want to _interact_ with anything in the environment. I think that would work well for on-line control of individual steps, where someone approves each step before it is executed. However, I would prefer to prepare the complete trajectories ahead of time and then let someone run the same code on the robot, simply replaying those approved movements.
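The "approve offline, replay later" workflow I mean could be as simple as storing approved joint trajectories in a plain file, so the replay script needs no planner at all. A rough sketch (the JSON format and the `send_to_robot` callback are assumptions for illustration):

```python
import json

def save_approved(path, name, trajectory):
    """Store one approved trajectory (list of joint vectors) under `name`."""
    try:
        with open(path) as f:
            store = json.load(f)
    except FileNotFoundError:
        store = {}
    store[name] = trajectory
    with open(path, "w") as f:
        json.dump(store, f, indent=2)

def replay(path, name, send_to_robot):
    """Load a stored trajectory and hand each waypoint to the robot driver.

    `send_to_robot` is a placeholder for whatever sends joint goals to
    the real controller (e.g. a publisher to the trajectory controller).
    """
    with open(path) as f:
        store = json.load(f)
    for waypoint in store[name]:
        send_to_robot(waypoint)
```

Since the replayed file contains exactly the waypoints that were approved in simulation, nothing is re-planned on the real hardware.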

Thanks for the input!
