
Hi, I recently bought one of those robotic arms too. I used a PS3 SixAxis controller, attached to the section of the arm that controls the gripper angle, as an accelerometer to provide positioning feedback for the motors.

I'm not sure whether your arm is too inaccurate for that (especially since the joint angles can only be estimated by some form of dead reckoning).

Martin Gunther

He is completely right. The arm is very inaccurate, and the software provided to program the arm's movements highlights that. I ran a few command sequences I had saved, and most of the time the robot was way off its original position when the sequence was reversed, and the joints tended to reach their maximum angles and grind their gears.
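To make the dead-reckoning problem concrete, here is a minimal sketch (my own, not anything from the kit's software) that estimates a joint angle by integrating commanded run time at an assumed no-load speed. The speed constant is made up, and load, friction and battery level all make the estimate drift, which is exactly the inaccuracy described above.

```cpp
// Minimal dead-reckoning sketch for an arm with no encoders.
// JOINT_SPEED_DEG_PER_S is an assumed no-load speed, not a real spec.
constexpr double JOINT_SPEED_DEG_PER_S = 30.0;

// Estimated joint angle after running the motor for `seconds` in
// direction +1 (up) or -1 (down). Load, friction and battery level
// all make this estimate drift over time.
double deadReckonAngle(double startDeg, int direction, double seconds) {
    return startDeg + direction * JOINT_SPEED_DEG_PER_S * seconds;
}
```

In theory, running a joint up for two seconds and back down for two seconds returns it to the start; in practice the real arm ends up somewhere else, which is exactly the drift you see when reversing a saved sequence.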

To use the PS3 controller I used a gamepad-to-keyboard mapping program called MotionJoy. This let me read movement when the controller tilted in different directions and map those movements to the keys that control the robot in the software provided with the kit. I was then able to control the robot (the main arm motor and the base rotation motor) by tilting the controller in the corresponding direction.
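A rough sketch of the tilt-to-key idea, assuming the mapping layer hands us tilt angles in degrees on two axes (the axis names, key names and dead-zone value here are all made up for illustration):

```cpp
#include <cmath>
#include <string>

// Dead zone so the arm doesn't twitch when the pad is roughly level.
// The 10-degree value is an arbitrary choice.
constexpr double DEAD_ZONE_DEG = 10.0;

// Returns the simulated key the mapper should press, or "" for none.
// Whichever axis is tilted further wins, so only one motor runs at a time.
std::string keyForTilt(double pitchDeg, double rollDeg) {
    if (std::fabs(pitchDeg) >= std::fabs(rollDeg)) {
        if (pitchDeg >  DEAD_ZONE_DEG) return "arm_up";
        if (pitchDeg < -DEAD_ZONE_DEG) return "arm_down";
    } else {
        if (rollDeg >  DEAD_ZONE_DEG) return "base_right";
        if (rollDeg < -DEAD_ZONE_DEG) return "base_left";
    }
    return "";
}
```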

I then changed the mapped keys to control the middle arm section. I mapped the 'move up' command to a downward tilt of the controller and vice versa, so that the arm section held a set position.
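That inverted mapping amounts to a crude self-levelling loop; a sketch, where the dead zone and sign convention are my own assumptions:

```cpp
// The controller is mounted on the arm section, so a downward tilt means
// the section has dropped and the joint should drive up, and vice versa.
// The 5-degree dead zone is an arbitrary choice.
int levelingCommand(double pitchDeg) {
    constexpr double DEAD_ZONE_DEG = 5.0;
    if (pitchDeg >  DEAD_ZONE_DEG) return -1;  // tilted up: drive joint down
    if (pitchDeg < -DEAD_ZONE_DEG) return +1;  // tilted down: drive joint up
    return 0;                                  // close enough to level
}
```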

In a previous project I used threads in C++ to catch key presses for a game. If you want a quick, somewhat bodged solution, you could use the same technique to control the arm: catch input events from the controller and simulate key presses to control the robot, basically using the controller just to tell you how far the motors have actually moved. This would give much more accurate movement of the robot for other tasks.
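The threading trick can be sketched with std::thread and std::atomic; readControllerKey() here is just a placeholder for whatever input API you end up using:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<char> lastKey{0};     // most recent key seen from the controller
std::atomic<bool> running{true};  // set to false to stop the input thread

// Placeholder for the real input API; pretend 'w' (arm up) is held down.
char readControllerKey() { return 'w'; }

// Runs in its own thread so the main motor loop never blocks on input.
void inputThread() {
    while (running) {
        lastKey = readControllerKey();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
```

The main loop would spawn `std::thread t(inputThread);`, read `lastKey` each cycle to drive the motors, then set `running = false;` and `t.join();` on shutdown.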

I am also trying to take it one step further by attaching a webcam to the head of the gripper section and using OpenCV to process the images for edge detection and colour blob tracking. This would be useful if you wanted the robot to automatically find certain objects and pick them up, sorting piles of objects for instance.
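The core of colour blob tracking, shown here without OpenCV so the idea is visible: threshold the image into a binary mask, then take the centroid of the matching pixels. In OpenCV you would do the same with cv::inRange and cv::moments; this stand-alone version works on a flat grayscale array (real code would threshold in HSV colour space).

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Returns the (x, y) centroid of all pixels brighter than `threshold`,
// or (-1, -1) if no pixel matches. `img` is row-major, `width` pixels wide.
std::pair<double, double> blobCentroid(const std::vector<int>& img,
                                       std::size_t width, int threshold) {
    double sx = 0, sy = 0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < img.size(); ++i) {
        if (img[i] > threshold) {
            sx += static_cast<double>(i % width);  // column
            sy += static_cast<double>(i / width);  // row
            ++n;
        }
    }
    if (n == 0) return {-1.0, -1.0};  // no blob found
    return {sx / n, sy / n};
}
```

Feeding the centroid back into the motor commands is what closes the loop for "find an object and pick it up".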

And even one step further.... lol. After all that, I think I am going to use OpenGL to render a 3D model of the robot that moves in real time using the controller's feedback. And for the icing on the cake: by adding a second webcam next to the first, I could run OpenCV feature detection on both images and estimate how far an object is from the cameras using the difference in feature positions between the two views. http://en.wikipedia.org/wiki/Triangulation_%28computer_vision%29
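For rectified cameras, the triangulation in that link reduces to a one-liner: with focal length f in pixels and baseline B between the cameras, a feature at x_left and x_right has disparity d = x_left - x_right and depth Z = f * B / d. A sketch, with made-up calibration numbers as the assumption:

```cpp
// Depth from a rectified stereo pair: Z = f * B / d, where f is the focal
// length in pixels, B the camera baseline in metres, and d the disparity
// (horizontal feature-position difference) in pixels.
double stereoDepth(double focalPx, double baselineM,
                   double xLeftPx, double xRightPx) {
    double disparity = xLeftPx - xRightPx;
    if (disparity <= 0.0) return -1.0;  // no valid depth estimate
    return focalPx * baselineM / disparity;
}
```

Small errors in disparity blow up at long range (depth goes as 1/d), which is one reason the positioning from cheap webcams ends up so inaccurate.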

So with a couple of webcams, a PS3 controller and the software to interface it all, you could end up with a robot that can avoid objects and map object positions into the OpenGL simulation. However, the 3D positioning from the cameras will be very inaccurate, so I will probably have to come up with something better for that.

I am going to try to build all this over the summer, so if you want any of the software I make, let me know :)