Applying TEXPLORE RL to a mobile robot and arm

I have the following setup: a mobile robot with an arm, in a Gazebo simulation.

My goal in applying the TEXPLORE RL algorithm is to have the robot use its arm to help itself climb the wall by pulling on the horizontal bar shown.

I made some simplifying assumptions: the robot only moves in 2D (x, z), and the gripper is kept vertical and will automatically hold on to the bar when needed.

My state has four variables: the (x, z) coordinates of the gripper and of the center of the mobile base.

My actions (7 in total), as sketched below:

- Move the mobile base backward or forward, or stop it.
- Move the elbow backward or forward by a fixed step (radians).
- Move the arm backward or forward by a fixed step (radians).
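
For concreteness, here is a rough sketch of how I encode the four state features and the seven discrete actions (the names are illustrative, not my exact code):

```cpp
// State: 4 continuous features, in the order they appear in the
// sensation vector (illustrative names).
enum StateFeature { GRIPPER_X, GRIPPER_Z, BASE_X, BASE_Z, NUM_FEATURES };

// Actions: 7 discrete choices, each mapped to a fixed-size command.
enum Action {
  BASE_FORWARD, BASE_BACKWARD, BASE_STOP,   // drive or stop the mobile base
  ELBOW_FORWARD, ELBOW_BACKWARD,            // elbow joint +/- fixed step (rad)
  ARM_FORWARD, ARM_BACKWARD,                // arm joint  +/- fixed step (rad)
  NUM_ACTIONS
};
```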

Parameters:

- Agent: --agent texplore --planner parallel-uct --nstates 10
- Environment: --env jaguarclimbswall --stochastic --prints

I created this environment based on the RobotCarVel example. However, it does not simulate any physics or do calculations itself; instead, it connects to Gazebo and queries the model (the robot), just as it would with a real robot.
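
The Gazebo query part looks roughly like this. It is a simplified sketch using the standard gazebo_msgs GetModelState/GetLinkState services; the model and link names are placeholders, not my actual ones:

```cpp
#include <vector>
#include <ros/ros.h>
#include <gazebo_msgs/GetModelState.h>
#include <gazebo_msgs/GetLinkState.h>

// Query Gazebo for the current poses and pack them into the 4-feature
// state vector: (gripper x, gripper z, base x, base z).
std::vector<float> queryState(ros::NodeHandle &nh) {
  static ros::ServiceClient modelClient =
      nh.serviceClient<gazebo_msgs::GetModelState>("/gazebo/get_model_state");
  static ros::ServiceClient linkClient =
      nh.serviceClient<gazebo_msgs::GetLinkState>("/gazebo/get_link_state");

  gazebo_msgs::GetModelState base;
  base.request.model_name = "jaguar";          // placeholder model name
  gazebo_msgs::GetLinkState grip;
  grip.request.link_name = "jaguar::gripper";  // placeholder link name

  std::vector<float> s(4, 0.0f);
  if (modelClient.call(base) && linkClient.call(grip)) {
    s[0] = grip.response.link_state.pose.position.x;
    s[1] = grip.response.link_state.pose.position.z;
    s[2] = base.response.pose.position.x;
    s[3] = base.response.pose.position.z;
  }
  return s;
}
```

The environment's sensation() (from the rl_common Environment interface) then just returns the vector filled in this way.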

The problem: I am having trouble getting the algorithm to converge to the goal. I set the reward to be the negative of the sum of two distances: the distance from the mobile base to the goal, and the distance from the gripper to the goal (the horizontal bar), as in the sketch below.
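
Concretely, the per-step reward computation is something like this (the goal and bar coordinates are placeholders for my actual values):

```cpp
#include <cmath>

// Per-step reward: negative sum of the planar (x, z) distances from
// the base to the goal and from the gripper to the bar.
float computeReward(float gripX, float gripZ, float baseX, float baseZ) {
  const float GOAL_X = 2.0f, GOAL_Z = 1.0f;  // placeholder goal position
  const float BAR_X  = 1.5f, BAR_Z  = 1.2f;  // placeholder bar position
  const float dBase = std::hypot(baseX - GOAL_X, baseZ - GOAL_Z);
  const float dGrip = std::hypot(gripX - BAR_X,  gripZ - BAR_Z);
  return -(dBase + dGrip);
}
```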

Any ideas?

Thanks!
