PointCloud vision pipeline detects robot body as a part instead of the actual box

asked 2019-03-11 02:27:12 -0500 by artemiialessandrini

Greetings!

Please refer to the right side of the video: the PCL pipeline detects the box and I attach a rough coordinate frame to it, so the arm can move towards it.

(There are also some UR5 PID controller issues visible on the left side; I don't have a solution for those, but that's not the question for now.)

With the current code the arm tries to move to the detected box's coordinate frame, but at some point the pipeline detects the robot body itself as a part and the application fails. Right now I simply take the biggest cluster on the table.

I need the Kinect to detect only the actual box.

I believe there is a way to pause the detection node while move_arm_one runs, but I cannot figure out how to implement it.
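What I have in mind is something like the sketch below: the detection node would expose a std_srvs/SetBool service that gates its cloud callback, and the grasp script would switch it off before moving. The service name /detection/set_enabled, the topic /camera/depth_registered/points and the overall structure are only my guesses (my real detection node may differ, and if it is written in C++ the same idea would apply with a ros::ServiceServer). The grasp-script side of the call is sketched after my code below.

# Hypothetical detection-node side: a flag toggled by a SetBool service
import rospy
from std_srvs.srv import SetBool, SetBoolResponse
from sensor_msgs.msg import PointCloud2

detection_enabled = True

def handle_set_enabled(req):
    # toggle detection on/off; called by the grasp script
    global detection_enabled
    detection_enabled = req.data
    return SetBoolResponse(success=True, message="detection enabled: %s" % req.data)

def cloud_cb(msg):
    if not detection_enabled:
        return  # skip clustering / frame broadcasting while the arm is in view
    # ... existing clustering and '/part' frame broadcasting would go here ...

rospy.init_node('part_detection')
rospy.Service('/detection/set_enabled', SetBool, handle_set_enabled)
rospy.Subscriber('/camera/depth_registered/points', PointCloud2, cloud_cb)
rospy.spin()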

Feel free to ask any questions! Thanks in advance for your help!

The code:

import sys
from math import pi

import rospy
import tf
import moveit_commander
import moveit_msgs.msg
import geometry_msgs.msg
from geometry_msgs.msg import Quaternion
from tf import transformations as t


def move_arm_one():
    # Target pose: hover 20 cm above the detected part, with the tool flipped
    # so that it points down towards the part.
    pose_goal = geometry_msgs.msg.Pose()
    pose_goal.position.x = x
    pose_goal.position.y = y
    pose_goal.position.z = z + 0.2

    roll, pitch, yaw = t.euler_from_quaternion(orient)
    newroll = roll + pi  # putting the hand in the inverse position
    newquat = t.quaternion_from_euler(newroll, pitch, yaw)
    pose_goal.orientation = Quaternion(newquat[0], newquat[1], newquat[2], newquat[3])

    arm_group.set_pose_target(pose_goal)
    plan1 = arm_group.plan()
    arm_group.execute(plan1)
    # rospy.sleep(5)

    arm_group.set_start_state_to_current_state()
    gripper_group.set_start_state_to_current_state()

    # rospy.sleep(5)
    rospy.loginfo("Planning arm init: Succeed")
    print pose_goal

    arm_group.go(pose_goal, wait=True)
    arm_group.stop()
    # rospy.sleep(1)

if __name__ == '__main__':

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('moveit_grasp', anonymous=True)
    rospy.loginfo("Starting grasp app")

    display_trajectory_publisher = rospy.Publisher('/move_group/display_planned_path',
                                                   moveit_msgs.msg.DisplayTrajectory,
                                                   queue_size=20)

    robot = moveit_commander.RobotCommander()
    scene = moveit_commander.PlanningSceneInterface()
    arm_group = moveit_commander.MoveGroupCommander("manipulator")
    gripper_group = moveit_commander.MoveGroupCommander("gripper")

    arm_group.set_max_acceleration_scaling_factor(0.2)
    arm_group.set_max_velocity_scaling_factor(0.2)

    arm_group.set_planner_id("RRTkConfigDefault")
    arm_group.set_planning_time(20)

    planning_frame = arm_group.get_planning_frame()  # reference frame for the robot
    print " Reference frame: %s" % planning_frame
    eef_link = arm_group.get_end_effector_link()     # end-effector link
    print " End effector link: %s" % eef_link
    group_names = robot.get_group_names()            # all the groups in the robot
    print " Available Robot Groups:", group_names

    # create a tf listener and give it time to fill its buffer so the transform can be looked up
    listener = tf.TransformListener()
    rospy.loginfo("Filling up a buffer...")
    rospy.sleep(5.0)
    try:
        ((x, y, z), orient) = listener.lookupTransform('/part', '/camera_rgb_optical_frame', rospy.Time(0))
        rospy.loginfo("Position %s; Orientation %s", (x, y, z), orient)
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        rospy.logerr("Transform failed.")

    # movements (open_gripper / pregrasp_gripper are defined elsewhere, not shown here)
    open_gripper()
    move_arm_one()
    pregrasp_gripper()
    open_gripper()

    rospy.spin()
    moveit_commander.roscpp_shutdown()
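If the pause-service idea from above is workable, I suppose the movement calls in __main__ would change to something like this (again, /detection/set_enabled is a hypothetical service name, nothing in my setup provides it yet):

    from std_srvs.srv import SetBool

    rospy.wait_for_service('/detection/set_enabled')
    set_detection = rospy.ServiceProxy('/detection/set_enabled', SetBool)

    set_detection(False)   # pause detection while the arm enters the camera view
    open_gripper()
    move_arm_one()
    pregrasp_gripper()
    open_gripper()
    set_detection(True)    # resume detection once the arm is clear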