
yangyangcv's profile - activity

2017-10-27 12:01:09 -0500 received badge  Good Answer (source)
2017-05-06 11:09:45 -0500 received badge  Good Question (source)
2017-04-26 11:38:29 -0500 received badge  Great Question (source)
2016-12-23 03:03:24 -0500 received badge  Favorite Question (source)
2016-08-30 00:29:51 -0500 marked best answer How to make two service calls in parallel?

In my code I need to call two services. They are independent:

client1.call(service1);
client2.call(service2);
// do something after both of them get responses

With the pseudocode above the two calls execute sequentially, i.e., client2 does not issue its call until client1 has received a response. Since the two services are provided by two different nodes, I am hoping for some kind of non-blocking mode: after client1 issues its call, execution moves straight to the next line so that client2 can call service2. How can I achieve this?

Thanks.
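For reference, the non-blocking pattern can be sketched with std::async: launch each blocking call on its own thread and wait on both futures. The callService1/callService2 functions below are stand-ins for the real client1.call(service1)/client2.call(service2) calls (a sketch, not tested against roscpp; with real ros::ServiceClient objects, use each client from a single thread only).

```cpp
#include <chrono>
#include <future>
#include <thread>

// Stand-ins for client1.call(service1) / client2.call(service2):
// each blocks for a while and then reports success, like a ROS service call.
bool callService1() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return true;
}
bool callService2() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return true;
}

// Issue both calls concurrently and block until both responses arrive.
bool callBothInParallel() {
    std::future<bool> f1 = std::async(std::launch::async, callService1);
    std::future<bool> f2 = std::async(std::launch::async, callService2);
    bool ok1 = f1.get();   // waits for the first call's response
    bool ok2 = f2.get();   // waits for the second call's response
    return ok1 && ok2;     // "do something after both of them get responses"
}
```

Because both stand-in calls sleep concurrently, the total wait is roughly one call's latency rather than the sum of both.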

2016-08-02 13:16:42 -0500 received badge  Famous Question (source)
2016-05-24 01:19:35 -0500 received badge  Nice Answer (source)
2016-03-15 19:53:49 -0500 received badge  Good Answer (source)
2015-12-15 12:13:36 -0500 received badge  Good Question (source)
2015-11-23 23:51:43 -0500 received badge  Famous Question (source)
2015-11-12 09:38:19 -0500 received badge  Popular Question (source)
2015-10-28 07:10:45 -0500 marked best answer Is there any workspace analysis tool in ROS for a robot arm?

I have a 7-DOF robot arm and I want to use it for tabletop manipulation. First I want to do a workspace analysis for the arm, so that I know where to place the table such that the arm can reach any point on it. I have tried a brute-force approach: increment each joint by a fixed step and use FK to compute a position. But since the arm has 7 DOF this is very, very slow, and the final result contains too much data for MATLAB to display. I also know there is a Monte Carlo-based method, but I don't know how to implement it. So I am wondering whether there is any tool/package in ROS for workspace analysis. It would be even better if it could show the workspace in rviz. Thanks.
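The Monte Carlo method mentioned above is just uniform random sampling in joint space: draw many random joint vectors, run FK on each, and keep the resulting end-effector positions (which can then be binned or plotted). A minimal sketch, using a hypothetical planar 2-link arm with unit link lengths in place of the real 7-DOF FK:

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Point { double x, y; };

// Forward kinematics of a planar 2-link arm with unit link lengths
// (a stand-in for the real 7-DOF FK).
Point fk(double q1, double q2) {
    Point p;
    p.x = std::cos(q1) + std::cos(q1 + q2);
    p.y = std::sin(q1) + std::sin(q1 + q2);
    return p;
}

// Monte Carlo workspace sampling: draw n random joint configurations,
// run FK on each, and collect the end-effector positions.
std::vector<Point> sampleWorkspace(int n, unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> angle(-M_PI, M_PI);
    std::vector<Point> pts;
    pts.reserve(n);
    for (int i = 0; i < n; ++i)
        pts.push_back(fk(angle(gen), angle(gen)));
    return pts;
}
```

For the real arm, replace fk() with the robot's forward kinematics (e.g. from KDL) and sample all 7 joints within their limits; the resulting point cloud can then be published as markers for rviz.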

2015-08-10 06:04:43 -0500 received badge  Nice Question (source)
2015-05-22 09:33:19 -0500 received badge  Famous Question (source)
2015-03-26 18:54:03 -0500 received badge  Notable Question (source)
2015-03-26 18:54:03 -0500 received badge  Popular Question (source)
2015-03-09 10:09:15 -0500 received badge  Famous Question (source)
2014-11-19 21:56:22 -0500 received badge  Good Question (source)
2014-10-31 00:47:28 -0500 received badge  Enthusiast
2014-10-19 02:02:19 -0500 received badge  Nice Answer (source)
2014-09-05 15:08:27 -0500 received badge  Famous Question (source)
2014-09-05 15:08:27 -0500 received badge  Notable Question (source)
2014-08-12 05:13:05 -0500 received badge  Famous Question (source)
2014-06-24 16:40:05 -0500 received badge  Notable Question (source)
2014-04-20 13:30:34 -0500 marked best answer Data lost when subscribing from a bag file

This bag file (T_01.bag) publishes sensor_msgs::PointCloud2 on the topic "/points". I use the following code to subscribe to that topic and save each point cloud to disk.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/common/common.h>
#include <pcl/io/pcd_io.h>
#include <sstream>

int count = 0;
pcl::PCDWriter pcdWriter;

// Convert each incoming cloud to PCL and write it to pointN.pcd.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr pc_col(new pcl::PointCloud<pcl::PointXYZ>());
    pcl::fromROSMsg(*msg, *pc_col);
    std::cout << "saving " << count << std::endl;
    std::stringstream ss;
    ss << "point" << count << ".pcd";
    pcdWriter.write(ss.str(), *pc_col);
    ++count;
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "listener");
    ros::NodeHandle n;
    ros::Subscriber sub = n.subscribe("/points", 10, cloudCallback);

    ros::spin();

    return 0;
}

The problem is that there should be 4 point clouds in this bag file, but most of the time my program records only 3. Can anybody see why?

P.S. How can I make the subscriber quit automatically when the publisher terminates?

Thanks.
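For context, one common way to miss the first messages from a bag is connection latency: rosbag play starts publishing immediately, while the subscriber's connection to the player takes a moment to establish, so anything published before the connection completes is never delivered. A toy model of that effect (all times hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Toy model of subscriber connection latency: messages published before
// the subscriber's connection completes are never delivered.
int deliveredCount(const std::vector<double>& pubTimes, double connectTime) {
    int delivered = 0;
    for (std::size_t i = 0; i < pubTimes.size(); ++i)
        if (pubTimes[i] >= connectTime) ++delivered;
    return delivered;
}
```

Common workarounds are starting playback paused (rosbag play --pause) or delayed (-d/--delay), or reading the bag directly with the rosbag C++ API (rosbag::Bag/rosbag::View), which cannot drop messages.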

2014-04-20 13:29:06 -0500 marked best answer How to save a file to the current directory with roslaunch?

I wrote a simple C++ program that saves an image:

cv::imwrite(filename, visual_img);

When I run it directly from Eclipse, the image is saved in the current directory (the directory containing the Eclipse project file). But when I run it with roslaunch

<node pkg="testProg" type="testProg" name="testProg" />

the image is saved in ~/.ros.

What should I do to save it to the current directory?

P.S. I hit the same problem when loading the image: loading always fails if I put the image in the current directory and run the program via roslaunch.
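For what it's worth, roslaunch runs every node with $ROS_HOME (usually ~/.ros) as the working directory, which explains the behaviour above. One option (a sketch; the cwd attribute belongs to the roslaunch <node> tag) is:

```xml
<!-- cwd="node" uses the directory of the node's executable as the
     working directory instead of the default ROS_HOME (~/.ros). -->
<node pkg="testProg" type="testProg" name="testProg" cwd="node" />
```

A more robust alternative is to avoid relative paths entirely: pass an absolute path such as $(find testProg)/images to the node as a parameter (parameter name of your choosing) and build filename from it in code.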

2014-04-20 13:25:09 -0500 marked best answer How to do tabletop object manipulation for my own robot with ROS?

I have a 7-DOF robot arm. Now I want to use it to grasp tabletop objects like the PR2 does: http://www.youtube.com/watch?v=eUpvbOxrbwY

According to my understanding and the excellent tutorials at http://www.ros.org/wiki/pr2_tabletop_manipulation_apps/Tutorials, there are basically four things to do: 1) object perception: determine where the object is (this gives the goal for the arm); 2) environment perception: avoid collisions between the arm and the environment; 3) path planning for the arm and trajectory filtering; 4) move the arm to the goal according to the planning result.

First of all, I want to realize steps 3 and 4 with my robot arm: I manually specify the start and goal, then move the arm to the goal according to the planning result. Following the IROS 2011 tutorial Motion Planning for Real Robots (http://www.kavrakilab.org/iros2011), I created the URDF for my robot and can do motion planning with the Arm Navigation Warehouse Viewer. But with the Warehouse Viewer I can only specify the start and goal through the GUI. How can I assign them programmatically? The IROS 2011 tutorial does have a chapter named Programmatically Controlling the Robot Arm, but it is written for the PR2 and I cannot figure out how to apply it to my arm. So my question seems to reduce to: how do I control my robot arm programmatically, e.g., move each joint to a specified value? In fact, I have already achieved this with robot_state_publisher, following this tutorial: http://www.ros.org/wiki/urdf/Tutorials/Using%20urdf%20with%20robot_state_publisher. Coming back to the chapter Programmatically Controlling the Robot Arm: the provided .cpp file only constructs the goal command and sends it out; the planning and arm-action parts are done by right_arm_navigation.launch in the pr2_3dnav package, and I don't know how to reproduce that part for my robot arm.

Sorry for the tedious description. Here is a short summary:

Ultimate goal: do motion planning for my robot arm with the start and goal assigned programmatically.

What I have already done: 1) constructed the URDF for my robot; 2) performed motion planning with the Arm Navigation Warehouse Viewer; 3) moved each joint to a specified position programmatically with joint_state_publisher and robot_state_publisher.

Question: how do I reach my ultimate goal from what I have already achieved?

Thanks in advance for all answers and comments.

2014-04-20 13:14:38 -0500 marked best answer Are there any docs about the principles of the IK provided by KDL?

Recently I have been using the inverse kinematics (IK) service provided by KDL. It works great! But I would like to understand its principles. Are there any docs or papers about this?

Thanks in advance.
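For context, KDL's default position IK (ChainIkSolverPos_NR) is a Newton-Raphson scheme: compute the Cartesian error of the current joint guess, map it through an inverse of the Jacobian into a joint-space update (the companion velocity solver ChainIkSolverVel_pinv uses an SVD-based pseudoinverse), and iterate until the error is small. A minimal sketch of the same scheme for a hypothetical planar 2-link arm with unit links, using the analytic 2x2 Jacobian:

```cpp
#include <cmath>

struct Q { double q1, q2; };

// Newton-Raphson position IK for a planar 2-link arm with unit links:
// the same loop KDL's ChainIkSolverPos_NR runs, with an analytic 2x2
// Jacobian standing in for KDL's numeric chain Jacobian.
bool solveIK(double tx, double ty, Q& q, int maxIter = 100, double tol = 1e-8) {
    for (int i = 0; i < maxIter; ++i) {
        double c1 = std::cos(q.q1), s1 = std::sin(q.q1);
        double c12 = std::cos(q.q1 + q.q2), s12 = std::sin(q.q1 + q.q2);
        // forward kinematics of the current guess
        double x = c1 + c12, y = s1 + s12;
        double ex = tx - x, ey = ty - y;            // Cartesian error
        if (ex * ex + ey * ey < tol * tol) return true;
        // analytic Jacobian of the 2-link FK
        double j11 = -s1 - s12, j12 = -s12;
        double j21 =  c1 + c12, j22 =  c12;
        double det = j11 * j22 - j12 * j21;         // equals sin(q2)
        if (std::fabs(det) < 1e-12) return false;   // at a singularity
        // dq = J^{-1} * error  (the Newton-Raphson step)
        q.q1 += ( j22 * ex - j12 * ey) / det;
        q.q2 += (-j21 * ex + j11 * ey) / det;
    }
    return false;
}
```

The Jacobian determinant here is sin(q2), so the iteration breaks down at the stretched-out and folded configurations; this is also why Jacobian-based solvers like KDL's can fail near singularities.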

2014-04-20 13:13:50 -0500 marked best answer Can OMPL deal with mobile manipulation?

Dear all, I have finished object manipulation with the amazing arm_navigation stack: my robot can now grasp objects from a table while its base is fixed. However, due to the limited workspace of the arm, the robot can only grasp objects within a small region; it cannot even cover the whole table. So I am wondering: can we plan a location for the robot base while doing the arm motion planning, e.g., have the robot go to the other side of the table to grasp objects there? From my understanding of RRT, this merely adds 3 degrees to the search space: the x, y, and rotation of the base. Can we do this with the current planning_description_configuration_wizard? Or does anybody have any hint for this task?

Thanks in advance.

EDIT: dear pirate, although you have pointed me in a direction, I am still confused about the details. Do you mean that I need to modify the MyRobot_planning_description.yaml file generated by the wizard? If so, how should I modify it? The file begins with:

multi_dof_joints:
  - name: world_joint
    type: Fixed
    parent_frame_id: base_link
    child_frame_id: base_link
groups:
  - name: left_arm
    base_link: base_link
    tip_link: left_link7

As you said, I need to add a P-R-P or P-P-R joint; where should I add this information?

Besides, since I am planning the arm motion in joint space rather than task space, I first need to convert the goal pose into joint space with IK. That means I must first fix the robot base's position and only then plan for the arm, so the whole planning is actually split into two parts: first planning for the base in SE(2), then planning for the arm in configuration space. I would prefer the planner to treat the whole problem as one.

I don't know if I have put this clearly :-( but I would really appreciate more details about your realization.

Thanks again.
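In case it helps future readers, my understanding (a sketch only, not verified on a real setup) is that the wizard-generated multi_dof_joints section is where the base's freedom is declared: changing the world_joint type from Fixed to Planar would expose x, y, and yaw of the base to the planner, and the planning group would then need to span from a world-fixed frame to the arm tip. The frame name odom_combined and the group name below are assumptions:

```yaml
multi_dof_joints:
  - name: world_joint
    type: Planar                     # x, y, yaw of the base (was: Fixed)
    parent_frame_id: odom_combined   # a world-fixed frame (assumption)
    child_frame_id: base_link
groups:
  - name: left_arm_and_base          # hypothetical combined group
    base_link: odom_combined
    tip_link: left_link7
```

With the base's degrees of freedom inside one planning group, the planner can search over base pose and arm configuration jointly rather than in two separate stages.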

2014-04-20 13:13:14 -0500 marked best answer can we get the penetration depth with CollisionModels?

I found this method in planning_environment::CollisionModels:

void getAllCollisionsForState(const planning_models::KinematicState& state, std::vector<arm_navigation_msgs::ContactInformation>& contacts, unsigned int num_per_pair = 1);

The arm_navigation_msgs::ContactInformation message does have a field named depth, but in my tests it is always 0, even when two objects penetrate each other deeply. Why doesn't this method give the correct depth? Is it because ODE doesn't support it? Is there another way to get the penetration depth?

Thanks.

2014-04-20 13:12:47 -0500 marked best answer how to make Collision Markers visible in rviz?

According to this tutorial, I can successfully check state validity, but I don't know what to do to make the collision markers visible in rviz. Do I need to publish a topic with the collision markers manually? I thought /planning_scene_validity_server would do this job. I have already added a "Marker" display in rviz, but I don't know which topic it should listen to.

Thanks.

2014-04-20 13:12:33 -0500 marked best answer Why environment server does not keep the collision object?

According to my understanding, once we use the set_planning_scene_diff service to add a collision object to the environment_server, the object stays there until another call explicitly deletes it. So if a first call adds a cylinder and a later call sends an empty planning scene diff, the cylinder should still be there. But my test suggests that is not the case. Here is the test:

I have two nodes. One node (nodeA) starts a launch file just like the one (myRobot_arm_navigation.launch) generated by the planning_description_configuration_wizard, i.e., it starts the environment_server, OMPL, trajectory_filter, kinematics, etc. The other node (nodeB) is written by me: it first adds a cylinder as a collision object to the environment_server (through a set_planning_scene_diff call), and then plans from the start state to a goal where the cylinder is. That works well. Then I close nodeB, comment out the part that adds the cylinder, and re-run it, so this time set_planning_scene_diff is called with an empty planning scene. Now the cylinder seems to disappear. Does anybody know why?

Thanks.

2014-04-20 13:12:32 -0500 marked best answer Why does GetConstraintAwarePositionIK return a failure so quickly?

When using GetConstraintAwarePositionIK I set the timeout to 5 seconds, but sometimes it returns the error code NO_IK_SOLUTION very quickly (in less than 1 second). Why does this happen? My understanding is that if no solution is found, it should keep trying until the timeout.

Thanks.