
ZeroCool's profile - activity

2019-06-06 00:06:22 -0500 received badge  Favorite Question (source)
2015-05-12 10:22:19 -0500 received badge  Great Question (source)
2014-11-03 04:10:17 -0500 received badge  Nice Question (source)
2014-01-28 17:26:08 -0500 marked best answer Amcl initialpose coordinate system

I'm still in the process of learning ROS and I have a question regarding the initialpose topic. I'm transforming points from the Kinect point cloud to map coordinates (for sending goals) and using the initialpose orientation to know which way is forward on the map. The problem is that the orientation I get has the values x,y,z = 0,0,-1. Why is z set and not x? Isn't the z axis the axis pointing up?

Thanks.
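
A minimal sketch of how that orientation can be read, assuming the pose comes from the standard /initialpose topic and the usual tf helpers; the callback name is a placeholder. The key point is that the x, y, z, w fields are quaternion components, not an axis vector, so a pure rotation in the plane shows up in z and w rather than x or y.

    #include <ros/ros.h>
    #include <tf/transform_datatypes.h>
    #include <geometry_msgs/PoseWithCovarianceStamped.h>

    // Callback for /initialpose: the orientation is a quaternion (x, y, z, w),
    // so a yaw-only rotation has non-zero z and w even though the robot turns
    // about the map's z axis.
    void initialPoseCallback(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr& msg)
    {
      // Convert the quaternion to a yaw angle (rotation about the map's z axis).
      double yaw = tf::getYaw(msg->pose.pose.orientation);
      ROS_INFO("Initial pose yaw: %f rad", yaw);
    }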

2014-01-28 17:26:07 -0500 marked best answer How to use initialpose orientation

I am still in the process of learning ROS and I need some help. I have a task where I need to detect an obstacle (based on its color) in the point cloud and move the turtlebot around the obstacle (also based on the color). The obstacle is a cylinder of a fixed radius (0.3 m).

To accomplish this I have made one node which processes the color and one which handles the navigation. When I detect an obstacle in the point cloud I convert the closest point into the map frame using a transform listener (this works). Then I use a MoveBaseClient to send goals to the turtlebot.

The problem here is that I need to know where the front and the back of the obstacle are (so the turtlebot doesn't spin around the same obstacle). My solution is to set the initialPose (in rviz) with an orientation perpendicular to the obstacle. This way I can compute 3 points: one in front of the obstacle, one on the left or right of it, and one behind it. The problem is that this doesn't work as I planned and the points are not computed properly. The first issue is that the orientation I get is in the form x,y,z and only z is set. Also, when I normalize the vector I lose the correct orientation (which is clearly seen in the output below). How can I construct a vector with the same direction as the orientation but with the length of the obstacle radius? The other thing is that the map itself has its own orientation; do I have to consider that as well? I would be grateful if someone could help me understand what I am doing wrong. I have spent weeks trying to get this to work.

The code for computing the points:

    // vector of the obstacles center (coordinates are in map frame)
    tf::Vector3 vec(ob->position.x, ob->position.y, 0);
    // vector of the initialPose orientation
    tf::Vector3 vec2(initialPose.orientation.x, initialPose.orientation.z, 0);
    // normalize the orientation vector
    vec2.normalize();
    tf::Vector3 res;
    // multiply the orientation vector with the obstacle radius
    vec2 = vec2 * (ob->radius);
    float angle, px, py;
    px = py = angle = 0;
    switch (pointIndex) {
        case 0: // point in front of the obstacle
            // position + orientation
            res = vec - vec2;
            break;
        case 1: // point on the left/right of the obstacle
            if (ob->color == 2) {
                angle = 1.57079633f; // 90 degrees
            } else {
                angle = -1.57079633f; // -90 degrees
            }
            // rotate the vector
            px = cos(angle) * vec2.getX() - sin(angle) * vec2.getY();
            py = sin(angle) * vec2.getX() + cos(angle) * vec2.getY();
            vec2.setX(px); vec2.setY(py);
            // position + orientation
            res = vec + vec2;
            break;
        case 2: // point behind the obstacle
            // position + orientation
            res = vec + vec2;
            break;
    }
    // set the goal
    setGoal(res);

Output of the code:

    [ INFO] [1339592203.148984056]: new obstacle detected at X,Y,Z = [-2.574487,2.275727,0.000000] with color 2

    [ INFO] [1339592212.150162291]: Got orientation: 0.000000 0.000000 -0.376139 i ...
(more)
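
A minimal sketch of the direction-vector construction the question asks about, assuming the initial pose orientation is the usual quaternion published on /initialpose; the helper name is hypothetical. The idea is to convert the quaternion to a yaw angle first, then build a unit vector in the map's x-y plane and scale it by the obstacle radius, rather than reading the quaternion's x/z components as if they were a direction.

    #include <cmath>
    #include <tf/transform_datatypes.h>
    #include <geometry_msgs/Quaternion.h>

    // Build a vector pointing in the direction given by the orientation quaternion,
    // with length equal to the obstacle radius (all in the map's x-y plane).
    tf::Vector3 directionFromOrientation(const geometry_msgs::Quaternion& q, double radius)
    {
      double yaw = tf::getYaw(q);  // heading angle about the map's z axis
      return tf::Vector3(std::cos(yaw), std::sin(yaw), 0.0) * radius;
    }
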
2014-01-28 17:25:49 -0500 marked best answer move turtlebot 1 meter forward

I would like to move my turtlebot forward by 1 meter (in the direction it is facing). The way I do it is like this:

  • subscribe to the topic "/camera/depth_registered/points"
  • create a "fake point" (just like the Kinect would return; I'm creating the fake point just to see if the point calculation is correct)
  • transform the point into /map space
  • publish the point to /move_base_simple/goal

The code:

 geometry_msgs::PointStamped p;
 geometry_msgs::PointStamped p1;
 p.header.stamp = ros::Time();
 std::string frame1 = "/camera_depth_optical_frame";
 p.header.frame_id = frame1;

 p.point.x = 0;
 p.point.y = 0;
 p.point.z = 1; // 1 meter

 std::string frame = "map";

 try
 {
   listener.transformPoint(frame,p,p1);
 }catch(tf::TransformException& ex) { ROS_ERROR("Exception while transforming: %s", ex.what()); }

 // create message for move_base_simple/goal 
 geometry_msgs::PoseStamped msg;
 msg.header.stamp = ros::Time();
 std::string frame = "/map";
 msg.header.frame_id = frame.c_str();
 msg.pose.position = p1.point;
 msg.pose.orientation = tf::createQuaternionMsgFromYaw(0.0);
 publisher.publish(msg);

The problem is that when I visualize the goal in rviz it is wrong, so the robot moves the wrong way. Can someone please explain how to fix this? I've been struggling with it for days and I'm quite desperate.

Thanks!
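
A minimal sketch of an alternative, assuming the robot's forward direction is the x axis of base_link (the usual ROS convention); the function name and the way the listener and publisher are passed in are placeholders. Instead of transforming only the point and forcing a yaw of 0 in the map frame, the goal is expressed as a full PoseStamped one meter ahead in base_link and tf transforms both position and orientation into the map frame.

    #include <ros/ros.h>
    #include <tf/transform_listener.h>
    #include <geometry_msgs/PoseStamped.h>

    // Build a goal 1 m straight ahead of the robot and express it in the map frame.
    // 'listener' is an existing tf::TransformListener; 'publisher' publishes
    // geometry_msgs::PoseStamped on /move_base_simple/goal.
    void sendGoalOneMeterAhead(tf::TransformListener& listener, ros::Publisher& publisher)
    {
      geometry_msgs::PoseStamped goal_base, goal_map;
      goal_base.header.frame_id = "base_link";   // robot body frame, x points forward
      goal_base.header.stamp = ros::Time(0);     // use the latest available transform
      goal_base.pose.position.x = 1.0;           // 1 m ahead of the robot
      goal_base.pose.orientation.w = 1.0;        // keep the robot's current heading

      try
      {
        // Transform position *and* orientation into the map frame.
        listener.transformPose("map", goal_base, goal_map);
        publisher.publish(goal_map);
      }
      catch (tf::TransformException& ex)
      {
        ROS_ERROR("Could not transform goal: %s", ex.what());
      }
    }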

2014-01-28 17:25:48 -0500 marked best answer Point Cloud point to map coordinates

Can someone please explain how I can transform a point (from a point cloud) into the coordinate space of the map (which is built using gmapping)?

I've read the TF tutorials (http://ros.org/wiki/tf, http://ros.org/wiki/tf/Tutorials) but I still don't understand them well.
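
A minimal sketch of the usual approach with tf, assuming the point is stamped with the frame the cloud was captured in (for example the Kinect's optical frame); the helper name is hypothetical. A tf::TransformListener is asked to re-express the stamped point in the map frame.

    #include <tf/transform_listener.h>
    #include <geometry_msgs/PointStamped.h>

    // Re-express a point from the camera's frame in the map frame.
    // 'point_in_camera' must have header.frame_id set to the cloud's frame.
    geometry_msgs::PointStamped toMapFrame(tf::TransformListener& listener,
                                           const geometry_msgs::PointStamped& point_in_camera)
    {
      geometry_msgs::PointStamped point_in_map;
      // Throws tf::TransformException if the transform is not (yet) available.
      listener.transformPoint("map", point_in_camera, point_in_map);
      return point_in_map;
    }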

2014-01-28 17:25:36 -0500 marked best answer Turtlebot obstacles

I would like to make my turtlebot navigate (by itself) through a room, where I give it the destination. The current solution builds a map using gmapping and then I use rviz to give it a destination on the map.

It works, but now I would like to add sticky notes (of different colors) on obstacles.

The turtlebot has to detect the sticky note (from the point cloud) and move itself in the right direction based on the color: for example, if the sticky note is red it should go to the right, if it is green it should go to the left.

Which packages can we use? Any recommendations?

I'm a beginner with ROS and I would appreciate a point in the right direction.

Thanks.

2013-09-26 21:17:11 -0500 received badge  Good Question (source)
2013-04-11 14:55:32 -0500 received badge  Taxonomist
2013-03-17 02:38:20 -0500 received badge  Nice Question (source)
2012-12-05 00:43:20 -0500 marked best answer Header frame_id

I am learning ROS and I have a question regarding frame_id in the header (like in PoseStamped). If I understand correctly, frame_id means which tf frame the position is relative to. So does that mean that if I have a point which I get from the Kinect and I send the robot a goal with frame_id /camera_rgb_optical_frame, it will automatically transform the point into the correct space? Or have I misunderstood?
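
For illustration, a minimal sketch of what that would look like, with a hypothetical helper name: the header's frame_id names the tf frame the pose is expressed in, and a consumer such as move_base then uses tf to transform the goal into its own global frame (provided the transform is available).

    #include <ros/ros.h>
    #include <geometry_msgs/PoseStamped.h>

    // A goal expressed in the camera's optical frame; the frame_id tells
    // whoever receives it which frame the pose is relative to.
    geometry_msgs::PoseStamped makeCameraFrameGoal()
    {
      geometry_msgs::PoseStamped goal;
      goal.header.frame_id = "camera_rgb_optical_frame";  // pose is relative to this frame
      goal.header.stamp = ros::Time::now();
      goal.pose.position.z = 1.0;    // 1 m along the camera's optical axis
      goal.pose.orientation.w = 1.0; // identity orientation in that frame
      return goal;
    }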

2012-11-12 23:21:08 -0500 received badge  Popular Question (source)
2012-11-12 23:21:08 -0500 received badge  Notable Question (source)
2012-11-12 23:21:08 -0500 received badge  Famous Question (source)
2012-11-05 05:48:36 -0500 received badge  Popular Question (source)
2012-11-05 05:48:36 -0500 received badge  Famous Question (source)
2012-11-05 05:48:36 -0500 received badge  Notable Question (source)