
Lorenzo's profile - activity

2014-01-28 17:28:01 -0500 marked best answer Kinect + gmapping

Hi everybody, I am trying to get the map of a room using the slam_gmapping node plus a Kinect (the point cloud is converted to a LaserScan with the pointcloud_to_laserscan package).

I am having some trouble... What is happening is that the new best pose calculated by the gmapping node is always quite far from the odometry (which is correct), and this leads to a wrong map because the robot "teleports" instantly from one position to another when the correction is applied.

Moreover I am getting (not so often) the following error:

[ERROR] [1319703421.207565904]: You requested a transform that is 17.136 seconds in the past,
but the tf buffer only has a history of 9.976 seconds.
When trying to transform between /base_link and /odom.

As I have seen from the code, this error is surely a problem: when it happens, the transformation between map and odom is set to identity, which shifts everything (wouldn't it be better to keep the last transformation?). I have tried to slow down the /odom frame broadcaster, but it helps only a little.

Here is my launch file:

<launch>
    <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
        <param name="map_update_interval" value="1.0"/>
        <param name="maxUrange" value="6.0"/>
        <param name="sigma" value="0.05"/>
        <param name="kernelSize" value="1"/>
        <param name="lstep" value="0.05"/>
        <param name="astep" value="0.05"/>
        <param name="iterations" value="5"/>
        <param name="lsigma" value="0.075"/>
        <param name="ogain" value="3.0"/>
        <param name="lskip" value="0"/>
        <param name="srr" value="0.01"/>
        <param name="srt" value="0.02"/>
        <param name="str" value="0.01"/>
        <param name="stt" value="0.02"/>
        <param name="linearUpdate" value="0.10"/>
        <param name="angularUpdate" value="0.05"/>
        <param name="temporalUpdate" value="8.0"/>
        <param name="resampleThreshold" value="0.5"/>
        <param name="particles" value="300"/>
        <param name="xmin" value="-10.0"/>
        <param name="ymin" value="-10.0"/>
        <param name="xmax" value="10.0"/>
        <param name="ymax" value="10.0"/>
        <param name="delta" value="0.05"/>
        <param name="llsamplerange" value="0.01"/>
        <param name="llsamplestep" value="0.01"/>
        <param name="lasamplerange" value="0.005"/>
        <param name="lasamplestep" value="0.005"/>
    </node>
</launch>
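For what it's worth, one thing I am considering trying is reducing the scan-matching load instead (a minimal sketch, assuming the stale-transform error comes from processing delay; throttle_scans and transform_publish_period are standard slam_gmapping parameters, but the values here are only guesses):

<launch>
  <!-- Sketch only: throttle the incoming scans, keep re-publishing map->odom,
       and use far fewer particles so each update finishes faster. -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <param name="throttle_scans" value="2"/>              <!-- process only every 2nd scan -->
    <param name="transform_publish_period" value="0.05"/> <!-- map->odom republished at 20 Hz -->
    <param name="particles" value="30"/>                  <!-- much lower than the 300 above -->
  </node>
</launch>

Lowering the particle count should shorten each update at the cost of some map quality, which might also reduce how far the transform request falls behind the tf buffer.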

Any suggestions?

Thank you

2014-01-28 17:23:07 -0500 marked best answer Kinect amcl

Hi everybody! I am testing amcl on a KUKA youBot equipped with a Kinect. Following this tutorial http://www.ros.org/wiki/navigation/Tutorials/Navigation%20Tuning%20Guide I move the robot with a joypad around the known map to check amcl.

This is what I obtain: http://www.youtube.com/watch?v=Bn0Bde0BjSs

As you can see, the correction for translation is quite good; the problem is that the correction doesn't rotate enough to compensate the odometry error in rotation.

Here is my amcl file

<launch>
<node pkg="amcl" type="amcl" name="amcl">
  <!-- Publish scans from best pose at a max of 10 Hz -->
  <param name="base_frame_id" value="/base_footprint"/>
  <param name="odom_model_type" value="omni"/>
  <param name="odom_alpha5" value="0.5"/>
  <param name="transform_tolerance" value="0.2" />
  <param name="gui_publish_rate" value="10.0"/>
  <param name="laser_max_beams" value="60"/>
  <param name="min_particles" value="1000"/>
  <param name="max_particles" value="10000"/>
  <!--param name="kld_err" value="0.05"/-->
  <param name="kld_z" value="0.99"/>
  <param name="odom_alpha1" value="0.5"/>
  <param name="odom_alpha2" value="0.5"/>
  <!-- translation std dev, m -->
  <param name="odom_alpha3" value="0.5"/>
  <param name="odom_alpha4" value="0.5"/>
  <param name="laser_z_hit" value="0.95"/>
  <param name="laser_z_short" value="0.15"/>
  <param name="laser_z_max" value="0.03"/>
  <param name="laser_z_rand" value="0.01"/>
  <param name="laser_sigma_hit" value="0.002"/>
  <param name="laser_lambda_short" value="0.1"/>
  <param name="laser_model_type" value="likelihood_field"/>
  <param name="laser_likelihood_max_dist" value="4.0"/>
  <param name="update_min_d" value="0.05"/>
  <param name="update_min_a" value="0.05"/>
  <param name="odom_frame_id" value="/odom"/>
  <param name="resample_interval" value="1"/>
  <param name="recovery_alpha_slow" value="0.001"/>
  <param name="recovery_alpha_fast" value="0.1"/>
  <param name="initial_pose_x" value="0.35"/>
  <param name="initial_pose_y" value="1.08"/>
  <param name="initial_pose_a" value="1.57"/>
  <param name="first_map_only" value="true"/>
  <param name="laser_max_range" value="4.0"/>
  <param name="laser_min_range" value="0.6"/>
</node>
</launch>

The youBot odometry is not so good, so I set the odom_alpha parameters high and tried to trust the laser much more by using a very low "laser_sigma_hit" parameter.

Any suggestion to make it work better? Thank you guys

2014-01-28 17:22:58 -0500 marked best answer Multiple Kinect

Hi! I am trying to use two Kinects, one to perform human tracking and object recognition, and one to navigate (using pointcloud_to_laserscan).

I modified the openni_node.launch with two nodes and I adjusted kinect_frames.launch. Both Kinects publish the RGB image, but only one (apparently chosen at random) publishes the point cloud.

Here are the files I modified

openni_node.launch

<launch>
  <arg name="debug" default="false"/>
  <arg if="$(arg debug)" name="launch_prefix" value="xterm -rv -e gdb -ex run -args"/>
  <arg unless="$(arg debug)" name="launch_prefix" value=""/>
  <node pkg="openni_camera" type="openni_node" name="openni_node1" output="screen" launch-prefix="$(arg launch_prefix)">
    <!--param name="topic" value="first" /-->
    <!--param name="device_id" value="2@3" --> <!-- this line uses device on usb bus 2 and address 3 -->
    <param name="device_id" value="A00366A12990044A"/> <!-- this line uses device with given serial number -->

    <!--param name="device_id" value="#1" /-->

    <rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml" />
    <param name="rgb_frame_id" value="/openni_rgb_optical_frame" />
    <param name="depth_frame_id" value="/openni_depth_optical_frame" />
    <param name="use_indices" value="false" />
    <param name="depth_registration" value="true" />
    <param name="image_mode" value="2" />
    <param name="depth_mode" value="2" />
    <param name="debayering" value="2" />
    <param name="depth_time_offset" value="0" />
    <param name="image_time_offset" value="0" />
  </node>
    <node pkg="openni_camera" type="openni_node" name="openni_node2" output="screen" launch-prefix="$(arg launch_prefix)">
    <!--param name="topic" value="first" /-->
    <param name="device_id" value="A00363A00048052A"/>

    <rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml" />
    <remap from="camera" to="camera_2"/>
    <param name="rgb_frame_id" value="/openni_rgb_optical_frame_2" />
    <param name="depth_frame_id" value="/openni_depth_optical_frame_2" />
    <param name="use_indices" value="false" />
    <param name="depth_registration" value="true" />
    <param name="image_mode" value="2" />
    <param name="depth_mode" value="2" />
    <param name="debayering" value="2" />
    <param name="depth_time_offset" value="0" />
    <param name="image_time_offset" value="0" />
  </node>
  <include file="$(find openni_camera)/launch/kinect_frames.launch"/>
</launch>

kinect_frames.launch

<launch>
<!--  <node pkg="tf" type="static_transform_publisher" name="base_link_to_openni" args="-0.04 0 0 1.57 -1.57 0 /base_link /openni_camera 100" />  -->
  <node pkg="tf" type="static_transform_publisher" name="kinect_fake_laser" args="-0.155 -0.14 0.19 -1.57 0 0  /base_link /base_laser 100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link" args="0.02 0 0 -3.14 0 0 /openni_camera /openni_depth_frame 100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link1" args="0.04 0 0 -3.14 0 0 /openni_camera /openni_rgb_frame 100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link2" args="0 0 0 3.14 0 0 /openni_depth_frame /openni_depth_optical_frame  100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link3" args="0 0 0 3.14 0 0 /openni_rgb_frame /openni_rgb_optical_frame 100" />

  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link0_2" args="-0.155 -0.14 0.19 3.14 0 -1.57 /base_link /openni_camera_2 100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link_2" args="0.02 0 0 -3.14 0 0 /openni_camera_2 /openni_depth_frame_2 100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link1_2" args="0.04 0 0 -3.14 0 0 /openni_camera_2 /openni_rgb_frame_2 100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link2_2" args="0 0 0 3.14 0 0 /openni_depth_frame_2 /openni_depth_optical_frame_2  100" />
  <node pkg="tf" type="static_transform_publisher" name="kinect_base_link3_2" args="0 0 0 3.14 0 0 /openni_rgb_frame_2 /openni_rgb_optical_frame_2 100" />

</launch>

Thank ...

2014-01-28 17:22:45 -0500 marked best answer Navigation stack on youBot

Hi everybody, I am developing my final thesis using a KUKA youBot. I am trying to run the navigation stack on it; after many attempts everything seems to work (no warnings, no errors), but when I give a navigation goal through rviz this is what happens:

http://www.youtube.com/watch?v=EjOxJTRCj7s

The green path is the full plan for the robot (topic: NavfnROS/plan), while the red path is the portion of the global plan that the local planner is currently pursuing (TrajectoryPlannerROS/global_plan). There should also be TrajectoryPlannerROS/local_plan in black, but it is not visible.

Here are the configuration files I am using:

costmap_common_params.yaml

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.290,0.190], [0.290,-0.190], [-0.290,-0.190], [-0.290,0.190]]
#robot_radius: ir_of_robot
inflation_radius: 0.55

observation_sources: laser_scan_sensor

laser_scan_sensor: {sensor_frame: base_laser, data_type: LaserScan, topic: scan, marking: false, clearing: false}

#point_cloud_sensor: {sensor_frame: openni_rgb_optical_frame, data_type: PointCloud2, topic: camera/rgb/points, marking: false, clearing: false}
--------------------------------------------

global_costmap_params.yaml

global_costmap:
  global_frame: /map
  robot_base_frame: /base_footprint
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: true
  transform_tolerance: 0.8
(If I use a lower tolerance I get the Costmap2DROS transform timeout warning.)
-----------------------------

local_costmap_params.yaml

local_costmap:
  global_frame: odom
  robot_base_frame: base_footprint
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: false
  rolling_window: false
  width: 6.0
  height: 6.0
  resolution: 0.10
  origin_x: 10
  origin_y: 10

base_local_planner_params.yaml

TrajectoryPlannerROS:
  max_vel_x: 0.07
  min_vel_x: 0.05
  max_rotational_vel: 0.07
  min_in_place_rotational_vel: 0.05

  acc_lim_th: 4.0
  acc_lim_x: 3.7
  acc_lim_y: 3.5

  yaw_goal_tolerance: 0.2
  xy_goal_tolerance: 0.2

  holonomic_robot: true
  y_vels: [-0.1, -0.05, 0.05, 0.1]

  dwa: true

The navigation is blind because the task will not involve dynamic obstacles; the only obstacles are the workspace walls.

Am I missing something? What possibly is going wrong?

Thank you so much for the help

Lorenzo

2014-01-28 17:22:16 -0500 marked best answer Household_objects_database new entries

I would like to know whether it is possible to add objects to the household_objects_database once it has been downloaded and is running on a local machine. Is it possible, for example, using rein or other tools from the object_recognition stack?

Thank you so much

2013-10-28 01:36:44 -0500 received badge  Favorite Question (source)
2012-11-08 21:01:30 -0500 commented question Kinect + gmapping

Since I did it some time ago, I cannot remember well :) Anyway, here is the pointcloud_to_laserscan launch file I used: https://www.dropbox.com/s/mv7fg5pv2shvlk9/kinect_laser.launch

It seems to use a nodelet from the nodelet package. Hope it helps.
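In case the link dies, here is a rough sketch from memory of what that kind of file contains, assuming the turtlebot-era pointcloud_to_laserscan package (the nodelet name and parameters are approximate, so double-check them against your installed package):

<launch>
  <!-- Sketch from memory: a nodelet manager plus the CloudToScan nodelet that
       converts the Kinect point cloud into a fake horizontal laser scan. -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" args="manager" output="screen"/>
  <node pkg="nodelet" type="nodelet" name="kinect_laser"
        args="load pointcloud_to_laserscan/CloudToScan openni_manager" output="screen">
    <param name="output_frame_id" value="/base_laser"/>
    <remap from="cloud" to="/camera/depth/points"/>
  </node>
</launch>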

2012-11-08 20:51:56 -0500 commented answer Multiple Kinect

I wrote the code of the two services here, so you can get an idea of how to do it: http://dl.dropbox.com/u/369118/start_stop.cpp

2012-11-08 20:51:39 -0500 commented answer Multiple Kinect

Hi, what we did was use the built-in functions of the openni driver to create ROS services that enable or disable the streams, and we manage them at a higher level.

2012-09-10 05:23:47 -0500 received badge  Famous Question (source)
2012-09-05 04:39:05 -0500 received badge  Famous Question (source)
2012-08-27 15:42:45 -0500 received badge  Famous Question (source)
2012-08-15 05:39:10 -0500 received badge  Famous Question (source)
2012-07-31 12:05:20 -0500 received badge  Notable Question (source)
2012-07-14 22:03:53 -0500 received badge  Notable Question (source)
2012-07-11 20:13:24 -0500 received badge  Famous Question (source)
2012-04-30 21:48:14 -0500 received badge  Notable Question (source)
2012-03-27 02:09:10 -0500 received badge  Popular Question (source)
2012-03-06 23:25:10 -0500 received badge  Nice Question (source)
2012-03-06 22:35:46 -0500 commented answer Multiple Kinect

Yeah... we resolved the issue by creating ROS services that activate and deactivate the Kinect data streams, so we can switch at will from one Kinect to the other.

2012-03-02 02:32:19 -0500 received badge  Notable Question (source)
2012-02-29 07:51:38 -0500 received badge  Notable Question (source)
2012-02-26 20:38:37 -0500 received badge  Popular Question (source)
2012-01-12 19:30:53 -0500 commented answer Household_objects_database new entries
Hi! We used 3ds Max to create the models in .obj format, then converted them to .ply format with MeshLab. We also modified the insert_model.cpp code, but I don't remember exactly how... Try creating models as we did; as soon as I can, I'll tell you how to modify the code, too.
2012-01-03 11:42:55 -0500 marked best answer Multiple Kinect

I believe the issue is that both nodes are publishing to the same topic simultaneously. Since the topic name of each node is not a parameter, you cannot change the topic in the standard way. Instead, you must use the following in your launch file:

<node pkg="openni_camera" type="openni_node" name="openni_node1" output="screen" launch-prefix="$(arg launch_prefix)">
   <remap from="camera/depth/points" to="camera/depth/points1" />
   ...
</node>

<node pkg="openni_camera" type="openni_node" name="openni_node2" output="screen" launch-prefix="$(arg launch_prefix)">
   <remap from="camera/depth/points" to="camera/depth/points2" />
   ...
</node>
2012-01-03 11:31:13 -0500 marked best answer Navigation stack on youBot

There are a couple of problems that I see in your parameter set:

1) If you really do intend to avoid obstacles based only on localization and a static map, you'll need to make sure that the local costmap is set up to run with that information. Otherwise, the local planner won't know about those static obstacles. If the map is small, you'll probably just be able to have the local costmap have the same parameters as the global costmap. However, if the map is going to be large, you'll probably want to feed the local costmap tiles of the global map to make sure computing the cost function for the map doesn't get too expensive.

2) The velocity limits you've set are extremely limiting. It'll be hard for the robot to follow any global path when its rotational velocity is limited to 0.07 radians/second and its translational velocity is limited to 0.07 meters/second. To make those limits work, I'd expect you'll have to play around a lot with the trajectory scoring parameters. Personally, I've never run navigation with such tight constraints.
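To make point 1 concrete, here is a minimal sketch of giving the local costmap the static map, expressed as move_base parameters in a launch file (the same settings could equally go in your local_costmap_params.yaml; the values are placeholders, not tuned):

<launch>
  <!-- Sketch only: make the local costmap use the static map in the map frame,
       so the local planner sees the walls even with marking/clearing disabled. -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <param name="local_costmap/global_frame" value="/map"/>
    <param name="local_costmap/robot_base_frame" value="/base_footprint"/>
    <param name="local_costmap/static_map" value="true"/>
    <param name="local_costmap/rolling_window" value="false"/>
    <param name="local_costmap/update_frequency" value="5.0"/>
    <param name="local_costmap/publish_frequency" value="2.0"/>
  </node>
</launch>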

2011-12-09 09:06:45 -0500 received badge  Popular Question (source)
2011-11-27 01:27:15 -0500 received badge  Nice Question (source)
2011-11-26 19:08:40 -0500 marked best answer Kinect amcl

Thanks for supplying the bag file; that always helps in trying to debug this sort of issue.

It looks like the robot is consistently underreporting the actual rotation. I watched the scans accumulate in the odom frame in rviz (as described here), and would estimate that it's underreporting by 5-10 degrees per rotation.

amcl's odometry model assumes zero-mean noise. It has a hard time correcting for systematic bias, which your robot seems to exhibit. If your robot is indeed always underreporting rotation (more data might be required to verify that), then I would try to correct the odometry before sending it to amcl. You might try a variation on the TurtleBot calibration, which uses a wall as a fixed feature to compare to. Or you might just inflate the reported yaw differences by 1-3%. Either way, if there's a consistent bias, it should be possible to do a one-time correction (not a periodic re-calibration). Such a fix could probably be incorporated into the YouBot odometry calculation somewhere.

If fixing the odometry is not feasible, I would try tuning amcl's odometry parameters. In this situation, odom_alpha1 is probably the most important one to increase, with odom_alpha4 coming next. I would leave the other parameters low (at their defaults).

I would not increase confidence on the laser model; the Kinect isn't really giving you high-quality laser scans. Instead, I would take the laser model configuration from turtlebot_navigation.
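For reference, a minimal sketch of the direction I mean (the values are only illustrative, not tuned for the youBot):

<launch>
  <!-- Sketch only: raise odom_alpha1 (rotation noise caused by rotation), bump
       odom_alpha4 a little, leave the other alphas at their defaults, and do not
       over-trust the Kinect "laser" by keeping laser_sigma_hit at its default. -->
  <node pkg="amcl" type="amcl" name="amcl">
    <param name="odom_model_type" value="omni"/>
    <param name="odom_alpha1" value="0.8"/>
    <param name="odom_alpha4" value="0.4"/>
    <param name="odom_alpha2" value="0.2"/>
    <param name="odom_alpha3" value="0.2"/>
    <param name="laser_sigma_hit" value="0.2"/>
  </node>
</launch>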

2011-11-24 03:44:46 -0500 received badge  Popular Question (source)
2011-11-09 00:16:39 -0500 received badge  Supporter (source)
2011-11-08 23:46:07 -0500 answered a question Kinect amcl

Thank you so much for your advice, we had the same impression. The odometry is probably incorrect during rotation due to changes in the payload carried by the robot.

I modified the turtlebot_calibration package in order to use it with my robot; obviously it does not solve the problem, but it delays it.

I expected the amcl node could handle this systematic error during rotation, using the laser (for example in the same way turtlebot_calibration does) to correct the particle filter.

I would like to know whether this is possible with amcl. If it is not, I could insert periodic calibration during the robot's movements.

Thank you for your attention :)

2011-11-04 22:39:22 -0500 answered a question Kinect amcl

I have uploaded a file (link) to make it easier to understand what is happening. Just extract the folder into your ROS package path and run in a terminal:

rospack profile
roscd amcl_test_pkg
roslaunch amcl_test.launch
rosbag play amcl_bag.bag

If somebody could have a look at this, it would be much appreciated :)

Thank you

Lorenzo