How to start Kinect Laser data in Turtlebot without gmapping

asked 2013-04-13 01:24:49 -0500 by Anis

updated 2016-10-24 09:10:00 -0500 by ngrennan

Hello

We would like to write a program for the TurtleBot that avoids obstacles using the Kinect's laser scan data. We launched the minimal.launch file, but the /scan topic is not published. When we launch the gmapping demo, the /scan topic is published and we can get laser data.

The question is whether it is possible to get laser data from the Kinect on the TurtleBot without having to run the gmapping demo.

Any help is appreciated

Thanks

Anis


4 Answers

answered 2013-04-14 01:20:54 -0500 by Anis (2 votes)

Thanks Chad and Devasena.

We first tried Devasena's solution by executing the launch file from that answer, but the /scan topic was still not published. However, when we ran the command

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw

in another terminal, it worked.

We also tried Chad's solution. Running the command

roslaunch turtlebot_bringup 3dsensor.launch

does not publish the /scan topic. Even when we additionally executed

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw

in another terminal, it did not work either.

We then replaced 3dsensor.launch with

roslaunch openni_launch openni.launch

and it worked together with the rosrun depthimage_to_laserscan command.
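For convenience, both working steps can be combined into a single launch file. Here is a minimal sketch built only from the two commands above (the node name kinect_to_laser is our own placeholder):

<launch>
  <!-- start the Kinect driver -->
  <include file="$(find openni_launch)/launch/openni.launch"/>

  <!-- convert the depth image into a laser scan published on /scan -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="kinect_to_laser">
    <remap from="image" to="/camera/depth/image_raw"/>
  </node>
</launch>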

Thanks

Anis


Comments

Both solutions work. However, for more accurate data, it is better to use pointcloud_to_laserscan.

Hemu (2013-04-14 16:15:01 -0500)

Hi Hemu, what issues are you seeing that cause inaccurate data with depthimage_to_laserscan? It uses MUCH less CPU, and using it is strongly recommended. Please file an issue with a way to reproduce the inaccurate data: https://github.com/ros-perception/depthimage_to_laserscan

Chad Rockey (2013-04-15 09:26:50 -0500)

Hi Chad, it's true that depth image processing uses much less CPU. The inaccuracy is specific to the application. For avoiding obstacles, I used √(x² + z²), since the robot mostly moves in the x-z plane (taking the Kinect as the reference frame) and I wanted to find the distance between the robot's base and

Hemu (2013-04-15 15:29:44 -0500)

an object. depthimage_to_laserscan provides √(x² + y² + z²), which is the distance between the object and the Kinect, not the distance between the robot's base and the object. Apart from this, there is no inaccuracy.
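To illustrate with made-up coordinates: for a point at (x, y, z) = (1.0, 0.5, 2.0) m in the Kinect frame, √(x² + z²) = √5 ≈ 2.24 m, while √(x² + y² + z²) = √5.25 ≈ 2.29 m.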

Hemu (2013-04-15 15:34:22 -0500)
answered 2013-04-13 08:00:26 -0500 by Chad Rockey (5 votes)

http://www.ros.org/wiki/depthimage_to_laserscan#depthimage_to_laserscan-1

roslaunch turtlebot_bringup 3dsensor.launch

rostopic echo /scan

See the launch file here: https://github.com/turtlebot/turtlebot/blob/master/turtlebot_bringup/launch/3dsensor.launch#L79


Or if you have the Kinect up but not the laser, you can try:

rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw

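As a sketch, the same conversion can also be started from a launch file, with a few of its parameters made explicit (parameter names are from the depthimage_to_laserscan wiki page linked above; the values below are illustrative assumptions, close to the documented defaults):

<launch>
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan">
    <remap from="image" to="/camera/depth/image_raw"/>
    <param name="scan_height" value="1"/>   <!-- number of pixel rows used for the scan -->
    <param name="range_min" value="0.45"/>  <!-- metres -->
    <param name="range_max" value="10.0"/>  <!-- metres -->
  </node>
</launch>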

edit flag offensive delete link more
answered 2013-04-13 01:46:17 -0500 by Devasena Inupakutika (4 votes)

updated 2013-04-14 15:36:40 -0500

Hi,

For this you need to use the pointcloud_to_laserscan package. It comes with the TurtleBot stack. Create a launch file, say kinect_laser.launch, with the nodes below:

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_launch)/launch/openni.launch"/>

  <!-- openni_manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/camera_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>

This will publish the /scan topic without the need for gmapping. You can then include this launch file in your slam.launch, as sketched below.
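A minimal sketch of such a slam.launch, assuming kinect_laser.launch was saved in a package named my_robot_launch (a placeholder):

<launch>
  <!-- fake laser from the Kinect -->
  <include file="$(find my_robot_launch)/launch/kinect_laser.launch"/>
  <!-- gmapping or your own obstacle-avoidance nodes would be started here -->
</launch>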

Hope this helps !!

P.S. Please double-check the name of the OpenNI launch file; in my installation it is at the location used above.

answered 2015-01-20 20:18:09 -0500 by tanghz (0 votes)

updated 2015-01-20 20:18:34 -0500

Hi, I found an easy way to get the Kinect laser data.

First, create a launch file, for example named kinect_laser.launch.

Then, open it and copy in the code below.

<launch>
  <include file="$(find turtlebot_bringup)/launch/3dsensor.launch">
    <arg name="rgb_processing" value="false" />
    <arg name="depth_registration" value="false" />
    <arg name="depth_processing" value="false" />
    <arg name="scan_topic" value="/scan" />
  </include>
</launch>

Thus, we can get the laser data from the /scan topic. A quick verification is shown below.
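To verify, assuming the file was saved in a package named my_robot_launch (a placeholder), run the launch file and echo the topic:

roslaunch my_robot_launch kinect_laser.launch

rostopic echo /scan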



Stats

Asked: 2013-04-13 01:24:49 -0500

Seen: 5,400 times

Last updated: Jan 20 '15