
how to use rgbdslam

asked 2014-04-17 20:38:39 -0500 by vdonkey

updated 2016-10-24 08:36:53 -0500 by ngrennan

Hello,

I am new to SLAM. I downloaded rgbdslam and got it running with "roslaunch rgbdslam rgbdslam.launch". A window appeared, saying "Waiting for monochrome image..." and "Waiting for depth image...", and it told me "Press Enter or Space to Start". I pressed space and enter, but nothing happened.

I dragged a picture from the file explorer onto "Waiting for monochrome image", but nothing happened.

I have no Kinect device; what I have is a stereo camera with two "eyes". It seems I must provide rgbdslam with a point cloud and something else. How can I obtain these? In what format, and by what means, can I publish the point cloud data (and so forth) to rgbdslam once I have it?

I still have no idea even after studying wiki.ros.org/rgbdslam several times and reading the ROS tutorials. Are the tutorials too rough, or am I just missing something?

Help me out, please


Comments


Look in the rgbdslam/launch folder; there will be a few launch files, and one of them contains all the possible properties, which you can use as a template for startup. Just fill in the topics with the point clouds and image streams from your camera and you should be good to go.

rock-ass  ( 2014-04-17 20:43:34 -0500 )

Thanks for your reply. Sorry, I haven't caught the point yet. Do you mean (param name="config/topic_points" ...) by "possible properties"? The value is something like (value="/camera/depth_registered/points"), which sounds like a folder. I didn't find it; should I create it? Where? What do I put inside?

vdonkey  ( 2014-04-17 21:09:46 -0500 )

Well, you are looking in the right file. But what sounds like a "folder" is actually a ROS topic. You just need a better background in ROS core features like topics, which are used to exchange data between processes (which are called nodes). You can read about them at http://wiki.ros.org/Topics
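
For example, once a camera driver node is running, the available topics can be inspected from a shell (the /camera/... name below is just illustrative):

rostopic list                                   # list all topics currently advertised
rostopic echo /camera/depth_registered/points   # print the messages arriving on one topic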

rock-ass  ( 2014-04-17 21:44:53 -0500 )

Hi, @vdonkey: I think it's difficult to run rgbdslam given that you don't have a Kinect. However, you might be able to run it if you refer to this page ( http://wiki.ros.org/stereo_image_proc ). By the way, your ROS distribution is Fuerte, isn't it?

Ken_in_JAPAN  ( 2014-04-18 18:42:01 -0500 )

I'm sorry, most launch files on this page ( http://alufr-ros-pkg.googlecode.com/svn/trunk/rgbdslam_freiburg/rgbdslam/launch/ ) support the Kinect.

Ken_in_JAPAN  ( 2014-04-18 19:16:55 -0500 )

Now I believe there is still a long way to go before I can use rgbdslam, so I stepped back to learn the ROS basics. I learned catkin and finally got stereo_image_proc compiled. I was using Fuerte but changed to Hydro to use catkin. Thank you, Ken!

vdonkey  ( 2014-04-21 01:59:42 -0500 )

Hi, @vdonkey: Don't mention it! I also didn't understand the framework of ROS when I was a beginner. I started to understand it once I ran a Gazebo simulator and moved a TurtleBot 2 by myself. I think I'm still a beginner at ROS. Let's try everything! See you!

Ken_in_JAPAN  ( 2014-04-21 08:07:25 -0500 )

Hi, Ken. After studying these past days, I can finally get the disparity shown by "image_view", with "stereo_image_proc" (1) sending the topics to it. "camera_calibration" and "usb_cam" are also used. But you know my target is "rgbdslam" (2); could you hint at how to link 1 and 2? Will "rviz" get involved?

vdonkey  ( 2014-04-28 00:38:32 -0500 )

2 Answers


answered 2014-05-04 16:58:38 -0500 by vdonkey

Many thanks to Ken and rock-ass.

Since I have succeeded in viewing 3D images in rgbdslam with your kind help, I've decided to post my commands here. I am not confident I am using ROS the right way, but comments have a word limit.

I am using a 3D camera with two "eyes" but only one USB cable. 640x480 is the maximum resolution supported for a single eye, but 640x480 with both eyes will not work because of the USB bandwidth limit, so 352x288 is the maximum for stereo here.

All the packages I used are: usb_cam, camera_calibration, camera_calibration_parsers, stereo_image_proc, image_view (optional), and rgbdslam. usb_cam is the camera driver, and I used camera_calibration to calibrate my stereo cameras.

Open 1st terminal, run

roscore

Open 2nd terminal, run (drive the left eye)

rosparam set /stereo/left/video_device /dev/video2
rosparam set /stereo/left/image_width 352
rosparam set /stereo/left/image_height 288
rosparam set /stereo/left/pixel_format yuyv
rosparam set /stereo/left/camera_name left
rosrun usb_cam usb_cam_node /usb_cam:=/stereo/left/

Open 3rd terminal, run (drive the right eye)

rosparam set /stereo/right/video_device /dev/video1
rosparam set /stereo/right/image_width 352
rosparam set /stereo/right/image_height 288
rosparam set /stereo/right/pixel_format yuyv
rosparam set /stereo/right/camera_name right
rosrun usb_cam usb_cam_node /usb_cam:=/stereo/right/
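
The two driver terminals can also be collapsed into one launch file. A minimal sketch, assuming usb_cam's private parameter names used above (the node's name under ns="/stereo" plays the role of the /stereo/left and /stereo/right prefixes in the rosparam commands):

<launch>
  <node pkg="usb_cam" type="usb_cam_node" name="left" ns="/stereo">
    <param name="video_device" value="/dev/video2"/>
    <param name="image_width" value="352"/>
    <param name="image_height" value="288"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_name" value="left"/>
  </node>
  <node pkg="usb_cam" type="usb_cam_node" name="right" ns="/stereo">
    <param name="video_device" value="/dev/video1"/>
    <param name="image_width" value="352"/>
    <param name="image_height" value="288"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_name" value="right"/>
  </node>
</launch>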

Open 4th terminal, run (do calibration)

rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.049 left:=/stereo/left/image_raw right:=/stereo/right/image_raw left_camera:=/stereo/left right_camera:=/stereo/right --approximate=0.01 --no-service-check

Notice that, before calibration, the left and right camera nodes will show warning messages saying they cannot find some yaml files. That is OK at this point.

click "calibrate" button after it is enabled after taking 40 or more chessboard pictures. wait patiently. after quite a while, the calibration is done, check the epi value in top-right corner (let the camera see the chessboard again). if the epi is less than 2.5 it is acceptable, better value under 1.0. Then click "save". "commit" is a terrible button for me, because it will sometimes make my ubuntu reboot. just ignore the "commit" button.

"save" gives a /tmp/calibration.tar.gz(file name not sure). extract ost.txt out to /tmp/ost.ini, cut the ini file into two files manually half-half, for example /tmp/ost_left.ini and /tmp/ost_right.ini, then convert the ini files into yamls:

rosrun camera_calibration_parsers convert /tmp/ost_left.ini ~/.ros/camera_info/left.yaml
rosrun camera_calibration_parsers convert /tmp/ost_right.ini ~/.ros/camera_info/right.yaml
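
If you'd rather not split the file by hand, something like this may work (a sketch, assuming each camera's block in the extracted file starts with a "# oST version" comment line; check the two output files before converting):

csplit -z --prefix=/tmp/ost_ --suffix-format=%d.ini /tmp/ost.ini '/^# oST version/' '{*}'
# -z drops the empty leading piece; expect /tmp/ost_0.ini (left) and /tmp/ost_1.ini (right)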

Restart the 2nd and 3rd terminals' commands (Ctrl+C and run again); this time there should be no warning messages.

Open 5th terminal, run (mainly generate point cloud and disparity)

rosparam set /stereo/stereo_image_proc/approximate_sync true
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc

The calibration must be done first, or stereo_image_proc will complain that the camera is not calibrated.
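
To confirm stereo_image_proc is actually producing data, the topics can be checked from another shell (names follow its defaults under the /stereo namespace):

rostopic list | grep /stereo   # expect /stereo/points2, /stereo/disparity, rectified images, ...
rostopic hz /stereo/points2    # should report a steady publishing rate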

Open 6th terminal, run (showing left, right and disparity runtime images)

rosrun image_view stereo_view stereo:=stereo image:=image_rect _approximate_sync:=true

Here is the chance to see the disparity image. If the disparity is not good enough, it is a good idea ...
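
If the disparity image is poor, one common option (a suggestion; parameter names can vary between versions) is to tune stereo_image_proc's block-matching parameters live with dynamic reconfigure:

rosrun rqt_reconfigure rqt_reconfigure
# pick the stereo_image_proc node in the GUI and adjust parameters such as
# correlation_window_size and texture_threshold until the disparity cleans up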


Comments

The 2nd statement also applied to me when I was testing rgbdslam, and then I stopped. I think rgbdslam is optimised for the Kinect sensor only, which gives a very detailed depth image. Stereo alone is not so detailed, and I think that's why it fails to reconstruct the environment.

rock-ass  ( 2014-05-04 22:31:08 -0500 )

answered 2014-04-18 23:23:16 -0500 by rock-ass

updated 2014-05-03 05:27:57 -0500

It is possible to use stereo only with rgbdslam. I have tried it, and it worked with the Gazebo stereo camera plugin, although the results were bad; the algorithm was hardly able to find matches between images.

EDIT: As Ken_in_JAPAN asked in the comments, I'll try to tell you how I did it. I assume that you have already compiled and launched the rgbdslam package. I used the gazebo_ros_multicamera plugin to generate synchronised image streams, then used the stereo_image_proc package to generate point clouds and redirected them to the rgbdslam package.

I dug into my workspace and found the launch file I used; here is a part of it:

<param name="config/topic_image_mono"              value=""/> <!--could also be color -->
<param name="config/topic_image_depth"             value=""/>
<param name="config/topic_points"                  value=""/> <!--if empty, poincloud will be reconstructed from image and depth -->
<param name="config/camera_info_topic"             value="/front_cam/left/camera_info"/>
<param name="config/wide_topic"                    value="/front_cam/left/image_rect_color"/>
<param name="config/wide_cloud_topic"              value="/front_cam/points2"/>

I left the first three params empty and used only the left image for colors, plus the cloud topic from stereo_image_proc. I think that was all that was required to at least start it up.
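
For context, a minimal launch file wrapping these params might look like the following sketch (the pkg/type names follow the rgbdslam_freiburg package, and the /front_cam topics come from my Gazebo setup):

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <param name="config/topic_image_mono" value=""/>
    <param name="config/topic_image_depth" value=""/>
    <param name="config/topic_points" value=""/>
    <param name="config/camera_info_topic" value="/front_cam/left/camera_info"/>
    <param name="config/wide_topic" value="/front_cam/left/image_rect_color"/>
    <param name="config/wide_cloud_topic" value="/front_cam/points2"/>
  </node>
</launch>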

After fiddling with the algorithm parameters, I did not achieve any good results with rgbdslam, and later used the viso2_ros stereo odometry node, which was able to match point clouds a lot better. At least that was the case for me.

I hope that helps at least a bit; you can ask me if something is unclear.


Comments

Could you describe the procedure for rgbdslam with a stereo camera, for vdonkey and everyone? Thank you in advance!

Ken_in_JAPAN  ( 2014-05-03 04:23:43 -0500 )

Thanks @rock-ass! It sounds great!

Ken_in_JAPAN  ( 2014-05-03 06:56:18 -0500 )

A weak question: is "viso2_ros" a replacement for rgbdslam or for stereo_image_proc? I ran viso2_ros and got the warning message "Visual Odometer got lost!" without any window showing. It needs stereo_image_proc but publishes points2 as well. Who is responsible for showing the points for it? Confused.

vdonkey  ( 2014-05-04 20:24:51 -0500 )

viso2_ros is not a replacement for rgbdslam; sorry I led you to this confusion. Rgbdslam is a fully integrated SLAM solution, while viso2_ros only estimates odometry: it integrates camera motion, applies that motion to the robot frame, and calculates how the robot has moved purely from stereo images.
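
For reference, the stereo odometer is started with remappings along these lines (a sketch following the viso2_ros wiki conventions; /stereo matches the namespace used in the other answer):

rosrun viso2_ros stereo_odometer stereo:=/stereo image:=image_rect
# publishes the estimated odometry on /stereo_odometer/odometry (among other topics);
# there is no GUI window of its own - visualize the motion in rviz if needed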

rock-ass  ( 2014-05-04 22:26:39 -0500 )
