how to use rgbdslam
Hello,
I am new to SLAM. I downloaded rgbdslam and got it running with "roslaunch rgbdslam rgbdslam.launch". A window appeared, saying "Waiting for monochrome image..." and "Waiting for depth image...", and it told me to "Press Enter or Space to Start". I pressed space and enter; nothing happened.
I dragged a picture from the file explorer onto "Waiting for monochrome image"; nothing happened either.
I have no Kinect device; what I have is a stereo camera with two "eyes". It seems I must provide rgbdslam with a point cloud and something else. How can I get them? In what format, and by what means, can I push the point cloud data (and so forth) to rgbdslam once I have it?
I still have no idea, even after studying wiki.ros.org/rgbdslam several times and reading the ROS tutorials. Are the tutorials too rough, or am I just stupid?
Please help me out.
Asked by vdonkey on 2014-04-17 20:38:39 UTC
Answers
It is possible to use rgbdslam with stereo only. I have tried it and it worked with the gazebo stereo camera plugin, although the results were bad; the algorithm was hardly able to find matches between images.
EDIT:
As Ken_in_JAPAN asked in the comments, I'll try to describe how I did it. I assume you have already compiled and launched the rgbdslam package. I used the gazebo_ros_multicamera plugin to generate synchronized image streams, then the stereo_image_proc package to generate point clouds, and redirected them to the rgbdslam package.
I dug into my workspace and found the launch file I used; here is a part of it:
<param name="config/topic_image_mono" value=""/> <!--could also be color -->
<param name="config/topic_image_depth" value=""/>
<param name="config/topic_points" value=""/> <!--if empty, poincloud will be reconstructed from image and depth -->
<param name="config/camera_info_topic" value="/front_cam/left/camera_info"/>
<param name="config/wide_topic" value="/front_cam/left/image_rect_color"/>
<param name="config/wide_cloud_topic" value="/front_cam/points2"/>
I left the first three params empty and used only the left image for colors, plus the cloud topic from stereo_image_proc. I think that was all that was required to at least start it up.
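To show where those params go, here is a minimal launch file sketch around them (the rgbdslam package/node names are the standard ones; the /front_cam topic names are from my gazebo setup, so substitute your own):

```xml
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <!-- Left empty: the cloud arrives ready-made on the wide topics below -->
    <param name="config/topic_image_mono"  value=""/>
    <param name="config/topic_image_depth" value=""/>
    <param name="config/topic_points"      value=""/>
    <!-- Topics published by stereo_image_proc in my gazebo camera namespace -->
    <param name="config/camera_info_topic" value="/front_cam/left/camera_info"/>
    <param name="config/wide_topic"        value="/front_cam/left/image_rect_color"/>
    <param name="config/wide_cloud_topic"  value="/front_cam/points2"/>
  </node>
</launch>
```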
After fiddling with the algorithm parameters, I did not achieve any good results with rgbdslam, and later used the viso2_ros stereo odometry node, which was able to match point clouds a lot better. At least that was the case for me.
I hope that helps at least a bit, you can ask me if something is unclear.
Asked by rock-ass on 2014-04-18 23:23:16 UTC
Comments
Could you describe the procedure for rgbdslam with a stereo camera, for vdonkey and everyone? Thank you in advance!
Asked by Ken_in_JAPAN on 2014-05-03 04:23:43 UTC
Thanks @rock-ass! It sounds great!
Asked by Ken_in_JAPAN on 2014-05-03 06:56:18 UTC
A naive question: is "viso2_ros" a replacement for rgbdslam or for stereo_image_proc? I ran viso2_ros and got the warning "Visual Odometer got lost!" without any window showing. It needs stereo_image_proc but publishes points2 as well. Who is responsible for showing the points for it? Confused.
Asked by vdonkey on 2014-05-04 20:24:51 UTC
viso2_ros is not a replacement for rgbdslam; sorry I led you into this confusion. rgbdslam is a fully integrated SLAM solution, while viso2_ros only estimates odometry: it integrates camera motion, applies that motion to the robot frame, and calculates how the robot has moved purely from stereo images.
Asked by rock-ass on 2014-05-04 22:26:39 UTC
Many thanks to Ken and rock-ass.
Since I have succeeded in viewing 3D images in rgbdslam with your kind help, I'll show my commands here. I am not confident I am using ROS the right way, but comments have a word limit.
I am using a 3D camera with two "eyes" but only one USB cable. 640x480 is the max resolution supported for a single eye, but 640x480 with both eyes will not work because of the USB bandwidth limit, so 352x288 is the max for stereo here.
All the packages I used are: usb_cam, camera_calibration, camera_calibration_parsers, stereo_image_proc, image_view (optional), and rgbdslam. usb_cam is the camera driver, and I used camera_calibration to calibrate my stereo cameras.
Open 1st terminal, run
roscore
Open 2nd terminal, run (drive the left eye)
rosparam set /stereo/left/video_device /dev/video2
rosparam set /stereo/left/image_width 352
rosparam set /stereo/left/image_height 288
rosparam set /stereo/left/pixel_format yuyv
rosparam set /stereo/left/camera_name left
rosrun usb_cam usb_cam_node /usb_cam:=/stereo/left/
Open 3rd terminal, run (drive the right eye)
rosparam set /stereo/right/video_device /dev/video1
rosparam set /stereo/right/image_width 352
rosparam set /stereo/right/image_height 288
rosparam set /stereo/right/pixel_format yuyv
rosparam set /stereo/right/camera_name right
rosrun usb_cam usb_cam_node /usb_cam:=/stereo/right/
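As a side note, the rosparam/rosrun pairs for the two eyes could also be collected into one launch file. A sketch, assuming the same device paths and settings as above (naming the nodes left/right under the /stereo namespace reproduces the /stereo/left and /stereo/right prefixes):

```xml
<launch>
  <!-- Left eye; device paths are from my machine, check yours with ls /dev/video* -->
  <node pkg="usb_cam" type="usb_cam_node" name="left" ns="/stereo">
    <param name="video_device" value="/dev/video2"/>
    <param name="image_width"  value="352"/>
    <param name="image_height" value="288"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_name"  value="left"/>
  </node>
  <!-- Right eye -->
  <node pkg="usb_cam" type="usb_cam_node" name="right" ns="/stereo">
    <param name="video_device" value="/dev/video1"/>
    <param name="image_width"  value="352"/>
    <param name="image_height" value="288"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_name"  value="right"/>
  </node>
</launch>
```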
Open 4th terminal, run (do calibration)
rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.049 left:=/stereo/left/image_raw right:=/stereo/right/image_raw left_camera:=/stereo/left right_camera:=/stereo/right --approximate=0.01 --no-service-check
Notice that before calibration, the left and right cameras will print warnings saying some yaml files cannot be found; that is OK at this point.
Click the "calibrate" button once it is enabled, after taking 40 or more chessboard pictures. Wait patiently; after quite a while the calibration is done. Check the epi value in the top-right corner (let the camera see the chessboard again). An epi below 2.5 is acceptable; below 1.0 is better. Then click "save". "commit" is a terrible button for me, because it sometimes makes my Ubuntu reboot, so I just ignore it.
"save" produces a /tmp/calibration.tar.gz (file name not sure). Extract ost.txt to /tmp/ost.ini, cut the ini file manually into two halves, for example /tmp/ost_left.ini and /tmp/ost_right.ini, then convert the ini files into yamls:
rosrun camera_calibration_parsers convert /tmp/ost_left.ini ~/.ros/camera_info/left.yaml
rosrun camera_calibration_parsers convert /tmp/ost_right.ini ~/.ros/camera_info/right.yaml
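The manual half-and-half cut can also be scripted. A sketch with awk, assuming each camera's section in the calibrator's output begins with the same "# oST version" header comment (that was true for my output, but check your file first; the heredoc below is just a stand-in for the real file):

```shell
# Stand-in for the real /tmp/ost.ini extracted from the calibration archive;
# the real file is longer, but each camera section starts with the same header.
cat > /tmp/ost.ini <<'EOF'
# oST version 5.0 parameters
[narrow_stereo/left]
left camera data
# oST version 5.0 parameters
[narrow_stereo/right]
right camera data
EOF

# Copy section 1 to ost_left.ini and section 2 to ost_right.ini
awk '/^# oST version/ { n++ }
     { print > ("/tmp/ost_" (n == 1 ? "left" : "right") ".ini") }' /tmp/ost.ini
```

After the split, the two convert commands above turn each half into a yaml.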
Restart the commands in the 2nd and 3rd terminals (Ctrl+C and run again); this time there should be no warning messages.
Open 5th terminal, run (mainly to generate the point cloud and disparity)
rosparam set /stereo/stereo_image_proc/approximate_sync true
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc
Calibration must be done first, or stereo_image_proc will complain that the camera is not calibrated.
Open 6th terminal, run (to show the left, right and disparity images at runtime)
rosrun image_view stereo_view stereo:=stereo image:=image_rect _approximate_sync:=true
Here is the chance to see the disparity image. If the disparity is not good enough, it is a good idea to redo the calibration with a larger chessboard (I got a lower epi value after changing from an A4 chessboard to A3).
It seems the image_view process must be killed before running rgbdslam.
Open 7th terminal, run
rosparam set /rgbdslam/config/camera_info_topic /stereo/left/camera_info
rosparam set /rgbdslam/config/wide_topic /stereo/left/image_rect_color
rosparam set /rgbdslam/config/wide_cloud_topic /stereo/points2
rosrun rgbdslam rgbdslam
/stereo/left/image_rect_color seems no different from /stereo/left/image_mono here. Press the space key while the rgbdslam window is active. If you see nothing, check whether image_view is still running.
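For reference, the same three settings can live in a small launch file instead of being set by hand each time (a sketch; topic names as in the pipeline above):

```xml
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <param name="config/camera_info_topic" value="/stereo/left/camera_info"/>
    <param name="config/wide_topic"        value="/stereo/left/image_rect_color"/>
    <param name="config/wide_cloud_topic"  value="/stereo/points2"/>
  </node>
</launch>
```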
That is all I have done, but I am still confused by two things:
1. The bottom-center image in the rgbdslam window (where "waiting for depth image" showed before the topics were set) shows me a quite strange disparity image, with a vertical split line on it.
2. The 3D reconstruction process often stops, even if I rotate the camera very slowly. That makes rgbdslam practically unusable for me.
Asked by vdonkey on 2014-05-04 16:58:38 UTC
Comments
The 2nd point also applied to me when I was testing rgbdslam, and then I stopped. I think rgbdslam is optimized for the Kinect sensor only, which gives a very detailed depth image. Stereo alone is not as detailed, and I think that's why it fails to reconstruct the environment.
Asked by rock-ass on 2014-05-04 22:31:08 UTC
Comments
Look at the rgbdslam/launch folder; there will be a few launch files, one of which contains all the possible properties. You can use it as a template for startup: just fill in the topics with the point clouds and image streams from your camera and you should be good to go.
Asked by rock-ass on 2014-04-17 20:43:34 UTC
Thanks for your reply. Sorry, I haven't caught the point yet. Do you mean (param name="config/topic_points" ...) by "possible properties"?
The value is something like (value="/camera/depth_registered/points"), which sounds like a folder. I didn't find it. Should I create it? Where? What do I put inside?
Asked by vdonkey on 2014-04-17 21:09:46 UTC
Well, you are looking in the right file. But what sounds like a "folder" is a ROS topic. You just need a better background in ROS core features like topics, which are used to exchange data between processes (which are called nodes). You can read about them at http://wiki.ros.org/Topics
Asked by rock-ass on 2014-04-17 21:44:53 UTC
Hi, @vdonkey: I think it's difficult to run rgbdslam given that you don't have a Kinect. However, you might be able to run it if you refer to this page ( http://wiki.ros.org/stereo_image_proc ). By the way, your ROS distribution is fuerte, isn't it?
Asked by Ken_in_JAPAN on 2014-04-18 18:42:01 UTC
I'm sorry, most of the launch files on this page ( http://alufr-ros-pkg.googlecode.com/svn/trunk/rgbdslam_freiburg/rgbdslam/launch/ ) assume a Kinect.
Asked by Ken_in_JAPAN on 2014-04-18 19:16:55 UTC
Now I believe there is still a long way to go before I can use rgbdslam, so I stopped to learn the ROS basics. I learned catkin and finally got stereo_image_proc compiled. I was using fuerte but changed to hydro to use catkin. Arigato to Ken.
Asked by vdonkey on 2014-04-21 01:59:42 UTC
Hi, @vdonkey: Don't mention it! I also didn't understand the framework of ROS when I was a beginner. I started to understand it once I ran a gazebo simulator and moved a turtlebot2 myself. I think I'm still a beginner at ROS. Let's try everything! See you!
Asked by Ken_in_JAPAN on 2014-04-21 08:07:25 UTC
Hi, Ken. After studying these days, I can finally get the disparity shown by "image_view", with "stereo_image_proc" (1) sending the topics to it. "camera_calibration" and "usb_cam" are also used. But you know my target is "rgbdslam" (2). Would you give me a hint on how to link 1 and 2? Will "rviz" get involved?
Asked by vdonkey on 2014-04-28 00:38:32 UTC
If you can get usb_cam working and publishing a topic about usb_cam or stereo, you can run
rosrun rviz rviz
and you might see something in rviz. To do that, push the "Add" button in the left panel of rviz and choose an image. But as I don't know exactly what you are trying to do, my answer might be wrong.
Asked by Ken_in_JAPAN on 2014-04-28 19:19:52 UTC
Thanks, Ken. I did see the clouds in rviz. I also succeeded in seeing a similar thing in rgbdslam, after I set wide_topic and wide_cloud_topic, which by default were "/camera/rgb/image_rect_color" and "/camera/depth_registered/image_rect".
Asked by vdonkey on 2014-04-29 19:22:29 UTC
But unfortunately, the centre image, the disparity image, has a strange vertical dividing line on it; it's hard to describe. And the SLAM processing often stops. I don't know if the stopping is related to the dividing line.
Asked by vdonkey on 2014-04-29 19:25:13 UTC
@vdonkey: I am happy to hear that you have made progress. If you want to ask anybody about your problem, I recommend posting a new question on a new page. If it's hard to explain, you can post an image there.
Asked by Ken_in_JAPAN on 2014-04-30 03:56:30 UTC