2017-01-14 12:47:48 -0500 | received badge | ● Nice Question (source) |
2015-08-20 11:34:22 -0500 | received badge | ● Taxonomist |
2015-05-11 22:59:18 -0500 | received badge | ● Student (source) |
2014-08-17 04:01:11 -0500 | received badge | ● Famous Question (source) |
2014-07-15 23:31:55 -0500 | received badge | ● Famous Question (source) |
2014-06-17 05:34:01 -0500 | received badge | ● Notable Question (source) |
2014-06-01 11:08:35 -0500 | received badge | ● Famous Question (source) |
2014-05-23 04:27:25 -0500 | received badge | ● Notable Question (source) |
2014-05-19 19:15:24 -0500 | commented question | Navigation + 3D SLAM + Kinect As far as I know, hector_mapping needs no odometry info as input, and it publishes odom itself, while gmapping needs odom as input, which you should get from something like a wheel encoder. But the Kinect seems unable to satisfy hector_mapping because of its short range (a laser scan sensor might be OK). |
2014-05-14 08:23:53 -0500 | received badge | ● Popular Question (source) |
2014-05-13 23:34:22 -0500 | commented question | the goal is 2D navigation I tried the Kinect + rgbdslam and got a reasonably good result, although the Kinect is still more expensive than a stereo camera. I am studying http://code.google.com/p/ucsb-ros-pkg because their video seemed to be what I want. Is this a good approach worth trying? |
2014-05-13 21:02:11 -0500 | received badge | ● Supporter (source) |
2014-05-13 19:47:00 -0500 | asked a question | the goal is 2D navigation Sorry, please let me tell a small story first. I am new to ROS; my final goal is "my robot can build a map automatically and walk to any requested position". In professional terms, what I want is SLAM and navigation: a basic goal that has been achieved many times by most ROS users, isn't that true?
I think now I have 4 ways to try:
If you are familiar with option 3 (the wheel-encoder solution), could you show me some tutorials on how to program a driver that satisfies gmapping's /tf requirement? A sample would be best (I have read http://wiki.ros.org/navigation/Tutori... ). Or maybe I am heading in the wrong direction entirely; any hints are welcome. Many, many thanks. ROS is too difficult for me :( |
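For context on the question above: gmapping does not read odometry from a plain topic; it looks up the odom → base_link transform on /tf. A minimal sketch for bench testing, assuming the default frame names odom and base_link, is to publish a fixed identity transform with tf's static_transform_publisher (a real wheel-encoder driver would broadcast the same transform with actual pose data instead):

```shell
# Publish a fixed identity transform odom -> base_link every 100 ms.
# Arguments: x y z yaw pitch roll parent_frame child_frame period_ms.
# This is only a stand-in for real odometry, so the resulting map is
# meaningful only while the sensor does not move.
rosrun tf static_transform_publisher 0 0 0 0 0 0 odom base_link 100
```

This command requires a running ROS installation and roscore; the frame names are the gmapping defaults and may need to match your launch configuration.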
2014-05-12 14:45:27 -0500 | received badge | ● Notable Question (source) |
2014-05-11 18:08:34 -0500 | commented question | rosmake rgbdslam No idea, but maybe you can try catkin_make: copy rgbdslam_freiburg into the src subdirectory of your catkin workspace, then run catkin_make from the workspace root. |
2014-05-08 16:42:23 -0500 | commented answer | how gmapping get odom data I hate tuning these parameters, too, but I just don't know how to provide the data over /tf. I read the tf tutorials (one turtle chasing another) and got the idea that tf is used for converting x/y/z in one frame into another frame, but how do I use it as a topic? Please give me some help with tf. |
2014-05-08 16:36:26 -0500 | commented answer | how gmapping get odom data I tried hector_mapping, but I still get an error: [ERROR] [1399602478.182731899]: Transform failed during publishing of map_odom transform: "base_link" passed to lookupTransform argument source_frame does not exist. This is after I set the output_frame_id param of depthimage_to_laserscan to base_link. |
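The "source_frame does not exist" error above usually means base_link is not connected to the rest of the tf tree. One hedged sketch of a fix, assuming the scan is stamped with camera_depth_frame (the depthimage_to_laserscan default mentioned later in this thread), is to bridge the two frames with a static transform instead of renaming the scan frame:

```shell
# Connect base_link to the frame stamped on the /scan messages.
# camera_depth_frame is an assumption here; inspect the actual tree
# with `rosrun tf view_frames` and substitute the real frame name.
rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link camera_depth_frame 100
```

Requires a running roscore; leaving output_frame_id at its default and adding this transform keeps the tf tree consistent.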
2014-05-08 16:34:13 -0500 | received badge | ● Popular Question (source) |
2014-05-08 15:51:03 -0500 | commented answer | how gmapping get odom data @dornhege, Thank you very much. You showed me the way, like a signpost at a crossroads or a lighthouse on the sea. |
2014-05-07 22:51:24 -0500 | asked a question | how gmapping get odom data In brief, I am trying to use slam_gmapping with a Kinect, but the result is very bad: however I rotate or move the Kinect, the resulting map in rviz stays a fan-shaped area of no more than 90 degrees. I am not even sure what question to ask; I feel I don't grasp the whole thing yet :( So maybe describing what I did in detail is a good idea. The question will be long, but easy to understand this way:
I got a warning message at step 2; I hope it doesn't matter, because I can see a fairly good "Image" of the "/depth/image_raw" topic. I also got a warning at step 4. That warning cannot be ignored, since I cannot see any map in rviz after adding a "Map" display subscribed to the "/map" topic, so I added this before step 4 (camera_depth_frame seems to be the default frame id of the /scan topic published by depthimage_to_laserscan). After that, the warning message from gmapping changed, so I added this before step 4 as well. This time I got no warning message; after setting base_frame and odom_frame, I have "Map" and "LaserScan" showing in rviz. The problem is that however I rotate/move the Kinect, the resulting map always faces the same direction, and of course so do the laser scan lines. I have some guesses/questions:
Thank you for reading this long question. I hope I can understand the ... (more) |
2014-05-06 19:19:54 -0500 | received badge | ● Editor (source) |
2014-05-06 19:16:03 -0500 | answered a question | rgbdslam fuerte blackscreen Hello, Miss Meng. I was confused by the black screen too, at the beginning. After a long time learning the basic concepts of ROS (package, node, topic, etc.), I got the idea that most packages subscribe to some topics and publish others, or show them on screen. I saw your screen capture, and it seems you have not opened another Ubuntu terminal to run the packages that provide the topics rgbdslam needs; I guess that is why you got a black screen. I don't know whether you want to use a stereo camera or a Kinect as the image source, but it might give you some ideas to look at my question: I used stereo cameras, and in the end I answered it myself with all the commands I typed. |
2014-05-05 20:48:29 -0500 | commented question | Converting stereo_msgs/DisparityImage to sensor_msgs/Image Hi, have you worked it out? I have the same question. |
2014-05-05 20:48:29 -0500 | received badge | ● Commentator |
2014-05-04 21:55:26 -0500 | commented answer | cant locate the node rgbdslam I know now that using catkin_make is a better idea. |
2014-05-04 20:24:51 -0500 | commented answer | how to use rgbdslam A naive question: is "viso2_ros" a replacement for rgbdslam or for stereo_image_proc? I ran viso2_ros and got the warning message "Visual Odometer got lost!" without any window appearing. It needs stereo_image_proc but publishes points2 as well. Who is responsible for displaying the points for it? Confused. |
2014-05-04 16:58:38 -0500 | answered a question | how to use rgbdslam Many thanks to Ken and rock-ass. Since I have succeeded in viewing 3D images in rgbdslam with your kind help, I have decided to show my commands here. I am not confident I am using ROS the right way, but "comment" has a word limit. I am using a 3D camera with two "eyes" but only one USB cable. 640x480 is the max resolution supported for a single eye, but 640x480 with both eyes will not work because of the USB bandwidth limit, so 352x288 is the max for stereo here. All the packages I used are: usb_cam, camera_calibration, camera_calibration_parsers, stereo_image_proc, image_view (optional), rgbdslam. usb_cam is the camera driver, and I used camera_calibration to calibrate my stereo cameras. Open a 1st terminal and run Open a 2nd terminal and run (drive the left eye) Open a 3rd terminal and run (drive the right eye) Open a 4th terminal and run (do the calibration) Notice that, before calibration, the left and right cameras will show warning messages saying they cannot find some yaml files; that is OK at this point. Click the "calibrate" button after it becomes enabled, once you have taken 40 or more chessboard pictures. Wait patiently. After quite a while the calibration is done; check the epi value in the top-right corner (let the camera see the chessboard again). If the epi is less than 2.5 it is acceptable; values under 1.0 are better. Then click "save". "commit" is a terrible button for me, because it sometimes makes my Ubuntu reboot; just ignore the "commit" button. "save" produces /tmp/calibration.tar.gz (file name not certain).
Extract ost.txt to /tmp/ost.ini, cut the ini file manually into two halves, for example /tmp/ost_left.ini and /tmp/ost_right.ini, then convert the ini files into yaml files. Restart the 2nd and 3rd terminals' commands (ctrl+c and run again); this time there should be no warning messages. Open a 5th terminal and run (mainly to generate the point cloud and disparity); the calibration must be done first, or stereo_image_proc will complain that the camera is not calibrated. Open a 6th terminal and run (showing the left, right and disparity images at runtime). Here is the chance to see the disparity image; if the disparity is not good enough, it is a good idea ... (more) |
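The ini-to-yaml conversion step mentioned above can be sketched with the converter shipped in camera_calibration_parsers. The file names below match the ones used in the answer; the output names left.yaml/right.yaml are assumptions and should match whatever camera_info_url the usb_cam launch files point at:

```shell
# Convert each half of the split calibration file from INI to YAML,
# so the camera drivers can load them as camera_info.
rosrun camera_calibration_parsers convert /tmp/ost_left.ini /tmp/left.yaml
rosrun camera_calibration_parsers convert /tmp/ost_right.ini /tmp/right.yaml
```

Requires a ROS installation with camera_calibration_parsers built.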
2014-04-29 19:25:13 -0500 | commented question | how to use rgbdslam But unfortunately the centre image, the disparity image, has a strange vertical dividing line on it; hard to describe. And the SLAM processing often stops; I don't know if the stopping is related to that line. |
2014-04-29 19:22:29 -0500 | commented question | how to use rgbdslam Thanks, Ken. I did see the clouds in rviz. I also succeeded in seeing a similar thing in rgbdslam, after I set wide_topic and wide_cloud_topic, whose defaults were "/camera/rgb/image_rect_color" and "/camera/depth_registered/image_rect". |
2014-04-28 01:27:53 -0500 | commented question | Translating coordinate frames? Or RGBDSLAM axis orientation? Hello, Jon Stephan, I'm a beginner. I want to use stereo_image_proc (call it SIP) with rgbdslam too. I can see the disparity from image_view receiving topics from SIP; what should come next? rgbdslam seems to listen to /camera/rgb/image_rect_color & /camera/depth_registered/image_rect, but SIP doesn't have them. |
2014-04-28 00:38:32 -0500 | commented question | how to use rgbdslam Hi, Ken. After studying these days, I can finally get the disparity shown by "image_view", with "stereo_image_proc" (1) sending the topics to it. "camera_calibration" and "usb_cam" are also used. But you know my target is "rgbdslam" (2); would you hint me how to link 1 and 2? Will "rviz" get involved? |
2014-04-24 04:14:13 -0500 | received badge | ● Popular Question (source) |
2014-04-23 19:16:17 -0500 | commented question | how to get catkin package recognized by rosrun Thanks, Ken. It works, you are great. I knew that cp-ing the files and running the node directly without rosrun could hardly be a good answer, but I just could not find the correct one after reading the tutorials and ppt courses by myself. |
2014-04-23 17:10:18 -0500 | asked a question | how to get catkin package recognized by rosrun I created a catkin workspace folder ~/catkin-ws and downloaded the usb_cam ( http://wiki.ros.org/usb_cam ) source into ~/catkin-ws/src. I ran catkin_make, and ~/catkin-ws/devel/lib/usb_cam/usb_cam_node was created. But when I type rosrun usb[TAB key], nothing happens. I think this is because usb_cam is not registered with the ROS system yet, since there is no usb_cam in /opt/ros/hydro/lib (I am using hydro). So I tried this: sudo cp -r ~/catkin-ws/devel/lib/* /opt/ros/hydro/lib/. It still did not work. How do I get a catkin-compiled package recognized by rosrun? By the way, running "~/catkin-ws/devel/lib/usb_cam/usb_cam_node" directly does work; is that the correct way? |
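For reference on the question above: rosrun locates packages through the environment (ROS_PACKAGE_PATH), not through /opt/ros, so copying binaries there is not needed. The usual fix is to source the workspace's generated setup file after building; a minimal sketch, using the ~/catkin-ws path from the question:

```shell
# Build the workspace, then overlay it onto the current shell's
# ROS environment. The setup script extends ROS_PACKAGE_PATH so
# rosrun (and tab completion) can find packages built here.
cd ~/catkin-ws
catkin_make
source devel/setup.bash
rosrun usb_cam usb_cam_node
```

The source line must be run in every new terminal (or added to ~/.bashrc); this requires a ROS installation such as hydro to already be sourced.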