
vdonkey's profile - activity

2017-01-14 12:47:48 -0500 received badge  Nice Question (source)
2015-08-20 11:34:22 -0500 received badge  Taxonomist
2015-05-11 22:59:18 -0500 received badge  Student (source)
2014-08-17 04:01:11 -0500 received badge  Famous Question (source)
2014-07-15 23:31:55 -0500 received badge  Famous Question (source)
2014-06-17 05:34:01 -0500 received badge  Notable Question (source)
2014-06-01 11:08:35 -0500 received badge  Famous Question (source)
2014-05-23 04:27:25 -0500 received badge  Notable Question (source)
2014-05-19 19:15:24 -0500 commented question Navigation + 3D SLAM + Kinect

As far as I know, hector_mapping needs no odometry info as input, and it publishes odom itself, while gmapping needs odom as input, which you would get from something like a wheel encoder. But the Kinect does not seem able to satisfy hector_mapping because of its short range (a laser scan sensor should be fine).

2014-05-14 08:23:53 -0500 received badge  Popular Question (source)
2014-05-13 23:34:22 -0500 commented question the goal is 2D navigation

I tried a Kinect + rgbdslam and got a reasonably good result, although a Kinect is still more expensive than a stereo camera. I am studying http://code.google.com/p/ucsb-ros-pkg because their video seemed to be what I want. Is this a good approach worth trying?

2014-05-13 21:02:11 -0500 received badge  Supporter (source)
2014-05-13 19:47:00 -0500 asked a question the goal is 2D navigation

Sorry, please let me tell a short story.

I am new to ROS. My final goal is "my robot can build a map automatically and walk to any requested position". In professional terms, what I want is SLAM and navigation, a basic goal that has been achieved many times by most ROS users, isn't that true?

  • I tried stereo cameras + rgbdslam at first, but rgbdslam often stops and cannot build a 3D map smoothly.
  • I realized that maybe 2D SLAM is enough for me, so I tried Kinect + gmapping. But I don't have a wheel encoder or any other odometry device, and the generated map stays limited to a small fixed angle.
  • Someone told me to use hector_mapping in place of gmapping, which needs no odometry input. I did that, but the resulting map jumps while the Kinect is turning. I think the reason might be that the laser scan data simulated from the Kinect is short-ranged and noisy.

I think now I have 4 ways to try:

  1. Use a real laser scanner with hector_mapping, but a laser scanner seems to be very expensive ($500?).
  2. Can I use an ultrasonic sensor instead of a real laser scanner? Someone said yes here (answers.ros.org), but I didn't understand it.
  3. Use a wheel encoder with gmapping. But how do I drive the encoder and publish the odom frame to /tf? Is there a proper driver I can use (I searched but did not find one), or must I write my own?
  4. Is there anybody who can help me achieve my goal online via some tool like Skype? I am losing confidence and patience on my own. I will pay for that.

If you are familiar with option 3 (the wheel encoder solution), could you show me some tutorials about how to program a driver that satisfies gmapping's /tf requirement? A sample is best (I have read http://wiki.ros.org/navigation/Tutori... ).
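
From reading that tutorial, my rough understanding is that such a driver is just a node that integrates the encoder readings into a pose and broadcasts the odom -> base_link transform (plus a nav_msgs/Odometry message). Below is an untested rospy sketch of that idea; read_encoder_velocity() is only a placeholder for whatever a real encoder would report, and the frame names are just the conventional defaults gmapping expects:

    #!/usr/bin/env python
    # Sketch of a wheel-odometry publisher: integrate encoder velocities into a
    # pose and broadcast odom -> base_link on /tf. Untested; adapt to real hardware.
    import math
    import rospy
    import tf
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import Quaternion

    def read_encoder_velocity():
        # Placeholder: return (linear vx, angular vth) measured from the encoders.
        return 0.0, 0.0

    rospy.init_node('wheel_odometry')
    odom_pub = rospy.Publisher('odom', Odometry)
    broadcaster = tf.TransformBroadcaster()

    x = y = th = 0.0
    last = rospy.Time.now()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        now = rospy.Time.now()
        dt = (now - last).to_sec()
        last = now
        vx, vth = read_encoder_velocity()
        # Dead-reckon the robot pose from the measured velocities.
        x += vx * math.cos(th) * dt
        y += vx * math.sin(th) * dt
        th += vth * dt
        q = tf.transformations.quaternion_from_euler(0, 0, th)
        # This odom -> base_link transform on /tf is what gmapping looks up.
        broadcaster.sendTransform((x, y, 0.0), q, now, 'base_link', 'odom')
        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = x
        odom.pose.pose.position.y = y
        odom.pose.pose.orientation = Quaternion(*q)
        odom.twist.twist.linear.x = vx
        odom.twist.twist.angular.z = vth
        odom_pub.publish(odom)
        rate.sleep()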

Or maybe I am heading in the wrong direction from my objective. Any hints are welcome, many thanks. ROS is too difficult for me :(

2014-05-12 14:45:27 -0500 received badge  Notable Question (source)
2014-05-11 18:08:34 -0500 commented question rosmake rgbdslam

No idea, but maybe you can try catkin_make: copy rgbdslam_freiburg to the src subdirectory of your catkin workspace and run catkin_make from the workspace root.

2014-05-08 16:42:23 -0500 commented answer how gmapping get odom data

I hate tuning these parameters too, but I just don't know how to provide data via /tf. I read the tf tutorials (one turtle chasing another) and got the idea that tf is used for converting x/y/z from one frame into another, but how do I use it as a topic? Please give me some help with tf.
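
To make my confusion concrete: is "providing data by /tf" just something like the following (untested) rospy loop? The identity transform and the odom/base_link names are only my guesses here.

    # Guess at providing data via /tf: broadcast a transform in a loop.
    # The sendTransform call publishes messages on the /tf topic.
    import rospy
    import tf

    rospy.init_node('odom_broadcaster')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        # Identity transform odom -> base_link (placeholder values).
        br.sendTransform((0, 0, 0), (0, 0, 0, 1), rospy.Time.now(), 'base_link', 'odom')
        rate.sleep()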

2014-05-08 16:36:26 -0500 commented answer how gmapping get odom data

I tried hector_mapping, but still get an error: [ERROR] [1399602478.182731899]: Transform failed during publishing of map_odom transform: "base_link" passed to lookupTransform argument source_frame does not exist, even after I set the output_frame_id param of depthimage_to_laserscan to base_link.

2014-05-08 16:34:13 -0500 received badge  Popular Question (source)
2014-05-08 15:51:03 -0500 commented answer how gmapping get odom data

@dornhege, thank you very much. You showed me the way, like a signpost at a crossroads or a lighthouse on the sea.

2014-05-07 22:51:24 -0500 asked a question how gmapping get odom data

In brief, I am trying to use slam_gmapping with a Kinect, but the result is very bad. No matter how I rotate or move the Kinect, the resulting map in rviz stays a fan-shaped area of no more than 90 degrees.

I am not even sure exactly what question to ask; I feel I don't grasp the whole picture yet :(

So maybe describing what I did in detail is a good idea. The question will be long, but easier to understand this way:

  1. In Ubuntu, open a terminal and type

    roscore
    
  2. Start the Kinect driver in another terminal

    rosrun openni_camera openni_node
    
  3. Convert the depth image into a LaserScan

    rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/depth/image_raw
    
  4. Run gmapping

    rosrun gmapping slam_gmapping scan:=/scan
    
  5. Run rviz to see the resulting "Map", "LaserScan" and "Image"

    rosrun rviz rviz
    

I got a warning at step 2. I hope it doesn't matter, because I can see a fairly good "Image" of the "/depth/image_raw" topic:

[ WARN] [1399535684.197109851]: ~device_id is not set! Using first device.
[ WARN] [1399535684.562974619]: Camera calibration file /home/vdonkey/.ros/camera_info/rgb_A00366913062050A.yaml not found.
[ WARN] [1399535684.563025039]: Using default parameters for RGB camera calibration.

I also got a warning at step 4:

[ WARN] [1399536805.888066710]: MessageFilter [target=odom ]: Dropped 100.00% of messages so far. Please turn the [ros.gmapping.message_notifier] rosconsole logger to DEBUG for more information.

This warning cannot be ignored, since I cannot see any map in rviz after adding a "Map" display subscribing to the "/map" topic. So I added this before step 4:

rosparam set /slam_gmapping/odom_frame camera_depth_frame

camera_depth_frame seems to be the default frame id of the /scan topic published by depthimage_to_laserscan.
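
One way to double-check that assumption would be to grab a single /scan message and print its frame_id, e.g. with a throwaway rospy snippet like this:

    # Quick check of the frame_id that depthimage_to_laserscan stamps on /scan.
    import rospy
    from sensor_msgs.msg import LaserScan

    rospy.init_node('check_scan_frame')
    msg = rospy.wait_for_message('/scan', LaserScan)
    print(msg.header.frame_id)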

But after that, gmapping's warning changed to:

[ WARN] [1399537552.284971741]: Failed to compute laser pose, aborting initialization ("base_link" passed to lookupTransform argument target_frame does not exist. )

So I also added this before step 4:

rosparam set /slam_gmapping/base_frame camera_depth_frame

This time I got no warning messages. After setting base_frame and odom_frame, I have "Map" and "LaserScan" showing in rviz.

The problem is, no matter how I rotate or move the Kinect, the resulting map always faces the same direction, and of course so do the laser scan lines. I have some guesses/questions:

  1. Maybe I need a wheel encoder?
  2. Where can I buy a proper wheel encoder? How do I drive it?
  3. What topic will the wheel encoder's driver publish, given that gmapping only subscribes to /tf and /scan?
  4. Topics subscribed can be regarded as input and topics published as output, but what about parameters? Input? Output? What about gmapping's base_frame and odom_frame? I think they should affect the "/map" topic, an output topic, so they seem like a kind of output, but the two "rosparam set" calls before running gmapping hint that they are inputs.
  5. Why did I get no warning even though I didn't do anything with the /tf topic? What should I do about /tf? Why do I always get a "no tf data received" PDF from "rosrun tf view_frames", even while gmapping is running? (My guess at a broadcaster is sketched just below this list.)
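
Regarding question 5, my current untested guess is that something has to broadcast the missing frames on /tf before gmapping and view_frames can see anything, either with the tf package's static_transform_publisher tool or with a small loop like the sketch below (the zero offset and the frame names are only assumptions about my setup):

    # Guess: broadcast a fixed base_link -> camera_depth_frame transform so that
    # gmapping (and view_frames) can find the frames it asks for.
    import rospy
    import tf

    rospy.init_node('fixed_frame_broadcaster')
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        now = rospy.Time.now()
        # Kinect assumed to sit at the robot origin with no offset.
        br.sendTransform((0, 0, 0), (0, 0, 0, 1), now, 'camera_depth_frame', 'base_link')
        rate.sleep()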

Thank you for reading this long question. I hope I can understand the ... (more)

2014-05-06 19:19:54 -0500 received badge  Editor (source)
2014-05-06 19:16:03 -0500 answered a question rgbdslam fuerte blackscreen

Hello, Miss Meng. I was confused by the black screen too, at the beginning. After a long time learning the basic concepts of ROS (package, node, topic, etc.), I got the idea that most packages subscribe to some topics and publish others, or show something on screen. I saw your screen capture; it seems you have not opened other Ubuntu terminals to run the other packages that provide the topics rgbdslam needs. I guess that is why you got a black screen.

I don't know whether you want to use a stereo camera or a Kinect as the image source, but maybe looking at my question will give you some ideas. I used stereo cameras, and in the end I answered it myself with all the commands I typed:

http://answers.ros.org/question/15463...

2014-05-05 20:48:29 -0500 commented question Converting stereo_msgs/DisparityImage to sensor_msgs/Image

Hi, have you worked it out? I have the same question.

2014-05-05 20:48:29 -0500 received badge  Commentator
2014-05-04 21:55:26 -0500 commented answer cant locate the node rgbdslam

I know now that using catkin_make is a better idea.

2014-05-04 20:24:51 -0500 commented answer how to use rgbdslam

A weak question: is "viso2_ros" a replacement for rgbdslam or for stereo_image_proc? I ran viso2_ros and got the warning "Visual Odometer got lost!" without any window showing. It needs stereo_image_proc but publishes points2 as well. Who is responsible for showing the points for it? Confused.

2014-05-04 16:58:38 -0500 answered a question how to use rgbdslam

Many thanks to Ken and rock-ass.

Since I have succeeded in viewing 3D images in rgbdslam with your kind help, I have decided to show my commands here. I am not confident I am using ROS the right way, but "comment" has a word limit.

I am using a 3D camera with two "eyes" but only one USB cable. 640x480 is the maximum resolution supported for a single eye, but 640x480 with both eyes will not work because of the USB bandwidth limit, so 352x288 is the maximum for stereo here.

All the packages I used are: usb_cam, camera_calibration, camera_calibration_parsers, stereo_image_proc, image_view (optional), and rgbdslam. usb_cam is the camera driver, and I used camera_calibration to calibrate my stereo cameras.

Open 1st terminal, run

roscore

Open 2nd terminal, run (drive the left eye)

rosparam set /stereo/left/video_device /dev/video2
rosparam set /stereo/left/image_width 352
rosparam set /stereo/left/image_height 288
rosparam set /stereo/left/pixel_format yuyv
rosparam set /stereo/left/camera_name left
rosrun usb_cam usb_cam_node /usb_cam:=/stereo/left/

Open 3rd terminal, run (drive the right eye)

rosparam set /stereo/right/video_device /dev/video1
rosparam set /stereo/right/image_width 352
rosparam set /stereo/right/image_height 288
rosparam set /stereo/right/pixel_format yuyv
rosparam set /stereo/right/camera_name right
rosrun usb_cam usb_cam_node /usb_cam:=/stereo/right/

Open 4th terminal, run (do calibration)

rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.049 left:=/stereo/left/image_raw right:=/stereo/right/image_raw left_camera:=/stereo/left right_camera:=/stereo/right --approximate=0.01 --no-service-check

Notice that, before calibration, the left and right cameras will show warnings saying they cannot find some yaml files. That is okay at this point.

click "calibrate" button after it is enabled after taking 40 or more chessboard pictures. wait patiently. after quite a while, the calibration is done, check the epi value in top-right corner (let the camera see the chessboard again). if the epi is less than 2.5 it is acceptable, better value under 1.0. Then click "save". "commit" is a terrible button for me, because it will sometimes make my ubuntu reboot. just ignore the "commit" button.

"save" gives a /tmp/calibration.tar.gz(file name not sure). extract ost.txt out to /tmp/ost.ini, cut the ini file into two files manually half-half, for example /tmp/ost_left.ini and /tmp/ost_right.ini, then convert the ini files into yamls:

rosrun camera_calibration_parsers convert /tmp/ost_left.ini ~/.ros/camera_info/left.yaml
rosrun camera_calibration_parsers convert /tmp/ost_right.ini ~/.ros/camera_info/right.yaml
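
If you do not want to cut the file by hand, a throwaway script along these lines could do the split. The marker line it looks for is only an assumption about how the two camera blocks start, so inspect your own ost file first and adjust it:

    # Throwaway helper: split a two-camera ost.ini into left/right halves at the
    # second occurrence of a marker line. The marker is an assumption; check your
    # file and adjust it (e.g. "# oST version 5.0 parameters" or "[image]").
    import sys

    def split_ost(path, left_out, right_out, marker='[image]'):
        with open(path) as f:
            lines = f.readlines()
        # Indices of lines where a camera block appears to begin.
        starts = [i for i, line in enumerate(lines) if line.strip() == marker]
        if len(starts) < 2:
            sys.exit('expected two "%s" blocks, found %d' % (marker, len(starts)))
        with open(left_out, 'w') as f:
            f.writelines(lines[:starts[1]])
        with open(right_out, 'w') as f:
            f.writelines(lines[starts[1]:])

    if __name__ == '__main__':
        split_ost('/tmp/ost.ini', '/tmp/ost_left.ini', '/tmp/ost_right.ini')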

Restart the commands in the 2nd and 3rd terminals (Ctrl+C and run again); this time there should be no warning messages.

Open 5th terminal, run (mainly generate point cloud and disparity)

rosparam set /stereo/stereo_image_proc/approximate_sync true
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc

The calibration must be done first, or stereo_image_proc will complain that the camera is not calibrated.

Open 6th terminal, run (showing left, right and disparity runtime images)

rosrun image_view stereo_view stereo:=stereo image:=image_rect _approximate_sync:=true

Here is the chance to see the disparity image. If the disparity is not good enough, it is a good idea ... (more)

2014-04-29 19:25:13 -0500 commented question how to use rgbdslam

But unfortunately the centre image, the disparity image, has a strange vertical dividing line on it, hard to describe. And the SLAM processing often stops; I don't know if the stopping is related to that dividing line.

2014-04-29 19:22:29 -0500 commented question how to use rgbdslam

Thanks Ken. I did see the clouds in rviz. I also succeeded in seeing the same thing in rgbdslam, after I set wide_topic and wide_cloud_topic, which default to "/camera/rgb/image_rect_color" and "/camera/depth_registered/image_rect".

2014-04-28 01:27:53 -0500 commented question Translating coordinate frames? Or RGBDSLAM axis orientation?

Hello, Jon Stephan, I'm a beginner. I want to use stereo_image_proc (call it SIP) with rgbdslam too. I can see the disparity in image_view receiving topics from SIP. What should come next? rgbdslam seems to listen to /camera/rgb/image_rect_color and /camera/depth_registered/image_rect, but SIP doesn't provide them.

2014-04-28 00:38:32 -0500 commented question how to use rgbdslam

Hi, Ken. After studying these days, I can finally get the disparity shown by "image_view", with "stereo_image_proc" (1) sending the topics to it. "camera_calibration" and "usb_cam" are also used. But you know my target is "rgbdslam" (2); would you hint to me how to link 1 and 2? Will "rviz" get involved?

2014-04-24 04:14:13 -0500 received badge  Popular Question (source)
2014-04-23 19:16:17 -0500 commented question how to get catkin package recognized by rosrun

Thanks, Ken. It works, you are great. I know copying the files and running the node directly without rosrun can hardly be a good answer, but I just could not find the correct one after reading tutorials and PPT courses by myself.

2014-04-23 17:10:18 -0500 asked a question how to get catkin package recognized by rosrun

I created a catkin workspace folder ~/catkin-ws and downloaded the usb_cam ( http://wiki.ros.org/usb_cam ) source into ~/catkin-ws/src. I ran catkin_make and ~/catkin-ws/devel/lib/usb_cam/usb_cam_node was created.

But when I type rosrun usb[TAB key], nothing happens.

I think this is because usb_cam is not registered with the ROS system yet, since there is no usb_cam in /opt/ros/hydro/lib (I am using Hydro). So I tried this:

sudo cp -r ~/catkin-ws/devel/lib/* /opt/ros/hydro/lib/

Still not working. How do I get a catkin-compiled package recognized by rosrun?

By the way, running "~/catkin-ws/devel/lib/usb_cam/usb_cam_node" directly does work. Is that the correct way?