
Rydel's profile - activity

2022-03-10 15:49:29 -0500 received badge  Nice Question (source)
2018-03-21 13:26:56 -0500 received badge  Good Question (source)
2017-02-05 03:04:29 -0500 received badge  Favorite Question (source)
2016-03-02 12:10:58 -0500 received badge  Nice Answer (source)
2016-02-05 09:51:49 -0500 received badge  Nice Question (source)
2014-04-20 12:55:52 -0500 marked best answer modifying a known map gmapping

I was wondering if it's possible to create a partial map using the navigation stack and then reload that map later to complete it. Correct me if I'm wrong, but it seems you can load previously saved maps into the amcl demo, though that program does not build on the map. Trying the same command with the gmapping demo doesn't seem to work:

$ roslaunch turtlebot_navigation gmapping_demo.launch map_file:=/tmp/my_map.yaml

How does one complete a partial map? Also, is it possible to edit the map.pgm file in something like gpaint to correct small errors?

2014-04-20 12:53:37 -0500 marked best answer turtlebot navigation almost working

I'm using the turtlebot_navigation gmapping_demo.launch along with turtlebot_teleop turtlebot_teleop_key. When running the gmapping_demo, the first section of the map it produces is always 100% correct. However, when I start driving the turtlebot around and watch the map being built in rviz, the robot moves a lot farther in real life than it thinks it does in rviz. Essentially, the laser scan ends up looking at an adjacent wall in real life while in rviz the robot has only rotated about 5 degrees, causing overlap in the map and making rooms always look cone shaped. The robot in rviz also jumps around randomly sometimes.

The fixed frame in rviz is set to /map and the transform tree all checks out. I've tried calibrating the bot; the correction values were only about 1.004, and when I changed them using the dynamic method it didn't seem to help. If anything, it made things worse. The calibration process also seemed a little strange: the robot turned 360 degrees twice at a slow speed, overshooting by about 20 degrees; it readjusted itself to the wall and then turned 360 degrees again at a slightly faster speed, again overshooting; the third time it undershot, and the last two times it was perfect. I am using minimal.launch from turtlebot_bringup. While dynamically reconfiguring gyro_measurement_range and odom_angular_scale_correction, the turtlebot terminal only reports gyro_measurement_range and gyro_scale_correction as changing. Am I missing something here? Is there a set speed that I should adjust the teleop to? The community has been very helpful so far; thanks.

2014-04-20 12:53:34 -0500 marked best answer Precise openni kinect rviz

Hi guys, I'm having trouble viewing images in rviz using the openni_launch stack. I recently updated my machine to Ubuntu 12.04 and ROS Fuerte. After the update I cannot view any images. The openni.launch file reports a few major errors along with a lot of exceptions, and I don't understand what it is saying. Here is the output:

[ERROR] [1337870775.117975609]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressed/set_parameters]
[ERROR] [1337870775.124459412]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/theora/set_parameters]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/points_xyzrgb_depth_rgb-15]: started with pid [6752]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/disparity_depth-16]: started with pid [6802]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera/disparity_depth_registered-17]: started with pid [6852]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera_base_link-18]: started with pid [6872]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera_base_link1-19]: started with pid [6895]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera_base_link2-20]: started with pid [6920]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[camera_base_link3-21]: started with pid [6942]

And then finally, when I try to view an image in rviz following the guide here: link text, rviz says no messages received for the topic in the GUI, and in the terminal it sometimes says dropped 100.00% of messages so far and sometimes it doesn't.

On Maverick and Electric I successfully had rgb-d slam up and running.

Finally, I am new to ROS and have finished the main tutorial, but I'm still wondering whether there are any other useful guides out there to learn a bit more, especially about the best stack for controlling the iRobot Create, or the best place to get started writing one's own packages that interact with other open source packages. It seems very overwhelming when most packages have thousands of lines of code. I am running on a Bilibot, by the way, which is basically just a poorly documented, modified turtlebot with no community.

I am also following this thread: link text. But our problems seem to be slightly different.

2014-04-20 12:53:14 -0500 marked best answer /odom to /base_footprint

My question: Hey everyone, what is the best way to connect odom to base_link? Should I modify the navigation stack to look for odom_combined instead, or would that be too difficult?

Explanation/background: I'm using the iRobot Create and the create_node package. The package works great and I can send cmd_vel commands via the keyboard and it responds. However, in rxgraph I can see that create_node is publishing /odom, but it is not being sent to tf, and therefore it is not the parent of base_link. When I view the tf tree, odom_combined is the parent of base_link and odom is not connected, but none of the navigation stacks care about odom_combined; they only want odom. Running on a modified turtlebot.

2014-04-20 06:51:32 -0500 marked best answer ROS C++ custom class compiler and linker

Hi, for some reason I cannot figure out how to get my C++ ROS node to compile correctly with two source files and a header file. One source file is simply the main, and the other two files are a custom class: the header holds the function declarations and the .cpp contains the actual definitions (body). It's a pretty standard layout. Anywho, where should I put the class .h and .cpp files so they compile and link correctly, and do I have to do anything special with CMakeLists?

Thanks!
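For what it's worth, here is a minimal sketch of the relevant CMakeLists.txt lines, assuming a rosbuild-era package (Fuerte and earlier) with headers under include/ and sources under src/; the file names main.cpp and my_class.cpp are hypothetical:

```cmake
# Make headers in include/ visible to all sources.
include_directories(${PROJECT_SOURCE_DIR}/include)

# List every .cpp belonging to the node; the header needs no explicit
# entry, it is found through the include path.
rosbuild_add_executable(my_node src/main.cpp src/my_class.cpp)
```

The key point is that the header is never compiled on its own; only the .cpp files are listed, and they all go into the same executable target so the linker can resolve the class definitions.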

2014-04-20 06:51:27 -0500 marked best answer turtlebot robot_pose_ekf/odom? or odom?

Hi, I was wondering which is more accurate for turtlebot odometry: /robot_pose_ekf/odom or just /odom.

If it's robot_pose_ekf/odom, why would the turtlebot gmapping and amcl demos have the move_base node subscribing to /odom instead of robot_pose_ekf/odom?

2014-04-20 06:51:21 -0500 marked best answer turtlebot sensors

How can I turn off the cliff sensors, or deactivate them for a certain amount of time? I'm driving the robot into an elevator, but it can't quite make it in because of the small gap down the shaft; my cmd_vel commands keep getting ignored because the cliff sensors activate.

2014-04-20 06:51:18 -0500 marked best answer int callback

Is it possible to have a callback function with a return type other than void? If yes, how would you handle a situation where you call ros::spinOnce() while subscribing to two different topics, each with its own callback function? How would you store the value that one of the callback functions returned?

2014-04-20 06:51:16 -0500 marked best answer turning turtlebot a set number of degrees

I'm looking for a way to turn my robot a certain angle in a C++ node. I have read through these questions:

http://answers.ros.org/question/12557/how-to-make-the-turtlebot-rotate-in-place-a-set-of/

The above made sense, so I then looked for a way to acquire the heading of the robot:

http://answers.ros.org/question/30926/getting-turtlebot-heading/

The user in the above question can already get their robot's quaternion data and I cannot, so the answer isn't very helpful to me. Could someone explain an efficient way to acquire the robot's quaternion data, and explain how the answer in the second link works?

Thanks!

2014-04-20 06:51:15 -0500 marked best answer kinect /scan data format

Just a quick question: does the scan topic's (http://www.ros.org/doc/api/sensor_msgs/html/msg/LaserScan.html) ranges array hold 360 values, each representing one degree of a circle, for the Microsoft Kinect?

For example, I've realized that index 180 (or 179?) is directly in front of the camera; does that make the next index 1 degree to the left/right?

One more attempt at clarity, sorry I'm having trouble explaining this in a way I feel is clear: if the Kinect is vertex A, do the depth scan rays ranges[180] and ranges[181] make a 1 degree angle at vertex A?

Thanks for any help!

2014-04-20 06:51:10 -0500 marked best answer ROS offline

Is there any way to run a ROS "network" offline? For example, on a computer without an internet connection, how can one start a roscore and other nodes that only communicate with each other on that machine?
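For what it's worth, roscore itself needs no internet connection; it only needs name resolution to stay local. A sketch of the usual environment settings for a single offline machine (assuming the default master port 11311):

```sh
# Point everything at the local machine; no external network needed.
export ROS_HOSTNAME=localhost
export ROS_MASTER_URI=http://localhost:11311

roscore &                        # start the master locally
rosrun rospy_tutorials talker    # nodes then talk over localhost only
```

All node-to-master and node-to-node traffic then stays on the loopback interface.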

2014-04-20 06:51:01 -0500 marked best answer programming with kinect rgb

I'm writing a program which requires an RGB image from the Kinect. Ideally I want something like a horizontal slice, like the /scan topic only with RGB points instead of depth points, but it's not necessary.

What would be the best topic to subscribe to? camera/rgb/image_color? camera/rgb/image_raw? or something else?

Lastly, has anyone had any experience organizing the data in the above topics? I was surprised when I saw the data in rostopic being published as a 1-dimensional array instead of a 3-dimensional one. As I said before, I simply would like to extract the RGB values for a horizontal slice in the middle of the image.

Any help or advice appreciated!

2014-04-20 06:50:57 -0500 marked best answer subscribing and publishing

I have an application where two nodes are talking to each other over a certain topic; depending on where the nodes are in the program flow, they either care about the topic or they don't. My question is something the tutorials didn't help with: is there a command that can check a topic on demand, or check a specific piece of a topic on demand?

elevator_talk.publish(status);

works great for publishing on demand; is there something similar for subscribing?

2014-04-20 06:50:57 -0500 marked best answer Accessing Turtlebot iRobot Create Sensors

I noticed that when navigating with the gmapping demo or amcl demo, the turtlebot stops when the wheel drop sensors or bump sensors are active. Just wondering if anyone knows which node takes care of this (move_base?) and which source file it can be found in.