
Bart's profile - activity

2021-08-09 13:04:16 -0500 received badge  Great Answer (source)
2019-08-30 06:30:25 -0500 received badge  Guru (source)
2019-08-30 06:30:25 -0500 received badge  Great Answer (source)
2016-06-20 18:01:20 -0500 received badge  Nice Question (source)
2016-06-20 18:01:18 -0500 received badge  Nice Answer (source)
2016-01-31 20:04:24 -0500 received badge  Great Answer (source)
2016-01-31 20:04:24 -0500 received badge  Guru (source)
2015-10-20 05:56:20 -0500 received badge  Nice Answer (source)
2014-05-11 08:55:29 -0500 received badge  Self-Learner (source)
2014-01-28 17:22:09 -0500 marked best answer How can ROS communicate with my microcontroller?

Some recent questions have shown an interest in how ROS can be used to run a motor or read a sensor on a custom robot. Many hobby robots have one or more microcontrollers that interface with the robot hardware. These microcontrollers should be able to communicate over a serial connection with a robot-mounted laptop running ROS.

The avr_bridge is a sophisticated approach, but it requires a larger AVR processor and C++ code development on the microcontroller, which may not be practical in some circumstances. Is there a simpler alternative?
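
A minimal sketch of the kind of alternative I have in mind: the microcontroller prints one integer sensor reading per line over the serial port, and a small roscpp node republishes it. The device path, topic name, and line format below are assumptions for illustration, and the port is assumed to be pre-configured (e.g. with stty) to the microcontroller's baud rate.

    // serial_bridge.cpp -- republish newline-delimited integers from a serial port
    #include <ros/ros.h>
    #include <std_msgs/Int32.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "serial_bridge");
      ros::NodeHandle nh;
      ros::Publisher pub = nh.advertise<std_msgs::Int32>("sensor_value", 10);

      FILE* port = fopen("/dev/ttyUSB0", "r");  // assumed device path
      if (!port)
      {
        ROS_FATAL("could not open serial port");
        return 1;
      }

      char line[64];
      while (ros::ok() && fgets(line, sizeof(line), port))
      {
        std_msgs::Int32 msg;
        msg.data = atoi(line);  // one ASCII integer per line from the MCU
        pub.publish(msg);
      }
      fclose(port);
      return 0;
    }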

2014-01-28 17:22:05 -0500 marked best answer How to configure Eclipse to properly load a plugin?

I am trying to use Eclipse to debug the navigation stack (or debug how I am misusing it). I have created an Eclipse project for move_base (make eclipse-project in the move_base package), but am unsure how to deal with the plugins that need to be loaded. During initialization of the MoveBase class I get the following error:

[rospack] couldn't find package [nav_core]
terminate called after throwing an instance of 'pluginlib::LibraryLoadException'
  what():  rospack could not find the nav_core package containing nav_core::BaseGlobalPlanner

I appreciate that many ROS developers use command line tools, but I have enjoyed using Eclipse on simpler projects. Is there an Eclipse project configuration step that I am likely missing to allow loading/debugging of plugins?
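
My current guess at the cause, for reference: rospack resolves packages through ROS_PACKAGE_PATH, which Eclipse does not inherit unless it is launched from a shell with the ROS setup sourced. Adding the variables to the run configuration's Environment tab should let pluginlib locate nav_core. The paths below are examples for a standard Diamondback install; adjust them to your own workspace.

    ROS_ROOT=/opt/ros/diamondback/ros
    ROS_PACKAGE_PATH=/opt/ros/diamondback/stacks:/home/user/ros_workspace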

2014-01-28 17:21:58 -0500 marked best answer roscpp message callback threading?

If a simple node with ros::spin() in the main loop subscribes to two messages, will the message callbacks be called sequentially, or could they be processed in parallel?

Is the GlobalCallbackQueue global to all nodes/ROS, or just the node making the call?

I reviewed roscpp/Overview/Callbacks, and it indicates that there are one or more callback queues, but it doesn't confirm whether one callback completes before the next is initiated.
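
To make the question concrete, here is the pattern I am asking about (topic names are placeholders). My reading is that ros::spin() services the queue from a single thread, so the callbacks would run sequentially; the commented-out MultiThreadedSpinner would be the way to allow them to run in parallel.

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    void callbackA(const std_msgs::String::ConstPtr& msg)
    {
      // Does callbackB wait until this returns?
      ROS_INFO("A: %s", msg->data.c_str());
    }

    void callbackB(const std_msgs::String::ConstPtr& msg)
    {
      ROS_INFO("B: %s", msg->data.c_str());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "two_subscribers");
      ros::NodeHandle nh;
      ros::Subscriber sub_a = nh.subscribe("topic_a", 10, callbackA);
      ros::Subscriber sub_b = nh.subscribe("topic_b", 10, callbackB);

      ros::spin();  // one thread servicing this node's global callback queue
      // ros::MultiThreadedSpinner spinner(2); spinner.spin();  // parallel variant
      return 0;
    }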

2014-01-28 17:21:58 -0500 marked best answer Posting code in answers and copyright statement?

Is it appropriate to post a program listing as an answer to a question? (there is a "preformatted text" tool to assist with this)

When posting code for ROS here or elsewhere, should the Willow Garage copyright/limitations notice always be included?

(I have some pointcloud to laserscan code I can share)

2014-01-28 17:21:58 -0500 marked best answer Pointcloud_to_laserscan ranges angular min and max?

The pointcloud_to_laserscan package in the turtlebot stack provides the cloud_to_scan.cpp nodelet. This program defines the laserscan angular field of view from -90 degrees to +90 degrees (180 total), with a 0.5 degree angular spacing between beams. The Kinect has a field of view of only 55 to 57 degrees, so the extra range values on either side of it in the sensor_msgs::LaserScan message are padded with max_range+1. Is there a technical reason for the extra range values being passed around?

In the amcl_turtlebot launch file, laser_max_beams is defined as value="30". How does this relate to the 110 range readings in the Kinect laserscan message?
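
For reference, the padding I am describing, reduced to a sketch. The field-of-view numbers match cloud_to_scan.cpp; everything else (function name, range_max value) is just for illustration.

    #include <sensor_msgs/LaserScan.h>
    #include <cmath>

    sensor_msgs::LaserScan makePaddedScan()
    {
      sensor_msgs::LaserScan scan;
      scan.angle_min = -M_PI / 2.0;         // -90 degrees
      scan.angle_max = +M_PI / 2.0;         // +90 degrees
      scan.angle_increment = M_PI / 360.0;  // 0.5 degree spacing
      scan.range_max = 10.0;

      // 180 / 0.5 = 360 beams, all initialized to the "invalid" padding
      // value; only the ~57 degree Kinect window in the middle is later
      // overwritten with real ranges.
      const int n = (int)ceil((scan.angle_max - scan.angle_min) / scan.angle_increment);
      scan.ranges.assign(n, scan.range_max + 1.0);
      return scan;
    }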

2014-01-28 17:21:58 -0500 marked best answer How to create a 180 degree laserscan from a panning Kinect?

The Kinect has a narrow (57 degree) field of view, which is limiting for obstacle avoidance, navigation, and map making, as pointed out elsewhere by others. My Kinect is mounted on a pan/tilt mechanism, so I should be able to pan the sensor and create a wider simulated laserscan (though with an admittedly lower scan rate than a real laser). An advantage, however, is that the Kinect should be able to identify obstacles over the full height of the robot, rather than just at the laser elevation. Does anyone have experience or example code available for this task?
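
My current plan, as a sketch: keep a wide 180 degree scan buffer and, at each pan position, copy the narrow Kinect-derived scan into the slots offset by the current pan angle, read from tf. Frame names, topic names, and the simple overwrite policy (stale beams are never cleared) are all assumptions.

    #include <ros/ros.h>
    #include <sensor_msgs/LaserScan.h>
    #include <tf/transform_listener.h>
    #include <cmath>

    sensor_msgs::LaserScan wide;     // accumulated 180 degree scan
    tf::TransformListener* listener;
    ros::Publisher wide_pub;

    void scanCallback(const sensor_msgs::LaserScan::ConstPtr& narrow)
    {
      // The pan angle is the yaw of the camera frame relative to the base.
      tf::StampedTransform t;
      try
      {
        listener->lookupTransform("base_link", narrow->header.frame_id,
                                  ros::Time(0), t);
      }
      catch (const tf::TransformException&) { return; }
      double pan = tf::getYaw(t.getRotation());

      // Copy each narrow beam into the wide buffer at its panned angle.
      for (size_t i = 0; i < narrow->ranges.size(); ++i)
      {
        double angle = pan + narrow->angle_min + i * narrow->angle_increment;
        int j = (int)((angle - wide.angle_min) / wide.angle_increment);
        if (j >= 0 && j < (int)wide.ranges.size())
          wide.ranges[j] = narrow->ranges[i];
      }
      wide.header.stamp = narrow->header.stamp;
      wide_pub.publish(wide);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "panning_scan_assembler");
      ros::NodeHandle nh;
      listener = new tf::TransformListener;

      wide.header.frame_id = "base_link";
      wide.angle_min = -M_PI / 2.0;
      wide.angle_max = +M_PI / 2.0;
      wide.angle_increment = M_PI / 360.0;
      wide.range_max = 10.0;
      wide.ranges.assign(360, wide.range_max + 1.0);

      wide_pub = nh.advertise<sensor_msgs::LaserScan>("wide_scan", 1);
      ros::Subscriber sub = nh.subscribe("narrow_scan", 10, scanCallback);
      ros::spin();
      return 0;
    }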

2014-01-28 17:21:55 -0500 marked best answer pointcloud to laserscan with transform?

Has anyone written a nodelet that can be applied between the cloud_throttle nodelet and the cloud_to_scan nodelet in the pointcloud_to_laserscan package in the turtlebot stack? This nodelet would be used to provide a horizontal laserscan relative to the base_footprint frame when the Kinect is tilted downwards to provide a better view of the ground area just in front of the robot. (or is there a better way to accomplish the same objective?)
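
A sketch of what I mean, written as a plain node for brevity (a nodelet version would wrap the same callback); the frame and topic names are assumptions. It simply re-expresses the tilted Kinect cloud in the horizontal base_footprint frame before cloud_to_scan sees it.

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <tf/transform_listener.h>
    #include <pcl_ros/transforms.h>

    tf::TransformListener* listener;
    ros::Publisher pub;

    void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& in)
    {
      // Re-express the tilted cloud in the horizontal base frame so the
      // downstream cloud_to_scan nodelet produces a level laserscan.
      sensor_msgs::PointCloud2 out;
      if (pcl_ros::transformPointCloud("base_footprint", *in, out, *listener))
        pub.publish(out);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "cloud_to_base_footprint");
      ros::NodeHandle nh;
      listener = new tf::TransformListener;
      pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_level", 1);
      ros::Subscriber sub = nh.subscribe("cloud_throttled", 1, cloudCallback);
      ros::spin();
      return 0;
    }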

2014-01-28 17:21:47 -0500 marked best answer Timer callbacks not working

Previously working timers in some of my packages/nodes have stopped working. The roscpp_tutorials/Tutorials/Timers example doesn't work either: the timer callbacks never execute. The program exits normally from the spin function with Ctrl-C.

I am using Ubuntu 10.04, Diamondback and an Acer AX3400 (AMD Athlon II). Internet searches haven't uncovered any other examples of this problem. Hints?
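
For completeness, the minimal pattern that fails for me, essentially the Timers tutorial reduced to its core. The node starts cleanly, but the callback never prints.

    #include <ros/ros.h>

    void timerCallback(const ros::TimerEvent&)
    {
      ROS_INFO("timer fired");  // never printed on my machine
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "timer_test");
      ros::NodeHandle nh;
      ros::Timer timer = nh.createTimer(ros::Duration(1.0), timerCallback);
      ros::spin();
      return 0;
    }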

2013-02-23 08:23:22 -0500 received badge  Notable Question (source)
2013-02-23 08:23:22 -0500 received badge  Famous Question (source)
2013-02-23 08:23:22 -0500 received badge  Popular Question (source)
2012-11-21 18:04:27 -0500 received badge  Nice Question (source)
2012-10-29 23:35:16 -0500 received badge  Famous Question (source)