
evanmj's profile - activity

2019-03-07 06:44:54 -0500 received badge  Favorite Question (source)
2015-06-04 14:45:32 -0500 answered a question how to connect to Servo Drive(googoltech CDHD ServoDrive ethercat) using ros_Ethercat and ros_control

I think what you are looking for is http://wiki.ros.org/soem

" SOEM is an open source EtherCAT master library written in c"

It supports SoE- and CoE-based drives according to its feature list (as well as many other EtherCAT features); however, I haven't personally tried it on any hardware.

2013-05-31 09:12:23 -0500 received badge  Stellar Question (source)
2012-04-30 06:16:55 -0500 marked best answer Voice commands / speech to and from robot?

I have used the sound_play package with festival to synthesize voices to make my robot "talk", but I would also like to be able to command the robot by voice.
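For reference, the speaking side only needs a few lines with the sound_play client. A minimal sketch (the festival voice name below is an assumption; omit it to use the default voice):

    #!/usr/bin/env python
    # Minimal sound_play example: have the robot speak a phrase through festival.
    # The voice name is an assumption; omit it to use the default voice.
    import rospy
    from sound_play.libsoundplay import SoundClient

    rospy.init_node('say_hello')
    voice = SoundClient()
    rospy.sleep(1.0)   # give the sound_play node time to see this publisher
    voice.say('Hello, I am listening for commands', 'voice_kal_diphone')
    rospy.sleep(2.0)   # let the message go out before the node exits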

Basically, I suspect someone has already used a tool like CMU Sphinx with ROS, but I am unable to find any examples.

2012-04-07 20:42:00 -0500 received badge  Famous Question (source)
2012-02-24 07:22:18 -0500 answered a question ros installation on panda board

I would suggest you separate the motor control, encoders, and accelerometer (and various other sensors) out into a separate microcontroller. As mentioned above, the Linux kernel will not handle realtime very well by default. I suggest using an Arduino and the avr_bridge package to get information to and from ROS. There are examples of using an Arduino as a motor control interface with ROS odometry, and similarly plenty of Arduino-based balancing robots. Between the two sets of examples, you could manage a balancing robot with ROS odometry.
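For illustration only, here is a minimal rospy sketch of the PC-side half of that split: it listens on cmd_vel and forwards velocity commands to the microcontroller over a serial link. The port name and the ASCII command format are made-up assumptions; avr_bridge generates its own message handling, so this is just to show the division of labor:

    #!/usr/bin/env python
    # Sketch of the PC-side bridge: ROS handles planning and odometry, the
    # microcontroller handles the realtime motor/encoder loop. The serial port
    # and the ASCII "V <linear> <angular>" command format are made up here.
    import rospy
    import serial
    from geometry_msgs.msg import Twist

    class MotorBridge(object):
        def __init__(self):
            port = rospy.get_param('~port', '/dev/ttyUSB0')   # assumed default
            baud = rospy.get_param('~baud', 57600)
            self.ser = serial.Serial(port, baud, timeout=0.1)
            rospy.Subscriber('cmd_vel', Twist, self.cmd_cb)

        def cmd_cb(self, msg):
            # Forward the commanded velocities; the microcontroller closes the
            # PID loop (and, eventually, the balancing loop) on its own.
            line = 'V %.3f %.3f\n' % (msg.linear.x, msg.angular.z)
            self.ser.write(line.encode('ascii'))

    if __name__ == '__main__':
        rospy.init_node('motor_bridge')
        MotorBridge()
        rospy.spin()

The corresponding Arduino sketch would parse those lines and run its control loop at a fixed rate, independent of anything happening on the Linux side.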

The Segway route is the "easy way out" if you want to use the higher-level ROS software such as mapping and navigation without working out the PID and feedback loops of a balancing implementation yourself; the problem is that the Segway bases are quite expensive.

You should be able to mesh multiple Arduino examples together to make a balancing robot, then add ROS control via the PandaBoard.

However you approach it, for the ROS portion I would suggest isolating the balancing behavior into its own function and getting the robot working with ROS using standard motor control first (with outriggers to keep it upright). Treat them as separate projects until both are far enough along to merge.

2011-12-09 07:34:19 -0500 received badge  Taxonomist
2011-11-27 07:04:04 -0500 received badge  Famous Question (source)
2011-07-07 03:36:22 -0500 received badge  Great Question (source)
2011-06-20 20:39:51 -0500 received badge  Notable Question (source)
2011-05-04 01:13:29 -0500 received badge  Notable Question (source)
2011-04-02 03:39:28 -0500 received badge  Popular Question (source)
2011-03-25 09:19:33 -0500 received badge  Popular Question (source)
2011-03-22 02:44:42 -0500 received badge  Favorite Question (source)
2011-03-12 12:59:45 -0500 received badge  Good Question (source)
2011-03-10 15:09:49 -0500 marked best answer Navigation planning based on kinect data in 2.5D?

The navigation stack will support most of your use case out of the box, because you have a true long-range laser for localization (which the other question you reference did not). You might note that this is nearly the same set of inputs the PR2 uses: a Hokuyo laser on the base for localization and obstacle avoidance, and a stereo camera rig for additional obstacle avoidance in 3D.

When configuring your robot's launch and parameter files, use only the SICK laser as input to AMCL for localization. Then use both the SICK and the Kinect data as observation sources for the local costmap (see http://www.ros.org/wiki/costmap_2d for details on parameters).

It might also be advisable to set up a voxel grid filter to downsample and clean your Kinect data before sending it into costmap_2d. The pcl_ros package contains a nodelet that can do this, so you could configure it entirely in a launch file without any new custom code (see http://www.ros.org/wiki/pcl_ros/Tutorials/VoxelGrid%20filtering for details).
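Purely to illustrate what that filter does (this is not the pcl_ros nodelet, and the launch-file route needs no code at all), here is a rough numpy-based sketch that bins points into voxels and republishes one point per occupied voxel; the leaf size and topic names are assumptions:

    #!/usr/bin/env python
    # Illustration of voxel-grid downsampling in plain numpy. The pcl_ros
    # VoxelGrid nodelet does the same job (faster) with no custom code; the
    # leaf size and topic names here are assumptions.
    import rospy
    import numpy as np
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    LEAF = 0.05  # voxel edge length in metres

    def cloud_cb(msg):
        pts = np.array(list(point_cloud2.read_points(
            msg, field_names=('x', 'y', 'z'), skip_nans=True)))
        if pts.size == 0:
            return
        # Snap each point to a voxel index and keep one point per occupied voxel.
        voxels = np.floor(pts / LEAF).astype(np.int32)
        _, keep = np.unique(voxels, axis=0, return_index=True)
        pub.publish(point_cloud2.create_cloud_xyz32(msg.header, pts[keep].tolist()))

    if __name__ == '__main__':
        rospy.init_node('voxel_downsample_sketch')
        pub = rospy.Publisher('points_downsampled', PointCloud2, queue_size=1)
        rospy.Subscriber('camera/depth/points', PointCloud2, cloud_cb)
        rospy.spin()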

2011-03-06 06:16:12 -0500 received badge  Nice Question (source)
2011-02-27 06:23:14 -0500 received badge  Editor (source)
2011-02-27 06:18:13 -0500 commented answer Navigation stack in 3D
My response to this question turned into a question of its own instead. Maybe it will help in the short term: http://answers.ros.org/question/189/navigation-planning-based-on-kinect-data-in-25d
2011-02-27 06:17:35 -0500 asked a question Navigation planning based on kinect data in 2.5D?

I have a wheeled robot with a front mounted SICK laser scanner.

The recent addition of a kinect allows me to use the SICK laser for longer range nav planning and mapping, and the kinect for various 3d stuff.

So, my idea is to convert the kinect point cloud to laser scans at various heights (via PCL). Say, for instance, my SICK laser is 8" above the ground. That will not tell me about a curb or some other obstacle that lies just under 8". So, if I were to map the appropriate Z range of the kinect's point cloud data onto a new laser scan topic, I could then use it for navigation and write some code to decide what to do at that Z level. A simple example would be to determine the maximum obstacle height my robot can negotiate based on its wheel size, and just slow it down to the appropriate speed. It could also check for height clearance while driving around by creating a laser scan that corresponds to the highest Z value of the robot.
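As a rough rospy sketch of one such Z slice (assuming the cloud is already in a base-aligned frame with x forward and z up; the band limits, field of view, and topic names below are placeholders):

    #!/usr/bin/env python
    # Sketch: keep only kinect points inside one Z band (e.g. curb height) and
    # publish the nearest hit per angular bin as a fake LaserScan.
    import math
    import rospy
    from sensor_msgs.msg import PointCloud2, LaserScan
    from sensor_msgs import point_cloud2

    Z_MIN, Z_MAX = 0.02, 0.20          # height band of interest, in metres
    ANGLE_MIN, ANGLE_MAX = -0.5, 0.5   # rough horizontal field of view, radians
    N_BINS = 128

    def cloud_cb(msg):
        scan = LaserScan()
        scan.header = msg.header       # assumes this frame is level with the base
        scan.angle_min, scan.angle_max = ANGLE_MIN, ANGLE_MAX
        scan.angle_increment = (ANGLE_MAX - ANGLE_MIN) / N_BINS
        scan.range_min, scan.range_max = 0.45, 5.0
        ranges = [float('inf')] * N_BINS
        for x, y, z in point_cloud2.read_points(
                msg, field_names=('x', 'y', 'z'), skip_nans=True):
            if not (Z_MIN <= z <= Z_MAX):
                continue               # outside the slice we care about
            angle = math.atan2(y, x)
            if not (ANGLE_MIN <= angle < ANGLE_MAX):
                continue
            i = int((angle - ANGLE_MIN) / scan.angle_increment)
            r = math.hypot(x, y)
            if r < ranges[i]:
                ranges[i] = r          # nearest obstacle wins in each bin
        scan.ranges = ranges
        pub.publish(scan)

    if __name__ == '__main__':
        rospy.init_node('kinect_slice_to_scan')
        pub = rospy.Publisher('scan_curb_level', LaserScan, queue_size=1)
        rospy.Subscriber('camera/depth/points', PointCloud2, cloud_cb)
        rospy.spin()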

I see this being useful for quad copters... it is still not full 3D navigation, but it could allow for some decent object avoidance in the Z dimension by writing some code to determine which Z height has the most clear path.

My question is, is anyone using laser scans at various heights to evaluate navigation at different Z levels? Is the kinect2laser a viable solution, or is there a better way to do this?

I see this as a possible workaround for this problem.

2011-02-24 14:08:56 -0500 received badge  Great Question (source)
2011-02-22 19:16:31 -0500 received badge  Good Question (source)
2011-02-21 15:13:32 -0500 marked best answer Voice commands / speech to and from robot?

It's quite experimental and definitely not documented, but we have been using PocketSphinx to do speech recognition with ROS. See the cwru_voice package for source.

If you run the voice.launch file (after changing some of the hardcoded model paths in whichever node it launches), you should be able to get certain keywords out on the "chatter" topic. As an example, voice.launch should recognize a command like "Open the door" or "Go to the hallway" and output the corresponding keyword on the chatter topic. If you do try it out and have problems, let me know, as you would be the first outside our lab to try it that I know of.
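For illustration, here is a minimal rospy sketch of consuming those keywords once they arrive and turning them into simple velocity commands. It assumes the topic carries std_msgs/String; the keyword strings and speeds are made up, not what cwru_voice actually emits:

    #!/usr/bin/env python
    # Sketch of consuming the recognizer's keywords: listen on "chatter"
    # (assumed to be std_msgs/String) and map a few words to velocity commands.
    # The keyword strings and speeds are illustrative only.
    import rospy
    from std_msgs.msg import String
    from geometry_msgs.msg import Twist

    def keyword_cb(msg):
        word = msg.data.strip().lower()
        cmd = Twist()
        if 'forward' in word:
            cmd.linear.x = 0.2
        elif 'left' in word:
            cmd.angular.z = 0.5
        elif 'right' in word:
            cmd.angular.z = -0.5
        elif 'stop' in word:
            pass                        # an all-zero Twist stops the base
        else:
            rospy.loginfo('unrecognized keyword: %s', word)
            return
        pub.publish(cmd)

    if __name__ == '__main__':
        rospy.init_node('voice_teleop_sketch')
        pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('chatter', String, keyword_cb)
        rospy.spin()

You can test the mapping without the recognizer running by publishing by hand, e.g. rostopic pub /chatter std_msgs/String "data: forward".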

Stanford also has a speech package in their repository. EDIT: Thanks to @fergs for finding the Stanford package.

UPDATE: Make sure to take a look at Scott's answer below for a nice tutorial and demo code for getting speech recognition up and running for your own uses.