2015-12-22 04:21:11 -0500 | received badge | ● Nice Answer (source) |
2014-08-08 20:15:49 -0500 | received badge | ● Famous Question (source) |
2014-05-20 13:35:25 -0500 | answered a question | Multiple turtlebots "odom" topic problem Hi... I had the same problem, and I found a solution (at least for my case). I will try to explain what I did, in case someone is interested in trying it out (this will be a long answer!). First, I will show you the launch files that I use, and then some of the nodes I wrote in order to make it work. The launch files are: This is the auxiliary launch file (called above, inside a namespace). And this launch file is for running the navigation stack for both robots: You can see in these launch files that I use a node named "odom_sim". As some people have noticed, namespaces in the kobuki gazebo plugin don't work for many topics (odometry being one of them), so I programmed this node (odom_sim) in order to publish ... (more) |
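The "odom_sim" relay described above can be sketched roughly as follows. The topic names, the `~robot_ns` parameter, and the frame-prefixing convention are assumptions for illustration, not taken from the original launch files:

```python
#!/usr/bin/env python
"""Sketch of an "odom_sim"-style relay: republish odometry under a
robot namespace, rewriting frame IDs so TF stays consistent when the
kobuki gazebo plugin ignores the namespace."""

def namespaced_frame(ns, frame_id):
    """Prefix a TF frame with a robot namespace, e.g. ('robot1', 'odom')
    -> 'robot1/odom'. Leaves already-prefixed frames untouched."""
    frame_id = frame_id.lstrip('/')
    if frame_id.startswith(ns + '/'):
        return frame_id
    return ns + '/' + frame_id

def main():
    # ROS wiring is kept inside main() so the helper above can be
    # imported and tested without a ROS installation.
    import rospy
    from nav_msgs.msg import Odometry

    rospy.init_node('odom_sim')
    ns = rospy.get_param('~robot_ns', 'robot1')  # assumed parameter name
    pub = rospy.Publisher(ns + '/odom', Odometry, queue_size=10)

    def relay(msg):
        # Rewrite the frames so each robot has its own odom -> base chain.
        msg.header.frame_id = namespaced_frame(ns, msg.header.frame_id)
        msg.child_frame_id = namespaced_frame(ns, msg.child_frame_id)
        pub.publish(msg)

    rospy.Subscriber('/odom', Odometry, relay)
    rospy.spin()

if __name__ == '__main__':
    main()
```

One such relay would run per robot namespace, matching the per-robot launch files mentioned in the answer.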
2014-04-23 15:22:48 -0500 | received badge | ● Notable Question (source) |
2014-04-23 12:56:41 -0500 | answered a question | Face Recognition with kinect Hello. 1) In order to do that, first check the "openni" tutorial. It demonstrates how to open a Kinect in ROS, introduces ROS tools for visualization and configuration, etc.: http://wiki.ros.org/openni_launch/Tutorials/QuickStart 2) Then, also check how to use a bridge between the Kinect image topic and OpenCV. This tutorial describes how to interface ROS and OpenCV by converting ROS images into OpenCV images, and vice versa, using cv_bridge. Included is a sample node that can be used as a template for your own node: http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython 3) Read the OpenCV documentation on how to recognize faces: http://docs.opencv.org/trunk/modules/contrib/doc/facerec/index.html http://docs.opencv.org/trunk/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html 4) Try some examples that you can find on the web, like this one (I haven't tried it): https://code.google.com/p/ros-by-example/source/browse/trunk/rbx_vol_1/rbx1_vision/nodes/face_detector.py?r=116 I hope these links can help you, good luck! |
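Steps 1-3 above can be combined into a single minimal node: subscribe to the Kinect RGB topic, convert with cv_bridge, and run an OpenCV Haar face detector. The topic name and cascade-file lookup below are assumptions for illustration:

```python
#!/usr/bin/env python
"""Sketch of a Kinect face-detection node: openni image topic ->
cv_bridge -> OpenCV Haar cascade. Topic and cascade names are assumed."""

def largest_face(faces):
    """Pick the biggest (x, y, w, h) detection, or None if empty."""
    if not len(faces):
        return None
    return max(faces, key=lambda r: r[2] * r[3])

def main():
    # ROS/OpenCV imports are kept inside main() so the helper above is
    # importable without those packages installed.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    def on_image(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        face = largest_face(faces)
        if face is not None:
            rospy.loginfo('face at %s', face)

    rospy.init_node('face_detector_sketch')
    # Assumed openni RGB topic; check `rostopic list` on your setup.
    rospy.Subscriber('/camera/rgb/image_color', Image, on_image)
    rospy.spin()

if __name__ == '__main__':
    main()
```

Detection only finds faces; recognizing *whose* face it is would then follow the OpenCV facerec tutorials linked above.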
2014-04-11 07:07:17 -0500 | received badge | ● Scholar (source) |
2014-04-11 06:05:36 -0500 | received badge | ● Popular Question (source) |
2014-04-10 23:01:51 -0500 | received badge | ● Teacher (source) |
2014-04-10 14:52:58 -0500 | answered a question | Turtlebot 3dSensor.Launch cannot find kinect as if it is not plugged in? Hi. I had a similar problem after updating Ubuntu (13.04) on my PC. I'm not sure what happened, but I had to reinstall the Kinect driver: I downloaded this: https://github.com/avin2/SensorKinect And in the "Bin" folder, I extracted "SensorKinect093-Bin-Linux-x64-v5.1.2.1" and installed it (by running install.sh). I hope that works for you as well! |
2014-04-10 14:32:04 -0500 | commented answer | Gazebo simulation time speed up. mmm... I will try them... Thank you! |
2014-04-10 14:26:51 -0500 | received badge | ● Editor (source) |
2014-04-10 14:20:48 -0500 | answered a question | using move_base recovery for initial pose estimation Hi. What robot are you using? In my case, I'm working with Turtlebot II robots. I avoid using the rviz button in my demos by running the auto-docking sequence first. The Turtlebot has a docking station (battery recharger) with infra-red emitters, and the robot base has IR receivers ( http://wiki.ros.org/kobuki/Tutorials/... ). You can make the robot search for and find this base automatically. I take advantage of this by using the dock position as a reference (placing it in a known spot on the map). When the robot reaches it, it publishes a message to the topic 'initialpose' (PoseWithCovarianceStamped) with the reference position, and AMCL can then work normally. Maybe this is not precisely what you are looking to do, but what I recommend is that you find a way for the robot to reach a known position first, looking for some kind of landmark, and use that reference to give the AMCL algorithm the initial position. I think that without some help (an initial position), AMCL will probably not work. |
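Publishing the known dock position to 'initialpose' once the robot has docked could be sketched as below. The dock coordinates are placeholders, and the covariance values roughly match rviz's "2D Pose Estimate" defaults (both assumptions, not from the answer):

```python
#!/usr/bin/env python
"""Sketch: seed AMCL with a known dock pose via the 'initialpose'
topic (geometry_msgs/PoseWithCovarianceStamped). Dock coordinates
below are placeholders."""

def initial_pose_covariance(xy_var=0.25, yaw_var=0.0685):
    """6x6 row-major covariance with x, y, and yaw variances set;
    all other terms are zero."""
    cov = [0.0] * 36
    cov[0] = xy_var    # var(x)
    cov[7] = xy_var    # var(y)
    cov[35] = yaw_var  # var(yaw)
    return cov

def main():
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    rospy.init_node('dock_pose_publisher')
    # Latched so AMCL gets the pose even if it subscribes late.
    pub = rospy.Publisher('initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)

    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = 'map'
    msg.header.stamp = rospy.Time.now()
    # Placeholder dock pose: 1 m along x, facing +x (identity rotation).
    msg.pose.pose.position.x = 1.0
    msg.pose.pose.orientation.w = 1.0
    msg.pose.covariance = initial_pose_covariance()
    pub.publish(msg)
    rospy.sleep(1.0)  # give subscribers time to receive the latched msg

if __name__ == '__main__':
    main()
```

This is what the rviz button does under the hood, so once this node runs after docking, the rviz step becomes unnecessary.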
2014-04-10 11:09:57 -0500 | received badge | ● Supporter (source) |
2014-04-10 11:04:09 -0500 | answered a question | Problem with multiple navigation on Gazebo Hi everyone... I had the same problem, and I found a solution (at least for my case). I will try to explain what I did, in case someone is interested in trying it out (this will be a long answer!). First, I will show you the launch files that I use, and then some of the nodes I wrote in order to make it work. The launch files are: This is the auxiliary launch file (called above, inside a namespace). And this launch file is for running the navigation stack for both robots: You can see in these launch files that I use a node named "odom_sim". As some people have noticed, namespaces in the kobuki gazebo plugin don't work for many topics (odometry being one of them), so I programmed this node (odom_sim) in order to ... (more) |
2014-04-10 10:35:40 -0500 | received badge | ● Student (source) |
2014-04-10 10:21:34 -0500 | asked a question | Gazebo simulation time speed up. Hi everyone. I have been trying to simulate two Turtlebot II robots on Gazebo. I wonder if it is possible to run simulations with time sped up several times. I'm not really concerned with visually observing the movements, but I want to run the same simulation several times and store different measurements / data for statistical analysis of the project I'm working on. Is this kind of high-speed simulation possible in Gazebo? |
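For reference, Gazebo's simulation speed is governed by the `<physics>` element of the world file; setting the real-time update rate to 0 lets the simulation run as fast as the CPU allows. A sketch, assuming an SDF `.world` file:

```xml
<physics type="ode">
  <!-- Simulated seconds per physics iteration. -->
  <max_step_size>0.001</max_step_size>
  <!-- 0 = unthrottled: run as fast as possible, faster than real time
       if the CPU can keep up. -->
  <real_time_update_rate>0</real_time_update_rate>
</physics>
```

Nodes should use simulated time (`/use_sim_time` set to true, which the turtlebot Gazebo launch files normally do) so logged data stays consistent at any speed-up.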
2014-02-27 09:39:56 -0500 | received badge | ● Enthusiast |
2014-02-18 10:54:40 -0500 | commented answer | Problem with multiple navigation on Gazebo Hello. I have the same problems, and I haven't found a solution either... Maybe we can help each other... |