2016-07-19 16:48:36 -0500 | received badge | ● Good Answer (source) |
2015-08-29 01:37:05 -0500 | received badge | ● Good Answer (source) |
2015-08-26 00:52:52 -0500 | received badge | ● Guru (source) |
2015-08-26 00:52:52 -0500 | received badge | ● Great Answer (source) |
2015-04-02 14:10:57 -0500 | received badge | ● Nice Answer (source) |
2015-04-02 09:48:41 -0500 | answered a question | How to interpret and use the values of the message type 'sensor_msgs/LaserScan'? In C++, variable-length ROS message types are implemented as std::vector (source), so you should be able to index into the ranges field directly or iterate over it like any other vector. Disclaimer: I haven't actually tried this myself. |
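The original code snippets in this answer were lost in the site migration. A minimal sketch of what the answer describes, using a plain `std::vector<float>` in place of `scan.ranges` (in roscpp, `sensor_msgs::LaserScan::ranges` is exactly this type):

```cpp
#include <vector>

// sensor_msgs::LaserScan::ranges is a std::vector<float> in roscpp,
// so it supports indexing, iteration, and size() like any vector.
float rangeAt(const std::vector<float>& ranges, std::size_t i)
{
    return ranges[i];  // e.g. scan->ranges[i] inside a callback
}

float minRange(const std::vector<float>& ranges)
{
    float best = ranges.front();
    for (float r : ranges)  // range-based iteration works too
        if (r < best)
            best = r;
    return best;
}
```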
2015-03-27 13:41:38 -0500 | answered a question | Counting amount of people in HOG/HAAR detection Hi Phelipe! To count the number of detections from the HOG detector, take the size of the vector of rectangles it returns; the same applies to the HAAR detector. An alternative for the Kinect which I would recommend trying out is the ground-based RGBD detection in the PCL by Matteo Munaro |
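The code snippets in this answer were also lost in migration. In OpenCV, both `HOGDescriptor::detectMultiScale` and `CascadeClassifier::detectMultiScale` fill a `std::vector<cv::Rect>` with one entry per detection, so the count is simply the vector's size. A stdlib-only sketch of that idea (`Rect` here is a stand-in for `cv::Rect`):

```cpp
#include <cstddef>
#include <vector>

// Stand-in for cv::Rect; detectMultiScale fills a vector of these,
// one element per detection.
struct Rect { int x, y, width, height; };

// The people count is just the number of returned rectangles.
std::size_t countDetections(const std::vector<Rect>& found)
{
    return found.size();
}
```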
2015-03-10 21:59:53 -0500 | answered a question | Gmapping is adding extra points to my map that aren't from the laser scan. How are you filtering out the 0.0m measurements? Maybe you or gmapping is setting those points to scan.range_max + 1 as in the laser scan range filter? Can you see any other conspicuous ranges, such as NaNs, infs, or -infs in the laser scan? |
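As a concrete illustration of the filtering this answer asks about, here is a sketch of replacing unusable readings (0.0 m, NaN, ±inf) with `range_max + 1`, the "no return" convention mentioned above. The function name and exact behavior are mine, modeled on the laser range filter referenced in the answer:

```cpp
#include <cmath>
#include <vector>

// Replace 0.0 m, NaN, and +/-inf readings with range_max + 1 so that
// downstream consumers (e.g. gmapping) treat them as "no return".
std::vector<float> maskInvalidRanges(std::vector<float> ranges, float range_max)
{
    for (float& r : ranges)
        if (r == 0.0f || !std::isfinite(r))
            r = range_max + 1.0f;
    return ranges;
}
```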
2015-03-09 11:20:18 -0500 | commented answer | Person/people detection package for 2D-based camera It can detect both. Give it a shot but don't expect very robust performance from it unless you combine it with a tracking-by-detection method (e.g., a Kalman filter with outlier rejection), geometric constraints (e.g., filtering based on size) or background subtraction. |
2015-02-16 17:58:33 -0500 | commented question | bfl indigo need to navigation-indigo-devel package Try: |
2015-02-05 01:43:01 -0500 | received badge | ● Enlightened (source) |
2015-02-05 01:43:01 -0500 | received badge | ● Good Answer (source) |
2015-01-02 15:41:37 -0500 | commented answer | Why does SICK LMS200 not work with angle = 180 and resolution = 0.25 Yup, see Table 5-1 on page 11 of the manual. Its max scanning angle is limited to 100 degrees when using an angular resolution of 0.25 degrees. |
2014-12-04 06:14:14 -0500 | received badge | ● Nice Answer (source) |
2014-11-24 11:57:54 -0500 | commented question | how to use laser scan matcher? What happens when you try the commands here? |
2014-11-19 09:07:01 -0500 | commented question | Errors at catkin_make for laser_scan_matcher Awesome! I'll just reply with an answer so we can mark as solved. |
2014-11-19 09:06:27 -0500 | answered a question | Errors at catkin_make for laser_scan_matcher As per the comments, the demo seems to work on Indigo with these commands: |
2014-11-18 20:05:08 -0500 | commented question | Errors at catkin_make for laser_scan_matcher I just got the demo working on my machine with:
Is this what you did? |
2014-07-23 17:44:33 -0500 | commented answer | Changing parameters for SICK laser on Pioneer robots I don't believe it supports that resolution/scan angle combination. See page 11: http://sicktoolbox.sourceforge.net/docs/sick-lms-technical-description.pdf |
2014-07-17 09:06:24 -0500 | commented question | Linking errors after updating to Hydro I fixed my "undefined reference to tf::Transformer..." by including "tf" in the "find_package" part of the CMakeLists. |
2014-07-15 19:21:23 -0500 | received badge | ● Organizer (source) |
2014-07-15 16:30:35 -0500 | answered a question | transformLaserScanToPointCloud usage problem. It appears that in the transformLaserScanToPointCloud function a lookupTransform is called at time: scan_in.header.stamp + ros::Duration ().fromSec (scan_in.ranges.size () * scan_in.time_increment) My quick-fix solution was to call waitForTransform and include that added time before calling transformLaserScanToPointCloud. |
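A sketch of the quick fix the answer describes, waiting for the transform at the stamp of the scan's last ray before projecting. This requires roscpp, tf, and laser_geometry and is not compiled here; the target frame "base_link" and the 1-second timeout are assumptions for illustration:

```cpp
// Sketch only (requires roscpp, tf, laser_geometry); not a tested build.
#include <tf/transform_listener.h>
#include <laser_geometry/laser_geometry.h>
#include <sensor_msgs/PointCloud.h>

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan_in,
                  tf::TransformListener& listener,
                  laser_geometry::LaserProjection& projector)
{
    // transformLaserScanToPointCloud looks up the transform at the
    // time of the *last* ray, so wait until that stamp is available.
    ros::Time end_time = scan_in->header.stamp +
        ros::Duration().fromSec(scan_in->ranges.size() * scan_in->time_increment);

    if (!listener.waitForTransform("base_link", scan_in->header.frame_id,
                                   end_time, ros::Duration(1.0)))
        return;  // transform never became available

    sensor_msgs::PointCloud cloud;
    projector.transformLaserScanToPointCloud("base_link", *scan_in,
                                             cloud, listener);
}
```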
2014-06-03 09:21:06 -0500 | commented question | Is Psi Pose necessary for openni tracker to track? And I think this is the paper's ROS webpage: http://wiki.ros.org/cyphy_people_mapping |
2014-06-03 09:19:30 -0500 | commented question | Is Psi Pose necessary for openni tracker to track? "Although early versions of the OpenNI skeleton tracker required users to surrender to the kinect in order for tracking to begin, recent versions allow the saving and loading of user calibration files." |
2014-06-03 09:19:25 -0500 | commented question | Is Psi Pose necessary for openni tracker to track? Slight lead that may or may not help you out: in the paper "STALKERBOT: Learning to Navigate Dynamic Human Environments by Following People" they say: |
2014-05-30 06:57:05 -0500 | edited answer | Person/people detection package for 2D-based camera You might want to check out the HOG detector in OpenCV, which tries to detect standing pedestrians, although you're going to get a lot of false positives and missed detections from it. Also, they seem to only have documentation on the website for the GPU implementation (i.e. using CUDA), so you'll have to look elsewhere for examples of the CPU implementation. Another possibility to check out is the OpenCV face detector. Some ROS packages that use this are face_detector (uses stereo images though) and pi_face_tracker (for Groovy though). In general, detecting people from a moving robot with a monocular camera is still a very difficult and open problem. If your camera is stationary then the problem can be simplified through background subtraction, but I don't have any references for that. For help with using OpenCV in ROS, check out the vision_opencv package. Edit: I uploaded my implementation of both of these in a ROS package on Github: https://github.com/angusleigh/hog_haa... |
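Since the answer notes that only the GPU variant is documented, a sketch of the CPU HOG people detector in OpenCV may help. This requires OpenCV and is not compiled here; detection parameters are left at their defaults:

```cpp
// Sketch only (requires OpenCV); the CPU-side HOG pedestrian detector.
#include <opencv2/objdetect.hpp>
#include <vector>

std::vector<cv::Rect> detectPeople(const cv::Mat& bgr)
{
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> found;
    hog.detectMultiScale(bgr, found);  // one rect per candidate person
    return found;
}
```

Expect many false positives and misses, as the answer says; combining this with tracking or geometric filtering helps.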
2014-05-10 13:03:25 -0500 | received badge | ● Good Answer (source) |
2014-05-07 03:46:34 -0500 | received badge | ● Nice Answer (source) |
2014-05-05 06:33:25 -0500 | answered a question | Segmentation fault using sensor_msgs/LaserScan Message Your main "while" loop will start running immediately after launching the node, but the values for "r_laser" will not be set until the callback is made, so when you try to access the first element, you get a segfault. Maybe try guarding the access in your main loop until the vector is non-empty. |
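The replacement loop body in this answer was lost in migration. The essential fix is to skip the access until the callback has filled the vector; a stdlib sketch of such a guard (`r_laser` is the vector filled by the subscriber callback in the question; a real ROS loop would also call `ros::spinOnce()` and sleep):

```cpp
#include <vector>

// True only once the callback has populated the scan data, so the
// main loop never indexes into an empty vector.
bool scanReady(const std::vector<float>& r_laser)
{
    return !r_laser.empty();
}

// Guarded access: return a fallback value until data arrives.
float safeFirstRange(const std::vector<float>& r_laser, float fallback)
{
    return scanReady(r_laser) ? r_laser.front() : fallback;
}
```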
2014-04-25 02:22:18 -0500 | answered a question | Why does rviz seg faults when laserscan is added? This is a common bug with Rviz 1.9.34 in Groovy for non-Nvidia video cards. If you do a sudo apt-get update, does it update to Rviz 1.9.35 for you? I would recommend that, switching to Hydro, or compiling one of the other versions of Rviz from source (maybe 1.9.33 or 1.9.35). |
2014-04-24 06:24:11 -0500 | commented question | Why does rviz seg faults when laserscan is added? Which ROS version? Which Rviz version (you can check by $rosrun rviz rviz)? |
2014-04-20 14:15:55 -0500 | marked best answer | Training data for leg_detector? Looking at the outputs from the leg_detector package, I'm wondering if it could benefit from a more diverse set of training examples. Notably, the detection confidence falls quickly as people move away from the laser and it is fooled easily by round objects like chair legs. Does anyone know if the original training data is available anywhere? Maybe also the labelling/annotation method that was used? Relevant links: |
2014-04-20 14:09:12 -0500 | marked best answer | ros-hydro-desktop-full installs gazebo-current instead of gazebo According to the gazebosim wiki, version 1.9.x of Gazebo should be installed when ROS Hydro is installed, but for reasons unknown I'm getting gazebo-current (i.e. version 2.x.y) installed. Any idea why? I'm entering: and the output is: I've also had gazebo-current installed earlier but have since removed it. |
2014-04-20 12:52:44 -0500 | marked best answer | How to insert breakpoints and step through a Gazebo plugin? Hello, Does anyone know if there's any way to insert breakpoints and step through a Gazebo plugin? This would be very useful to help with debugging logical errors. I've tried inserting breakpoints in the eclipse editor, compiling and running the launch file but it doesn't seem to work. As such, I've just resorted to printf([var])'s for now. Thanks very much! Angus |
2014-04-20 12:47:58 -0500 | marked best answer | Fuerte - no transformations published when using spawn_model Hello fellow ROSers, I've recently tried upgrading from Electric to Fuerte, but I'm getting errors relating to the publishing of transforms after I spawn a model. I started out with the simple spawning of a block in this tutorial, and while the block spawns in Gazebo, no transforms are published. Here's the output I receive: In first terminal: In second terminal: As I still have Electric installed, I have tried running the same code but with my ROS paths pointing to the Electric installation, and everything ran smoothly with no errors or warnings, and all transforms were published. Any ideas? Thanks for your help!
EDIT/UPDATE: Oops, as hsu reminded me - there aren't supposed to be any transforms published when simply generating Block.urdf ... (more) |