
Laurie's profile - activity

2013-04-21 21:46:30 -0500 received badge  Famous Question (source)
2013-04-21 21:46:30 -0500 received badge  Notable Question (source)
2013-04-21 21:46:30 -0500 received badge  Popular Question (source)
2012-11-02 10:05:01 -0500 received badge  Teacher (source)
2012-11-02 10:05:01 -0500 received badge  Self-Learner (source)
2012-09-21 10:27:07 -0500 received badge  Famous Question (source)
2012-08-16 08:05:48 -0500 received badge  Famous Question (source)
2012-08-16 08:05:48 -0500 received badge  Notable Question (source)
2012-08-16 08:05:48 -0500 received badge  Popular Question (source)
2012-06-21 04:56:58 -0500 received badge  Great Question (source)
2012-06-19 10:35:45 -0500 received badge  Notable Question (source)
2012-04-24 22:05:03 -0500 commented answer Is there a user_detection package for Kinect on a mobile robot

Thank you for the link, but here the human is standing still. I need a tracker for when both the human and robot are moving.

2012-04-22 23:05:43 -0500 asked a question Is there a user_detection package for Kinect on a mobile robot

Hi there,

I was wondering if there is any ROS package available for user detection on a moving platform. The openni_tracker does not work properly when the robot itself is moving (it produces a lot of false positives).

I do not need to find the skeleton of the user, only his position.

Thank you

2012-02-26 19:24:34 -0500 answered a question Getting static_map in amcl does not work

It was a namespacing issue with the service. My amcl node ran in a different namespace ("/turtlebot") than the map_server ("/"). It works if both nodes run in the same namespace.
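For reference, a minimal launch file sketch of that fix: both nodes are pushed into the same "/turtlebot" namespace with a group tag, so amcl's relative "static_map" service name resolves to the map_server that actually provides it. The package name and map path here (my_maps, my_map.yaml) are placeholders, not from the original setup.

```xml
<launch>
  <!-- Run both nodes under the same namespace so amcl's
       relative "static_map" service call resolves to
       /turtlebot/static_map, which map_server now provides. -->
  <group ns="turtlebot">
    <node pkg="map_server" type="map_server" name="map_server"
          args="$(find my_maps)/maps/my_map.yaml" />
    <node pkg="amcl" type="amcl" name="amcl" />
  </group>
</launch>
```

With this layout the service appears as /turtlebot/static_map; running rosservice list and looking for static_map is a quick way to confirm which namespace the service actually ended up in.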

2012-02-24 01:06:54 -0500 asked a question Getting static_map in amcl does not work

Hi there,

Yesterday I installed the latest updates (including the gazebo and navigation stacks). Since then, amcl no longer works. I traced the problem back to the service call amcl makes to the map_server to get the map.

This is the code (it is the standard code): amcl_node.cpp, line 495

The call is made (amcl prints "Request for map failed; trying again..."), but it never reaches the map_server service. When I call the service manually from a terminal ( rosservice call static_map ), it executes correctly and a response is returned.

I'm using Ubuntu 11.10 and ros-electric.

Thank you

2012-02-08 14:57:51 -0500 received badge  Popular Question (source)
2011-12-21 22:58:54 -0500 received badge  Supporter (source)
2011-12-21 22:58:50 -0500 commented answer Problem simulating kinect in gazebo
I am using a nVidia card now, and the depth data is correctly simulated. Thanks.
2011-12-13 19:52:29 -0500 received badge  Scholar (source)
2011-12-13 19:52:29 -0500 marked best answer Problem simulating kinect in gazebo

@Laurie, In general you want a discrete graphics card to run the simulations. There's a list of user-tested cards at http://www.ros.org/wiki/simulator_gazebo/SystemRequirements

2011-12-13 19:38:12 -0500 received badge  Editor (source)
2011-12-08 18:32:46 -0500 commented answer Problem simulating kinect in gazebo
Using lspci I get "Intel Corporation 4 Series Chipset Integrated Graphics Controller"
2011-12-08 00:56:56 -0500 answered a question Simulated kinect

I have posted the same question. See here.

2011-12-04 14:33:46 -0500 received badge  Good Question (source)
2011-12-01 17:37:09 -0500 commented answer Problem simulating kinect in gazebo
Do you mean <shadowTechnique>none</shadowTechnique> and <shadows>false</shadows> within the <rendering:ogre> tag? If so, this solution does not work.
2011-11-29 19:16:10 -0500 received badge  Nice Question (source)
2011-11-29 18:30:53 -0500 received badge  Student (source)
2011-11-29 17:50:04 -0500 asked a question Problem simulating kinect in gazebo

Hi,

I am trying to simulate the turtlebot in gazebo. I am using the turtlebot_gazebo stack for that (using robot.launch).

However, the depth data coming from the (simulated) kinect is all wrong. Every point in the point cloud published on the /camera/depth/points topic has a depth of 1.0. This happens when the light source in the .world file is of the directional kind. When I use a point or spot light source, the bottom half of the point cloud has a depth of 1.0 and the top half is either NaN or 1.0 (depending on whether something is within range or not). This is preventing me from using any SLAM algorithm in gazebo. The camera's RGB image comes through correctly.

I am using Ubuntu 11.04 and electric.

Thanks a lot.