2017-04-01 17:27:52 -0500 | received badge | ● Famous Question (source) |
2016-08-30 07:45:49 -0500 | received badge | ● Notable Question (source) |
2016-08-01 19:32:55 -0500 | received badge | ● Popular Question (source) |
2016-08-01 13:11:56 -0500 | commented question | Move base for autonomous driving link text These people are successfully using ROS for autonomous driving. Why do you think it's not real time? |
2016-08-01 05:33:09 -0500 | asked a question | Move base for autonomous driving Hi, this may be a very subjective question, but is it a good idea to use move_base for autonomous driving? I have used move_base with a robotic wheelchair that I developed, and it works great. I have also used it on a Husky and a TurtleBot, but we are not sure whether we should use it for a car, which runs at a much higher speed than these indoor robots. In the outdoor scenario we have to generate the trajectory and compute collisions significantly faster than in the indoor scenario. Any suggestions, comments, or experience using move_base here would be really helpful. Thanks. |
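The "significantly faster" concern above can be made concrete with a back-of-the-envelope lookahead calculation: the planner must see at least as far as the robot travels during one replan cycle plus its braking distance. This is a minimal illustrative sketch; all speeds, rates, and deceleration values below are assumed example numbers, not measurements from any of the robots mentioned.

```python
def required_lookahead(speed, replan_period, max_decel):
    """Minimum distance the local planner must cover ahead of the robot:
    distance traveled during one replan cycle plus braking distance."""
    return speed * replan_period + speed ** 2 / (2.0 * max_decel)

# Indoor robot (assumed): ~1 m/s, 5 Hz replanning, 1 m/s^2 braking
indoor = required_lookahead(1.0, 0.2, 1.0)    # -> 0.7 m
# Car (assumed): ~15 m/s (54 km/h), 5 Hz replanning, 4 m/s^2 braking
car = required_lookahead(15.0, 0.2, 4.0)      # -> 31.125 m
print(indoor, car)
```

Under these assumptions the car needs a costmap and collision-checking horizon roughly 40x larger than the indoor robot at the same replan rate, which is the crux of the question about reusing move_base unchanged.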
2016-04-10 19:53:14 -0500 | received badge | ● Famous Question (source) |
2015-10-26 13:33:55 -0500 | marked best answer | Problem with kinect.launch file in turtlebot_bringup Hi, I want to run gmapping on a TurtleBot, so I started with the basic TurtleBot tutorials. When I bring up the TurtleBot and run the kinect.launch file, the data published by the Kinect on /camera/depth/points is wrong, and when I try to view the depth image with image_view it is all black. But when I run the openni.launch file from the openni_launch package, the Kinect publishes a correct point cloud. Is there a problem with kinect.launch in turtlebot_bringup? I am using Fuerte on Ubuntu 12.04. This is my kinect.launch file:

    <launch>
      <node pkg="turtlebot_node" type="kinect_breaker_enabler.py" name="kinect_breaker_enabler"/>
      <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager" respawn="true">
        <remap from="cloud_in" to="/camera/depth/points"/>
        <remap from="cloud_out" to="cloud_throttled"/>
      </node>
      <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
        <remap from="cloud" to="/cloud_throttled"/>
      </node>
      <node pkg="nodelet" type="nodelet" name="kinect_laser_narrow" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
        <remap from="cloud" to="/cloud_throttled"/>
        <remap from="scan" to="/narrow_scan"/>
      </node>
    </launch>

akhil@ubuntu:~/fuerte_workspace/sandbox/turtlebot/turtlebot_bringup$ cat 2_kinect.launch

    <launch>
      <node pkg="turtlebot_node" type="kinect_breaker_enabler.py" name="kinect_breaker_enabler"/>
      <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>
      <node pkg="nodelet" type="nodelet" name="openni_launch" args="load openni_camera/OpenNINodelet openni_manager" respawn="true"/>
      <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager" respawn="true">
        <remap from="cloud_in" to="/camera/depth/points"/>
        <remap from="cloud_out" to="cloud_throttled"/>
      </node>
      <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
        <remap from="cloud" to="/cloud_throttled"/>
      </node>
      <node pkg="nodelet" type="nodelet" name="kinect_laser_narrow" args="load pointcloud_to_laserscan/CloudToScan openni_manager" respawn="true">
        <remap from="cloud" to="/cloud_throttled"/>
        <remap from="scan" to="/narrow_scan"/>
      </node>
    </launch>
2015-09-24 01:49:33 -0500 | received badge | ● Popular Question (source) |
2015-09-24 01:49:33 -0500 | received badge | ● Notable Question (source) |
2015-09-10 09:03:48 -0500 | received badge | ● Famous Question (source) |
2015-08-17 20:35:20 -0500 | received badge | ● Famous Question (source) |
2015-05-24 03:14:33 -0500 | received badge | ● Famous Question (source) |
2015-05-24 03:14:33 -0500 | received badge | ● Notable Question (source) |
2015-03-04 09:32:57 -0500 | received badge | ● Notable Question (source) |
2015-01-30 00:04:47 -0500 | received badge | ● Enthusiast |
2015-01-29 03:22:09 -0500 | asked a question | how to use face_detector with kinect I am trying to use the face_detector package to detect humans in a scene. When I run the launch file face_detect.rgbd.launch, it does not detect faces, and nothing is published on the topic face_detector/faces_cloud. Any help here would be great. |
2014-12-22 18:10:58 -0500 | received badge | ● Popular Question (source) |
2014-12-20 12:19:43 -0500 | commented question | tf cannot find boost thread library Finally solved it by changing the Boost include and library paths in the CMakeCache.txt file so that they point to the local Boost installation. |
2014-12-20 11:43:32 -0500 | asked a question | tf cannot find boost thread library I am trying to build ROS Hydro from source on Ubuntu 12.04. It requires Boost 1.46.1, but the version of Boost installed on my machine is 1.48, so I downloaded Boost 1.46.1 and installed it in a local folder. I also added a skip-keys flag to rosdep to stop it from installing Boost (because that would remove the newer version). After installing these dependencies I ran ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release as mentioned here: http://wiki.ros.org/hydro/Installatio... This started building the plain CMake and catkin packages but gave an error while building tf:
CMakeFiles/tf_monitor.dir/src/tf_monitor.cpp.o: In function ... I specified the Boost library path using the CMAKE_LIBRARY_PATH variable, and CMake is able to find the local Boost installation this way. I also tried changing the CMakeCache.txt file in the tf package to specify the location of libboost_thread.so, but still no luck. Can someone please help? |
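For this kind of "two Boost versions on one machine" problem, CMake's FindBoost module can be steered at the local copy via BOOST_ROOT rather than by editing CMakeCache.txt per package. A command sketch, assuming (hypothetically) that Boost 1.46.1 was installed under ~/boost_1_46_1:

```shell
# Point FindBoost at the local Boost and keep the linker finding its .so files
export BOOST_ROOT=$HOME/boost_1_46_1
export LD_LIBRARY_PATH=$BOOST_ROOT/lib:$LD_LIBRARY_PATH

./src/catkin/bin/catkin_make_isolated --install \
  -DCMAKE_BUILD_TYPE=Release \
  -DBOOST_ROOT=$BOOST_ROOT \
  -DBoost_NO_SYSTEM_PATHS=ON   # stop CMake from picking up the system Boost 1.48
```

BOOST_ROOT and Boost_NO_SYSTEM_PATHS are standard FindBoost hints; whether this resolves the tf link error depends on the local Boost having been built with the thread library enabled.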
2014-12-05 13:38:41 -0500 | received badge | ● Popular Question (source) |
2014-12-01 17:35:06 -0500 | commented answer | problem building ar_track_alvar from source Thanks for your answer. Yes, I recently updated to Hydro and was using the Groovy branch! |
2014-12-01 16:52:50 -0500 | asked a question | problem building ar_track_alvar from source I am trying to build ar_track_alvar from source, which depends on pcl_ros. I recently updated all my packages, but this package seems to depend on a previous version of PCL. How can I build it without going back to the previous version of PCL? The error is: (more) |
2014-06-25 04:13:31 -0500 | received badge | ● Notable Question (source) |
2014-06-24 08:28:52 -0500 | received badge | ● Popular Question (source) |
2014-06-24 05:35:23 -0500 | received badge | ● Commentator |
2014-06-24 05:35:23 -0500 | commented answer | No cmake config file in gaussian process package Hey, I migrated the gaussian_process package, but I am not able to include header files from it in my prediction package. I tried using include_directories, but it's not working; I had to give the full path of the header file to include it. |
2014-06-23 13:54:47 -0500 | commented answer | No cmake config file in gaussian process package Thanks for your reply, and yes, I am using catkin. I will try to port gaussian_process to catkin. |
2014-06-23 02:12:30 -0500 | received badge | ● Supporter (source) |
2014-06-22 14:52:19 -0500 | asked a question | No cmake config file in gaussian process package Hi, I want to use Gaussian processes for obstacle prediction, but I am not able to integrate the gaussian_process package provided by freiburg_tools with my prediction package, as this package doesn't have a CMake config file. Is there any way around this? |
2014-05-03 06:41:33 -0500 | asked a question | I want to test the local planner for a wheelchair that I have been working on, but the ROS navigation stack looks quite complicated. Can anyone suggest how I should break it down to understand it better? I am writing a local planner that takes the social norms of navigation into consideration, and I want to integrate it with the ROS navigation stack. |
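In the navigation stack, local planners are loaded as nav_core::BaseLocalPlanner C++ plugins, and trajectory-rollout/DWA planners score candidate trajectories with a weighted cost function. The "social norms" idea above usually amounts to adding one more cost term. This is a language-agnostic sketch of that scoring idea only; the function names, weights, and the 1.2 m personal-space radius are all hypothetical illustration values.

```python
import math

def social_cost(candidate_xy, people, personal_space=1.2):
    """Penalty that grows as the candidate pose intrudes on a
    person's personal space (a common social-navigation heuristic)."""
    cost = 0.0
    for px, py in people:
        d = math.hypot(candidate_xy[0] - px, candidate_xy[1] - py)
        if d < personal_space:
            cost += (personal_space - d) / personal_space
    return cost

def score_candidate(candidate_xy, goal_xy, people, w_goal=1.0, w_social=5.0):
    """Lower is better: distance-to-goal plus a weighted social penalty,
    in the spirit of DWA/trajectory-rollout trajectory scoring."""
    d_goal = math.hypot(goal_xy[0] - candidate_xy[0], goal_xy[1] - candidate_xy[1])
    return w_goal * d_goal + w_social * social_cost(candidate_xy, people)

# A candidate that cuts close to a person scores worse than a slightly
# longer detour that keeps clear of them.
people = [(1.0, 0.0)]
close = score_candidate((1.0, 0.5), (2.0, 0.0), people)
clear = score_candidate((1.0, 1.5), (2.0, 0.0), people)
print(close > clear)  # True
```

The same structure maps onto the real plugin: generate candidate velocities, forward-simulate each, and pick the minimum-cost trajectory, with the social term as one extra weighted summand.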
2014-04-20 06:50:59 -0500 | marked best answer | PointCloud2 data in turtlebot simulation Hello, I am running the TurtleBot simulation, and when I use RViz to visualize the PointCloud2 data it shows an incorrect point cloud. How can I fix this? Any tips would be appreciated, as I am new to ROS. In the world there is no object just in front of the robot, but RViz shows some points there, and when I drive the robot this "patch" of point cloud moves along with it. Because of this I am not able to build a correct OctoMap, since it uses the PointCloud2 data. The point cloud is not forming correctly: even points on the ground show up as walls. My situation is shown in the attachments, with both the world view and the RViz view of the point clouds. The problem is not with the frame of reference but with the point cloud simulated by Gazebo; it looks like a bug. Any suggestion to overcome this is appreciated! |
2014-01-28 17:30:40 -0500 | marked best answer | implementing slam on differential drive robot Hi, |
2014-01-28 17:28:40 -0500 | marked best answer | kinect + ubuntu 12.04 + fuerte I recently installed Ubuntu 12.04 with Fuerte on it. I am using the openni_launch package, but it gives a lot of errors, and there is no topic named /camera/depth/points, so RViz gives an error when I subscribe to the PointCloud2. It also seems that the IR projector of the Kinect is not working. I tried this with another Kinect, but the same problem occurs. |