2021-02-18 22:45:54 -0500 | received badge | ● Famous Question (source) |
2020-07-31 05:06:59 -0500 | received badge | ● Notable Question (source) |
2020-06-25 11:47:13 -0500 | received badge | ● Taxonomist |
2020-06-17 16:53:03 -0500 | received badge | ● Famous Question (source) |
2020-05-29 17:55:28 -0500 | received badge | ● Famous Question (source) |
2020-05-22 06:01:57 -0500 | marked best answer | Adding a Kinect to my Gazebo simulation Hello All, I have a URDF model that is working in Gazebo, which I copied from the book Programming Robots with ROS. I would like to add a Kinect sensor to this model so I can simulate my robot, but I can't for the life of me figure out how to do this. Most resources I found are like this example, which says I should use the model found in the turtlebot_description folder, but I am unsure which files I need and how exactly to import them into my URDF. I found this tutorial, which edits the Kinect model found in the Gazebo models folder, and this worked perfectly, so I know it's possible. I think what I need is for someone to really spell out what I need to add to my URDF and my launch file to add a Kinect. Here is my URDF; I would like to replace the hokuyo_link with a Kinect: (more) |
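[For context, a minimal sketch of the kind of URDF addition being asked about, assuming a link named camera_link already exists on the robot and the classic ROS 1 Gazebo plugins; all link, topic, and parameter values here are illustrative, not taken from the question:]

```xml
<!-- Illustrative sketch: attach a Kinect-style depth camera to an existing
     camera_link. Names and values are assumptions, not from the post. -->
<gazebo reference="camera_link">
  <sensor type="depth" name="kinect">
    <update_rate>20.0</update_rate>
    <camera>
      <horizontal_fov>1.047</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>8.0</far>
      </clip>
    </camera>
    <plugin name="kinect_controller" filename="libgazebo_ros_openni_kinect.so">
      <cameraName>camera</cameraName>
      <frameName>camera_link</frameName>
      <pointCloudCutoff>0.4</pointCloudCutoff>
    </plugin>
  </sensor>
</gazebo>
```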
2020-04-18 04:11:51 -0500 | received badge | ● Famous Question (source) |
2020-04-08 05:37:41 -0500 | received badge | ● Popular Question (source) |
2020-04-07 01:24:19 -0500 | received badge | ● Notable Question (source) |
2019-12-10 03:50:26 -0500 | marked best answer | point cloud in wrong position relative to robot Hello fellow roboticists, I have been trying to get a Kinect sensor working on my URDF model in Gazebo. I have successfully attached the sensor thanks to my previous question here. My only issue is that the data is being shown in the wrong place in RViz: the point cloud that should be displayed in front of the robot is being displayed above it and rotated about the z axis. How should I go about fixing this? Here is an image of what is happening (left is Gazebo and right is RViz): and here is the URDF model I am using: (more) |
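[This symptom usually comes from frame conventions: per REP 103, camera optical frames are z-forward/x-right, while body frames are x-forward/z-up. A hedged sketch of the usual fix, with illustrative names: add an extra "optical" frame rotated into the camera convention and point the Gazebo plugin's frame at it.]

```xml
<!-- Sketch: add an optical frame (z forward, x right) and publish the
     point cloud in that frame. Link/joint names are illustrative. -->
<joint name="camera_optical_joint" type="fixed">
  <parent link="camera_link"/>
  <child link="camera_link_optical"/>
  <!-- roll -90 deg, yaw -90 deg maps body convention to optical convention -->
  <origin xyz="0 0 0" rpy="-1.5708 0 -1.5708"/>
</joint>
<link name="camera_link_optical"/>
<!-- then, inside the gazebo sensor plugin:
     <frameName>camera_link_optical</frameName> -->
```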
2019-11-14 04:50:46 -0500 | received badge | ● Famous Question (source) |
2019-11-14 04:50:20 -0500 | received badge | ● Notable Question (source) |
2019-11-13 13:33:36 -0500 | edited question | laser scan not being used for costmap laser scan not being used for costmap Hi All, I have a robot with a kinect sensor and a traditional lidar. The lidar is |
2019-11-13 13:22:08 -0500 | commented answer | laser scan not being used for costmap Thank you so much for your reply! I implemented your suggestions but I'm still not seeing the laser scan in my local cost |
2019-11-11 10:12:41 -0500 | received badge | ● Popular Question (source) |
2019-11-08 17:53:48 -0500 | received badge | ● Notable Question (source) |
2019-11-08 17:53:48 -0500 | received badge | ● Famous Question (source) |
2019-11-08 10:31:37 -0500 | received badge | ● Popular Question (source) |
2019-11-08 01:13:58 -0500 | edited question | laser scan not being used for costmap laser scan not being used for costmap Hi All, I have a robot with a kinect sensor and a traditional lidar. The lidar is |
2019-11-08 01:13:38 -0500 | asked a question | laser scan not being used for costmap laser scan not being used for costmap Hi All, I have a robot with a kinect sensor and a traditional lidar. The lidar is |
2019-11-02 08:10:46 -0500 | received badge | ● Notable Question (source) |
2019-10-28 12:06:15 -0500 | marked best answer | Getting Transform Exception after implementing voxel layer Hello friends, I recently made the switch from a 2D costmap to a 2.5D voxel-layer costmap based on suggestions from one of my previous posts. I decided to go with the spatio_temporal_voxel_layer plugin because it promised improvements over the standard voxel_grid plugin and it seemed to have a lot of documentation. After implementing it and some troubleshooting, I was able to get everything working, and I can see the voxel grid in RViz. The issue I am having is that I am continually getting the following errors: I think this is affecting the function of the rest of the robot, because when I place a navigation waypoint it takes a minute or two to do anything, if at all. Results from rqt_tf_tree: costmap_common_params: global_params: (more) |
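[As a hedged sketch of the kind of configuration involved (parameter names drawn from typical costmap_2d setups; values and layer names are illustrative, not the poster's actual files) — the voxel layer is declared in the costmap's plugins list, and raising transform_tolerance is a common first step when transform exceptions appear under CPU load:]

```yaml
# Illustrative costmap fragment, not the poster's actual config.
local_costmap:
  transform_tolerance: 0.5   # raising this can quiet transient TF lookup errors
  plugins:
    - {name: obstacle_layer, type: "spatio_temporal_voxel_layer/SpatioTemporalVoxelLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
```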
2019-10-24 20:46:22 -0500 | commented answer | Getting Transform Exception after implementing voxel layer I'm afraid you might be right about the CPU usage (see the image added to the post). I had a sneaking suspicion considerin |
2019-10-24 20:40:25 -0500 | edited question | Getting Transform Exception after implementing voxel layer Getting Transform Exception after implementing voxel layer Hello friends, I recently made the switch from a 2d costmap |
2019-10-23 22:13:24 -0500 | commented answer | local costmap remember past obstacles Thank you! I ended up getting the spatio_temporal_voxel_layer to work but I'm getting a TF Exception error now as a result |
2019-10-23 22:07:13 -0500 | marked best answer | local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4 feet off the ground and a Kinect sensor at about 1.5 feet off the ground. The two sensors are directly above each other, with the Kinect pointing forward. This was done in an attempt to give the robot a full 360-degree view of the room while also giving it a view of lower objects like table legs. I am using depthimage_to_laserscan to get a laser scan out of the Kinect, then I'm using laser_merge to combine the scan from the lidar and the scan from the Kinect into a single laser scan. What results is a laser scan of the room that shows data from whichever sensor sees a closer object. The Problem: Because the Kinect has a small field of view, a couple of degrees of rotation from the robot is enough for an object to go out of view. This results in the robot adding objects to the local costmap as it approaches, but then clearing them as it rotates to go around. Often it will then cut the corner really hard because it thinks there's nothing there, and it hits the object; at that point it is often too close for the Kinect to detect anything. Is there a way to have some sort of buffer where the local costmap remembers for a short period that something was there, even after it leaves view? And if not, do you all have any suggestions for how I can prevent my robot from hitting things? Thanks! EDIT: Here are my config files: common params, local params, global params, base local planner params, new common params after raytrace_range was added |
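[One commonly cited knob for exactly this "short memory" behavior is the obstacle layer's observation_persistence parameter. A hedged sketch, assuming the standard costmap_2d obstacle layer and an already-merged scan topic; the topic and source names are illustrative:]

```yaml
# Illustrative fragment: retain marked obstacles briefly instead of
# clearing them the instant they leave the sensor's field of view.
obstacle_layer:
  observation_sources: merged_scan
  merged_scan:
    topic: /scan_merged          # assumed topic name
    data_type: LaserScan
    marking: true
    clearing: true
    observation_persistence: 2.0 # seconds to keep each observation buffered
```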
2019-10-23 22:05:58 -0500 | asked a question | Getting Transform Exception after implementing voxel layer Getting Transform Exception after implementing voxel layer Hello friends, I recently made the switch from a 2d costmap |
2019-10-22 07:16:59 -0500 | received badge | ● Famous Question (source) |
2019-10-21 23:17:38 -0500 | commented answer | local costmap remember past obstacles I think I might need a bit more guidance. I tried including the VoxelCostmapPlugin - {name: voxel_layer, type: "costmap_ |
2019-10-20 20:27:43 -0500 | received badge | ● Famous Question (source) |
2019-10-16 00:14:47 -0500 | edited question | local costmap remember past obstacles local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4 |
2019-10-16 00:14:27 -0500 | edited question | local costmap remember past obstacles local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4 |
2019-10-16 00:12:22 -0500 | commented question | local costmap remember past obstacles I have played a bit with raytrace_range and obstacle_range. I tried setting raytrace_range to a lower value so that it w |
2019-10-15 22:19:08 -0500 | edited question | local costmap remember past obstacles local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4 |
2019-10-15 11:35:18 -0500 | received badge | ● Notable Question (source) |
2019-10-15 09:28:26 -0500 | received badge | ● Popular Question (source) |
2019-10-14 21:50:16 -0500 | commented question | local costmap remember past obstacles Yes, because the laser (which can see over many tables and chairs) can't see the obstacles, it gets cleared from the local |
2019-10-14 21:47:39 -0500 | edited question | local costmap remember past obstacles local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4 |
2019-10-14 18:55:19 -0500 | commented question | Get local costmap to remember out of sight objects My apologies, I'm not sure how that happened |
2019-10-14 16:39:40 -0500 | asked a question | local costmap remember past obstacles local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4 |
2019-10-14 16:35:32 -0500 | asked a question | Get local costmap to remember out of sight objects Get local costmap to remember out of sight objects A bit of background about my setup: I have a robot with a lidar senso |
2019-09-30 22:31:48 -0500 | commented question | Use ira_laser_tools to fuse the rplidar A2 and Kinect laser scan,but the output of the merger is only kinect's data. I'm having the exact same issue. Did you ever find a solution? |