2018-09-17 17:08:40 -0500 | received badge | ● Famous Question (source) |
2016-06-15 11:01:39 -0500 | received badge | ● Famous Question (source) |
2016-05-08 16:04:45 -0500 | marked best answer | Odometry with RGBDSLAM for mapping white walls Hi guys! I'm trying to map a room that has some white (textureless) wall areas with a turtlebot + Kinect, and the problem is that the low number of keypoints on those walls makes the robot lose its reference every time it moves a bit. I saw this question related to my problem, but couldn't see any difference with ICP activated when mapping the wall. I was wondering whether it's possible to "merge" two packages like RGBDSLAM and robot_pose_ekf to build the map using the odometry information. If the robot's position information guided the point-cloud matching in RGBDSLAM, I guess it could map the big white wall correctly. Since I'm not an expert on ROS packages and subscribed topics, I'd like some advice from anyone who knows about or has tried something like this before. Is there another method I could use? EDIT: The loop problem was caused by wrong parameters in robot_odometry.launch, specifically odom_frame_name and base_frame_name. I'm now able to create a map using the odometry information, but there's still a problem: the floor contains many "holes". I'd like to know how I can capture the whole floor without raising the max_depth parameter (raising it would cause problems, because with max_depth above 3.8 meters the Kinect starts capturing the ceiling). Is that related to the ground_truth algorithm? How does it work in RGBDSLAM? This is my robot_odometry.launch file: |
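The parameter fix mentioned in the EDIT can be sketched as a launch fragment. This is only an illustration, assuming the config/-prefixed parameter names RGBDSLAM uses and the usual turtlebot frame names (/odom, /base_link); the actual values should be checked against the robot's TF tree (e.g. with `rosrun tf view_frames`):

```xml
<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" output="screen">
    <!-- Frame names must match the TF tree the robot publishes.
         The values below are the typical turtlebot ones, shown as an
         assumption; verify them against your own setup. -->
    <param name="config/odom_frame_name" value="/odom" />
    <param name="config/base_frame_name" value="/base_link" />
  </node>
</launch>
```

If these names don't match the frames actually broadcast on TF, RGBDSLAM cannot use the odometry prior, which produces exactly the lost-reference behavior described above.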
2016-04-06 20:21:49 -0500 | received badge | ● Notable Question (source) |
2016-04-06 20:21:49 -0500 | received badge | ● Popular Question (source) |
2015-09-28 07:26:24 -0500 | received badge | ● Nice Question (source) |
2015-07-29 16:40:14 -0500 | received badge | ● Famous Question (source) |
2015-07-17 11:47:24 -0500 | commented answer | Generation online 3D map from a RGB Camera Unfortunately the probabilistic semi-dense mapping module isn't available for ORB-SLAM yet, so I don't know a way to create the 3D map with this package; I'm testing LSD-SLAM instead. |
2015-07-16 15:46:56 -0500 | received badge | ● Notable Question (source) |
2015-07-13 08:38:39 -0500 | received badge | ● Notable Question (source) |
2015-07-11 03:02:47 -0500 | received badge | ● Popular Question (source) |
2015-07-10 11:00:40 -0500 | commented question | Generation online 3D map from a RGB Camera Thanks for your response! Is this the best choice, given that I need a 3D map to navigate through? I'm looking for more options to consider. (P.S.: This package works with ROS Indigo and Fuerte; I'm using Hydro. Is that a problem?) |
2015-07-10 09:10:10 -0500 | asked a question | Generation online 3D map from a RGB Camera Hi! I need to create a 3D map with an ArDrone 2.0 in real time (while the drone is still acquiring data), but its camera doesn't have a depth sensor; otherwise RGBDSLAM would be a good choice, I guess. I've read some questions about this but didn't find a way to do it online. I found this video about monocular SLAM, but it creates a depth map only. Is there a package I could use to help me with that? Any thoughts? Thanks in advance!! |
2015-06-07 09:23:10 -0500 | received badge | ● Famous Question (source) |
2015-05-25 19:51:36 -0500 | received badge | ● Popular Question (source) |
2015-05-22 17:09:15 -0500 | asked a question | Multiple launches in terminal Hi! I'm running experiments in my project using a turtlebot and Kinect, and I'd like to speed things up by launching all the .launch files at once for each experiment. Is that possible? For example, I need to run several launch files for each run. I know that bashrc is sourced for each terminal, so running these in multiple terminals could be a problem (I guess). Any thoughts? Thanks in advance!! |
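The usual way to start several launch files with a single command is a top-level launch file that includes the others; roslaunch then runs everything in one process, so one sourced environment covers all nodes. A minimal sketch, assuming hypothetical package and file names (only minimal.launch and openni.launch are taken from the questions above; replace the rest with whatever the experiment actually needs):

```xml
<launch>
  <!-- Hypothetical top-level launch file for one experiment. -->
  <include file="$(find turtlebot_bringup)/launch/minimal.launch" />
  <include file="$(find openni_launch)/launch/openni.launch" />
  <!-- <include file="$(find my_experiment)/launch/experiment.launch" /> -->
</launch>
```

Saved as, say, all.launch in your own package, this starts with a single `roslaunch my_experiment all.launch`, avoiding the multiple-terminal problem entirely.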
2015-05-15 07:04:58 -0500 | received badge | ● Famous Question (source) |
2015-05-13 13:04:48 -0500 | received badge | ● Notable Question (source) |
2015-04-27 16:57:18 -0500 | received badge | ● Notable Question (source) |
2015-04-24 06:41:38 -0500 | received badge | ● Famous Question (source) |
2015-04-23 14:48:31 -0500 | asked a question | Turtlebot Minimal.launch error I have no idea what I've done to my turtlebot package, and somehow this error still happens after re-installing it. Does anyone know how to solve it?? Thanks so much! |
2015-04-22 12:28:35 -0500 | answered a question | Problem with amcl_demo.launch and depth data from camera/depth/image I guess I found the solution: using /camera/depth_registered/image_raw and setting depth_registration to true in amcl_demo.launch, my callback function converts the image to a 32-bit float type as follows: The difference between this topic and /camera/depth/image is that this one gives the distance in mm, while the other provides the distance in m. |
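The mm-to-m conversion described in the answer can be sketched without the ROS glue. This assumes the raw topic delivers a 16-bit unsigned image in millimetres (16UC1, as /camera/depth_registered/image_raw typically does); the function name depth_mm_to_m is my own illustration, and the cv_bridge part is shown only as comments so the sketch stays self-contained:

```python
import numpy as np

def depth_mm_to_m(depth_raw):
    """Convert a 16UC1 depth image (millimetres) to 32FC1 (metres).

    Zeros in the raw image mean "no reading"; map them to NaN so
    distance calculations don't silently treat them as 0 m.
    """
    depth_m = depth_raw.astype(np.float32) / 1000.0
    depth_m[depth_raw == 0] = np.nan
    return depth_m

# In the actual ROS callback the array would come from cv_bridge, e.g.:
#   img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
#   depth_m = depth_mm_to_m(img)

if __name__ == "__main__":
    raw = np.array([[0, 500], [1000, 3800]], dtype=np.uint16)
    print(depth_mm_to_m(raw)[1, 0])  # 1.0 (metres)
```

With this in the callback, the rest of the distance-calculation code written against /camera/depth/image (metres) should work unchanged.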
2015-04-22 09:10:48 -0500 | received badge | ● Associate Editor (source) |
2015-04-22 08:51:45 -0500 | asked a question | Problem with amcl_demo.launch and depth data from camera/depth/image Hi! I'm having some trouble using amcl_demo.launch, because I need to get the depth information from camera/depth/image, which openni.launch initializes (and which is started by amcl_demo.launch). But with amcl_demo.launch the only topic I can access to get this depth data is camera/depth/image_raw, and apparently nothing is published on it (I checked using image_view). I looked at this question and followed the instructions by changing those parameters in amcl_demo.launch, but I still can't make it work. Using openni.launch, it's possible to get the depth information perfectly from the topic /camera/depth/image. To get the depth data from another topic (like /camera/depth_registered/image_raw) the same way I did from /camera/depth/image, do I need to convert the image before using it? My code calculates the distance to an object, and with the /camera/depth/image launched by openni.launch it's possible to calculate it as follows (not exactly this, I just copied the important parts): I'd like to know how I can calculate the distance from the topic /camera/depth/image_raw, or another one that provides the depth information (changing this code as little as possible). EDIT: With depth_registration activated in amcl_demo.launch, the topics /camera/depth_registered/image_raw and /camera/depth_registered/image don't publish data; the only one publishing is /camera/depth/image_raw, but its type doesn't match the one I'm using. I've tried to convert it using this: Thanks in advance! |
2015-04-10 09:38:45 -0500 | received badge | ● Popular Question (source) |
2015-04-09 11:53:59 -0500 | edited question | Adjusting Turtlebot Speed in Autonomous Navigation Hello! I'm trying to change the turtlebot's linear speed during autonomous navigation according to a variable parameter, without changing the route traced through RViz, but I'm facing a problem. For the autonomous navigation I'm using the AMCL app as follows: here. My turtlebot's speed should change when my external parameter changes, but the amcl_demo navigation is already publishing on cmd_vel, and because of that it's not possible to publish the same 'geometry_msgs::Twist base_cmd' type to the teleop node or whichever other node is responsible for the robot's velocity (I'd like to know the right one, too). Is it possible to change the max speed limit parameter from base_local_planner_params.yaml while my 'listener' is receiving data (in real time)? I'm not sure, but I think it is loaded only once before the program starts =( AMCL needs the velocity info to calculate the robot's position in the 2D map, so publishing another speed over AMCL, besides not working, would make the robot get lost in the map. Is that right? I was thinking about publishing the robot speed from my 'listener' to the AMCL code, which (after the position calculation and so on) would send the correct speed to the turtlebot. EDIT: Since I only need to change the linear speed using a Twist, could the way I'm looking for be to subscribe my node (publishing the speed through the geometry_msgs/Twist type) instead of /navigation_velocity_smoother/raw_cmd_vel in the /move_base node? The velocity_smoother.launch.xml : <node pkg="nodelet" type="nodelet" name="navigation_velocity_smoother" args="load yocs_velocity_smoother/VelocitySmootherNodelet /mobile_base_nodelet_manager"> </node> </launch> How can I do that? Is there another way? Thanks in advance! |
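The remapping idea in the EDIT above can be sketched as a small relay node: remap the planner's output to an intermediate topic and rescale it before it reaches the base, leaving AMCL and move_base untouched. Below is a minimal sketch of just the clamping logic; the function limit_linear and the topic names are my own illustration, not part of any ROS package, and the rospy glue is left as comments so the sketch stays runnable on its own. (Depending on the ROS release, base_local_planner may also expose max_vel_x via dynamic_reconfigure, which would avoid republishing entirely; worth checking with `rosrun rqt_reconfigure rqt_reconfigure`.)

```python
def limit_linear(linear_x, angular_z, max_linear):
    """Clamp the linear speed to max_linear, scaling the angular
    velocity by the same factor so the commanded arc (curvature)
    is preserved and the robot stays on the planned route."""
    if linear_x == 0.0 or abs(linear_x) <= max_linear:
        return linear_x, angular_z
    scale = max_linear / abs(linear_x)
    return linear_x * scale, angular_z * scale

# Hypothetical ROS glue (shown as comments):
#   rospy.Subscriber("raw_cmd_vel", Twist, cb)   # remapped planner output
#   pub = rospy.Publisher("cmd_vel", Twist)
#   def cb(msg):
#       max_lin = rospy.get_param("~max_linear", 0.3)  # the external parameter
#       msg.linear.x, msg.angular.z = limit_linear(
#           msg.linear.x, msg.angular.z, max_lin)
#       pub.publish(msg)

if __name__ == "__main__":
    print(limit_linear(0.6, 0.4, 0.3))  # (0.3, 0.2)
```

Scaling both components together matters: clamping only linear.x would tighten every turn the planner commanded, which is what makes the robot drift off the traced route.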