
Phelipe's profile - activity

2018-09-17 17:08:40 -0500 received badge  Famous Question (source)
2016-06-15 11:01:39 -0500 received badge  Famous Question (source)
2016-05-08 16:04:45 -0500 marked best answer Odometry with RGBDSLAM for mapping white walls

Hi guys!

I'm trying to map a room that has some white (textureless) wall areas with a TurtleBot + Kinect, and the problem is that the low number of keypoints on those walls makes the robot lose its reference every time it moves a bit.

I saw this question related to my problem, but I couldn't see any difference with ICP activated when mapping the wall. I was wondering whether it's possible to "merge" two packages like RGBDSLAM and robot_pose_ekf to build the map using the odometry information.

If the robot's position information guided the matching of point clouds in RGBDSLAM, I guess it could map the big white wall perfectly.

Since I'm not an expert on ROS packages and topic subscriptions, I'd like some advice from anyone who knows or has tried something like this before.

Is there another method I could use to do this?

EDIT:

The loop problem was caused by wrong parameters in robot_odometry.launch, specifically odom_frame_name and base_frame_name.

I'm now able to create a map using the odometry information, but there's still a problem: the floor contains many "holes".

(image: the generated map, showing holes in the floor)

I'd like to know how I can capture the whole floor without raising the max_depth parameter (raising it would cause me some problems, because with max_depth above 3.8 meters the Kinect captures the ceiling). Is that related to the ground_truth algorithm? How does it work in RGBDSLAM?

This is my robot_odometry.launch file:

  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
<!-- Input data settings-->
<param name="config/topic_image_mono"              value="/camera/rgb/image_mono"/> <!--could also be color -->
<param name="config/topic_image_depth"             value="/camera/depth_registered/sw_registered/image_rect_raw"/>


<param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="10"/><!-- detect on a 10x10 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
<param name="config/max_keypoints"                 value="600"/><!-- Extract no more than this many keypoints -->
<param name="config/max_matches"                   value="300"/><!-- Keep the best n matches (important for ORB to set lower than max_keypoints) -->

<param name="config/min_sampled_candidates"        value="4"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="4"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="4"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="100"/>
<param name="config/cloud_creation_skip_step"      value="5"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->

<param name="config/cloud_display_type"            value="POINTS"/><!-- Show pointclouds as points (as opposed to TRIANGLE_STRIP) -->
<param name="config/pose_relative_to"              value="largest_loop"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
<param name="config/backend_solver"                value="pcg"/><!-- pcg is faster and good for continuous ...
(launch file truncated by the forum)
2016-04-06 20:21:49 -0500 received badge  Notable Question (source)
2016-04-06 20:21:49 -0500 received badge  Popular Question (source)
2015-09-28 07:26:24 -0500 received badge  Nice Question (source)
2015-07-29 16:40:14 -0500 received badge  Famous Question (source)
2015-07-17 11:47:24 -0500 commented answer Generation online 3D map from a RGB Camera

Unfortunately the probabilistic semi-dense mapping module isn't available for ORB-SLAM yet, so I don't know of a way to create the 3D map with that package; I'm testing LSD-SLAM instead.

2015-07-16 15:46:56 -0500 received badge  Notable Question (source)
2015-07-13 08:38:39 -0500 received badge  Notable Question (source)
2015-07-11 03:02:47 -0500 received badge  Popular Question (source)
2015-07-10 11:00:40 -0500 commented question Generation online 3D map from a RGB Camera

Thanks for your response! Is this the best choice, given that I need a 3D map to navigate through? I'm looking for more options to consider. (P.S.: this package supports ROS Indigo and Fuerte, and I'm using Hydro. Is that a problem?)

2015-07-10 09:10:10 -0500 asked a question Generation online 3D map from a RGB Camera

Hi!

I need to create a 3D map with an AR.Drone 2.0 in real time (while the drone is still acquiring data), but its camera doesn't have a depth sensor; otherwise RGBDSLAM would be a good choice, I guess. I've read some questions about this but didn't find a way to do it online. I found this video about monocular SLAM, but it creates only a depth map.

Is there a package I could use to help me with that?

Any thoughts?

Thanks in advance!!

2015-06-07 09:23:10 -0500 received badge  Famous Question (source)
2015-05-25 19:51:36 -0500 received badge  Popular Question (source)
2015-05-22 17:09:15 -0500 asked a question Multiple launches in terminal

Hi! I'm running experiments in my project using a TurtleBot and Kinect, but I'd like to speed things up by launching all the .launch files at once for each experiment. Is that possible?

For example, I need to run roslaunch openni_launch openni.launch, roslaunch turtlebot_bringup minimal.launch, roslaunch turtlebot_teleop keyboard_teleop.launch, etc.

I know that .bashrc is sourced for each terminal, so running these in multiple terminals could be a problem (I guess). Any thoughts?
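For what it's worth, the kind of combined launch file being asked about could be sketched with <include> tags in one top-level file (a sketch, not a tested configuration; package and file names are taken from the question):

```xml
<!-- experiment.launch: start everything with a single `roslaunch` call -->
<launch>
  <include file="$(find openni_launch)/launch/openni.launch"/>
  <include file="$(find turtlebot_bringup)/launch/minimal.launch"/>
  <include file="$(find turtlebot_teleop)/launch/keyboard_teleop.launch"/>
</launch>
```

One caveat: keyboard_teleop reads key presses from the terminal it runs in, so it may still be more convenient to keep it in its own terminal.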

Thanks in advance!!

2015-05-15 07:04:58 -0500 received badge  Famous Question (source)
2015-05-13 13:04:48 -0500 received badge  Notable Question (source)
2015-04-27 16:57:18 -0500 received badge  Notable Question (source)
2015-04-24 06:41:38 -0500 received badge  Famous Question (source)
2015-04-23 14:48:31 -0500 asked a question Turtlebot Minimal.launch error

I have no idea what I've done to my turtlebot package, and somehow this still happens after reinstalling it. Does anyone know how to solve this? Thanks so much!

[turtlebot_node-4] restarting process
process[turtlebot_node-4]: started with pid [3780]
Traceback (most recent call last):
  File "/opt/ros/hydro/lib/create_node/turtlebot_node.py", line 58, in <module>
    import tf
  File "/opt/ros/hydro/lib/python2.7/dist-packages/tf/__init__.py", line 29, in <module>
    from listener import TransformListener, TransformerROS
  File "/opt/ros/hydro/lib/python2.7/dist-packages/tf/listener.py", line 37, in <module>
    from tf.msg import tfMessage
  File "/opt/ros/hydro/lib/python2.7/dist-packages/tf/msg/__init__.py", line 1, in <module>
    from ._tfMessage import *
  File "/opt/ros/hydro/lib/python2.7/dist-packages/tf/msg/_tfMessage.py", line 7, in <module>
    import geometry_msgs.msg
ImportError: No module named msg
[turtlebot_node-4] process has died [pid 3780, exit code 1, cmd /opt/ros/hydro/lib/create_node/turtlebot_node.py --respawnable cmd_vel:=mobile_base/commands/velocity turtlebot_node/sensor_state:=mobile_base/sensors/core imu/data:=mobile_base/sensors/imu_data imu/raw:=mobile_base/sensors/imu_data_raw __name:=turtlebot_node __log:=/home/phelipe/.ros/log/02d88ca4-e9f1-11e4-91d7-b4b67648d681/turtlebot_node-4.log].
log file: /home/phelipe/.ros/log/02d88ca4-e9f1-11e4-91d7-b4b67648d681/turtlebot_node-4*.log
respawning...
2015-04-22 12:28:35 -0500 answered a question Problem with amcl_demo.launch and depth data from camera/depth/image

I guess I found the solution:

Using /camera/depth_registered/image_raw and setting depth_registration to true in amcl_demo.launch, my callback function converts the image to a 32-bit float type as follows: Dest = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::TYPE_32FC1);

The difference between this topic and /camera/depth/image is that this one gives the distance in mm, while the other provides it in m.
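As a hedged illustration of that unit difference (a NumPy sketch, not the author's code; the helper name and the ROI layout are my own assumptions), averaging a 16UC1 depth ROI and converting millimeters to meters could look like:

```python
import numpy as np

def mean_depth_m(depth_mm, rect):
    """Mean depth in meters over a rectangular ROI of a 16UC1 depth image.

    depth_mm: 2-D uint16 array in millimeters (0 = no reading).
    rect: (x, y, width, height), matching cv::Rect's layout.
    """
    x, y, w, h = rect
    patch = depth_mm[y:y + h, x:x + w].astype(np.float32)
    valid = patch[patch > 0]             # the Kinect reports 0 where depth is unknown
    if valid.size == 0:
        return float("nan")
    return float(valid.mean()) / 1000.0  # millimeters -> meters
```

Masking out the zero pixels matters: averaging them in would drag the mean toward the sensor.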

2015-04-22 09:10:48 -0500 received badge  Associate Editor (source)
2015-04-22 08:51:45 -0500 asked a question Problem with amcl_demo.launch and depth data from camera/depth/image

Hi! I'm having some trouble using amcl_demo.launch, because I need to get the depth information from /camera/depth/image, which openni.launch provides (and openni.launch is started by amcl_demo.launch). But when using amcl_demo.launch, the only topic I can access for this depth data is /camera/depth/image_raw, and apparently nothing is published on it (I checked using image_view).

I've checked this question and followed the instructions by changing those parameters in amcl_demo.launch, but I still can't make this work.

Using openni.launch alone, it's possible to get the depth information perfectly from the topic /camera/depth/image. To get the same depth data from another topic, like /camera/depth_registered/image_raw, do I need to convert the image before using it?

My code calculates the distance to an object, and with the /camera/depth/image provided by openni.launch it can be calculated as follows (not exactly this; I just copied the important parts):

cv_bridge::CvImagePtr Dest;
Dest = cv_bridge::toCvCopy(img);           // /camera/depth/image is 32FC1, in meters
cv::Mat ROI = Dest->image(cv::Rect(rect)); // crop the region of interest
cv::Scalar average = cv::mean(ROI);        // mean depth over the ROI

I'd like to know how I can calculate the distance from the topic /camera/depth/image_raw, or another one that provides the depth information (changing this code as little as possible).

EDIT: With depth_registration activated in amcl_demo.launch, the topics /camera/depth_registered/image_raw and /camera/depth_registered/image don't publish data; the only one publishing is /camera/depth/image_raw, but its type doesn't match the one I'm using. I've tried to convert it using Dest = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::TYPE_16UC1); but I got a distance of 0 meters. What am I doing wrong?

Thanks in advance!

2015-04-10 09:38:45 -0500 received badge  Popular Question (source)
2015-04-09 11:53:59 -0500 edited question Adjusting Turtlebot Speed in Autonomous Navigation

Hello!

I'm trying to change the TurtleBot's linear speed during autonomous navigation according to a variable parameter, without changing the route traced through RViz, but I'm facing a problem. For the autonomous navigation I'm using the AMCL demo as described here.

My TurtleBot's speed should change when my external parameter changes, but the amcl_demo navigation is already publishing on cmd_vel, and because of that it's not possible to publish the same geometry_msgs::Twist message to the teleop node or whichever other node is responsible for the robot's velocity. (I'd like to know which is the right one, too.)

Is it possible to change the max speed limit parameter from base_local_planner_params.yaml while my 'listener' is receiving data (in real time)? I'm not sure, but I think it's loaded only once, before the program starts. =(

AMCL needs the velocity info to calculate the robot's position on the 2D map, so publishing another speed on top of AMCL's, besides not working, would make the robot get lost on the map. Is that right?

I was thinking about publishing the robot speed from my 'listener' to the AMCL code, which (after the position calculation and so on) would send the correct speed to the TurtleBot.

EDIT: Since I only need to change the linear speed via the Twist message, could the approach I'm looking for be to subscribe /move_base to my node (which publishes the speed as a geometry_msgs/Twist) instead of /navigation_velocity_smoother/raw_cmd_vel?
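That relay idea could be sketched as a node that reads the planner's raw command, clamps linear.x to the current limit, and republishes it. The helper name below is mine, and the topic names in the comments are assumptions based on the launch file; only the clamp itself is shown as runnable code:

```python
# A relay node would wire this up roughly as:
#   rospy.Subscriber("/navigation_velocity_smoother/raw_cmd_vel", Twist, callback)
#   pub = rospy.Publisher("/cmd_vel_mux/input/navi", Twist, queue_size=1)
# where the callback replaces twist.linear.x before republishing.

def clamp_linear_speed(vx, max_vel_x):
    """Limit the planner's commanded forward speed, preserving its sign.

    vx: linear.x from the incoming geometry_msgs/Twist.
    max_vel_x: the current, externally set speed limit (assumed positive).
    """
    return max(-max_vel_x, min(vx, max_vel_x))
```

Because only the magnitude of linear.x changes and angular.z passes through untouched, the planned route itself shouldn't be affected, only how fast it is followed.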

The velocity_smoother.launch.xml :

<node pkg="nodelet" type="nodelet" name="navigation_velocity_smoother" args="load yocs_velocity_smoother/VelocitySmootherNodelet /mobile_base_nodelet_manager">
  <rosparam file="$(find turtlebot_bringup)/param/defaults/smoother.yaml" command="load"/>
  <remap from="navigation_velocity_smoother/smooth_cmd_vel" to="/cmd_vel_mux/input/navi"/>

  <!-- Robot velocity feedbacks; use the default base configuration -->
  <remap from="navigation_velocity_smoother/odometry" to="/odom"/>
  <remap from="navigation_velocity_smoother/robot_cmd_vel" to="/mobile_base/commands/velocity"/>
</node>
</launch>

How can I do that? Is there another way?

Thanks in Advance!