
simff's profile - activity

2022-08-02 06:22:53 -0500 received badge  Favorite Question (source)
2021-08-18 03:07:55 -0500 received badge  Famous Question (source)
2019-08-23 07:17:23 -0500 received badge  Notable Question (source)
2019-08-23 07:17:23 -0500 received badge  Popular Question (source)
2019-05-20 01:35:46 -0500 marked best answer How to invert coordinate frames of a Transform

I have a fixed transform coming from the ar_track_alvar pkg that gives me the marker's pose in the camera_link frame. I want to take its inverse and save the same data, just reversed. The problem is that, since the marker frame is flipped, the inverse gives me the wrong result.

How can I make the marker's x and y coordinates consistent with the camera_link ones? What is the easiest way to achieve this? I want x pointing forward (red) and y pointing to the left (green).
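
For illustration, here is the kind of thing I have in mind (an untested sketch; the frame names camera_link and ar_marker_0 and the 90-degree correction are placeholders, not values from my actual setup): look up the transform, invert it as a 4x4 matrix, then apply a fixed rotation to realign the axes.

#!/usr/bin/env python
# Untested sketch (ROS Kinetic, Python 2): invert a TF transform and
# re-orient the marker axes. Frame names and the correction angle are
# placeholders only.
import math
import rospy
import tf
from tf import transformations as tfs

rospy.init_node('invert_marker_tf')
listener = tf.TransformListener()
listener.waitForTransform('camera_link', 'ar_marker_0',
                          rospy.Time(0), rospy.Duration(4.0))

# camera_link -> ar_marker_0 (pose of the marker in the camera frame)
trans, rot = listener.lookupTransform('camera_link', 'ar_marker_0', rospy.Time(0))

# Build a 4x4 matrix and invert it to obtain marker -> camera.
T_cam_marker = tfs.concatenate_matrices(tfs.translation_matrix(trans),
                                        tfs.quaternion_matrix(rot))
T_marker_cam = tfs.inverse_matrix(T_cam_marker)

# Re-orient the result so that x points forward and y to the left, e.g. by
# post-multiplying with a fixed rotation (90 deg about x is only an example;
# the real correction depends on how the marker frame comes out).
R_fix = tfs.euler_matrix(math.pi / 2.0, 0.0, 0.0)
T_fixed = tfs.concatenate_matrices(T_marker_cam, R_fix)

print(tfs.translation_from_matrix(T_fixed))
print(tfs.quaternion_from_matrix(T_fixed))

If I understand tf correctly, simply swapping the two frame names in lookupTransform already returns the inverted transform, so the explicit matrix inversion is mainly there to show where the extra axis correction would fit in.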

Thanks to anyone helping me out!

Cheers, Simone.

2019-04-08 01:21:07 -0500 marked best answer robot_localization: unable to use odometry/filtered as input

Hi,

I'm using ROS Kinetic and a Clearpath Husky robot. I already have an ekf_localization_node running on my robot that provides the base_link -> odom transform and publishes an odometry/filtered topic. Now, on top of it, I want to use robot_localization to fuse global absolute data from markers.

I want to use the odometry data in my second ekf_global, so I want this node to subscribe to the odometry/filtered topic. Here is my launch file:

<?xml version="1.0"?>
<launch>
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se" clear_params="true">
    <rosparam command="load" file="$(find robot_localization)/params/my_ekf.yaml" />
    <remap from="odometry/filtered" to="finalpose"/>
  </node>
</launch>

The param file my_ekf.yaml is below:

frequency: 30
sensor_timeout: 1
two_d_mode: true
debug: false
publish_tf: true
publish_acceleration: false
map_frame: map              # Defaults to "map" if unspecified
odom_frame: odom            # Defaults to "odom" if unspecified
base_link_frame: base_link  # Defaults to "base_link" if unspecified
world_frame: map            # Defaults to the value of odom_frame if unspecified

dynamic_process_noise_covariance: true

odom0: od
odom0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom0_differential: false
odom0_relative: false
odom0_queue_size: 40
odom0_nodelay: false

process_noise_covariance: [0.0001, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0.0001, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0.0001, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0.0001, 0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0.0001, 0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0.0001, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0.0001, 0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0.0001, 0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0.0001, 0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0.0001, 0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.0001, 0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.0001, 0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.0001, 0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.0001, 0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.0001]


initial_estimate_covariance: [1e-2, 0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                              0,    1e-2, 0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                              0,    0,    1e-2, 0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                              0,    0,    0,    1e-2, 0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                              0,    0,    0,    0,    1e-2, 0,    0,    0,    0,    0,     0 ...
(more)
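
To make the question concrete, what I would like the second filter to do is roughly this (sketch only; the topic name is an assumption about where the first filter publishes):

odom0: /odometry/filtered        # output of the first, odom-frame ekf_localization_node
odom0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom0_queue_size: 40
odom0_nodelay: false

I am not sure how this interacts with the <remap from="odometry/filtered" to="finalpose"/> above, i.e. whether a subscription created from odom0 would be redirected to finalpose as well; that is part of what I am asking.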
2019-04-08 01:16:41 -0500 marked best answer ar_track_alvar fails to find IndividualMarkers with Kinect

Hello guys,

I cannot find a credible reason why my Kinect camera is not able to detect AR markers. I am using ar_track_alvar with ROS Kinetic, Gazebo 7 and a Husky robot. To test the package I placed a single marker (size 50 cm) in front of my robot, as seen in this Figure. The markers were generated by running the script and later inserted into the Gazebo simulator as models (SDF files). My launch file looks like this:

<launch>
<arg name="marker_size" default="5" />
<arg name="max_new_marker_error" default="0.05" />
<arg name="max_track_error" default="0.2" />

<arg name="cam_image_topic" default="/camera/depth/points" /> <!-- cam_image_topic requires the type sensor_msgs/Image , put /camera/rgb/image_raw or /camera/depth/image_raw-->
<arg name="cam_info_topic" default="/camera/depth/camera_info" /> <!--/kinect_head/rgb/camera_info put /camera/rgb/camera_info-->
<arg name="output_frame" default="/base_link" /> <!-- camera_link also tried-->

<node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkers" respawn="false" output="screen">
    <param name="marker_size"           type="double" value="$(arg marker_size)" />
    <param name="max_new_marker_error"  type="double" value="$(arg max_new_marker_error)" />
    <param name="max_track_error"       type="double" value="$(arg max_track_error)" />
    <param name="output_frame"          type="string" value="$(arg output_frame)" />

    <remap from="camera_image"  to="$(arg cam_image_topic)" />
    <remap from="camera_info"   to="$(arg cam_info_topic)" /> 
</node>

</launch>

Instead of depth/points and depth/camera_info, I also tried to use rgb/image_raw and rgb/camera_info in the corresponding image_topic and info_topic fields, with no result. The marker size corresponds to the exact size in the Gazebo model file, so I assume there is no error there. By echoing the Kinect topics I get non-empty messages, hence the plugin is working fine. A visual double check of this is visible in the Rviz Figure as well.

As seen, both the RGB and depth cameras can output the marker.

What I cannot get is the marker's pose, and I also cannot see the marker's frame in Rviz; this simply means that the tag is not detected at all. The /visualization_marker topic is in fact empty.
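
For completeness, a small listener along these lines (assuming ar_track_alvar_msgs is installed and the default /ar_pose_marker output topic of individualMarkers) could be used to double check that nothing is being published besides the empty /visualization_marker:

#!/usr/bin/env python
# Sketch: print the IDs and positions of any markers ar_track_alvar reports.
# Assumes the default /ar_pose_marker topic of the individualMarkers node.
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

def cb(msg):
    if not msg.markers:
        rospy.loginfo_throttle(5, "no markers detected")
        return
    for m in msg.markers:
        p = m.pose.pose.position
        rospy.loginfo("marker %d at (%.2f, %.2f, %.2f) in %s",
                      m.id, p.x, p.y, p.z, m.header.frame_id)

rospy.init_node('ar_marker_check')
rospy.Subscriber('/ar_pose_marker', AlvarMarkers, cb)
rospy.spin()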

The graph with the topics is visible here.

I am really stuck with this; any help would be really appreciated!

Regards, Simone.

2019-04-07 22:36:58 -0500 marked best answer Which units do the linear and angular speeds have in the teleop_twist_keyboard node?

Hi,

I was wondering about the units of the linear and angular speeds when running the teleop_twist_keyboard node in ROS; are they m/s and rad/s? I could not find this information anywhere.

Additionally, since these are time-dependent speeds, do they use real time or simulated time?

Cheers, Simone.

2019-04-07 22:36:21 -0500 received badge  Notable Question (source)
2018-11-20 07:18:42 -0500 received badge  Famous Question (source)
2018-08-18 08:39:20 -0500 received badge  Famous Question (source)
2018-06-21 16:31:55 -0500 marked best answer Why does the nav2d_karto mapper.yaml have particles?

Hi,

I cannot figure out, theoretically, why the Karto mapper provided in the nav2d package also uses particles for mapping. From the basic SLAM theory I've studied, a graph-based SLAM algorithm does not need particles to maintain and compute the pose of the robot, so what are those particles intended for? Can someone quickly sketch the main steps of the Karto algorithm used in this package?

Sorry for the quite simple question,

Regards

2018-06-21 16:31:55 -0500 received badge  Scholar (source)
2018-06-21 12:53:50 -0500 commented answer Why does the nav2d_karto mapper.yaml have particles?

Hi @smac, thanks for replying! So just for me to fully understand, the front end and back end of the Karto are only use

2018-06-21 08:27:00 -0500 asked a question Why does the nav2d_karto mapper.yaml have particles?

Why does the nav2d_karto mapper.yaml have particles? Hi, I cannot figure out theoretically why does the Karto Mapper p

2018-06-07 23:09:38 -0500 received badge  Notable Question (source)
2018-06-07 23:09:38 -0500 received badge  Popular Question (source)
2018-05-20 21:04:12 -0500 received badge  Famous Question (source)
2018-05-09 13:34:19 -0500 received badge  Notable Question (source)
2018-03-27 19:12:14 -0500 received badge  Famous Question (source)
2018-03-19 04:46:45 -0500 marked best answer How to make robot_localization work with AR tags

Hi,

We are working on a project and we have big issues getting robot_localization to work as a global localizer. We are using ROS Kinetic, Gazebo 7.0, a Kinect camera and ekf_localization_node as the filter node. The final goal is to get absolute robot positions out of the filter by fusing odometry sensor data and AR tag distance information coming from ar_track_alvar.

No matter how we feed the filter with the markers' data, the map frame just jumps heavily, and as a consequence the final robot pose is erroneous. When no tags are detected, i.e. the EKF is fed only with odometry, the map frame stays in place, as seen here:

The ekf.param file is shown below:

frequency: 40
sensor_timeout: 1
two_d_mode: true
#transform_time_offset: 0.0
#print_diagnostics: true
debug: false
publish_tf: true
publish_acceleration: false
map_frame: map              # Defaults to "map" if unspecified
odom_frame: odom            # Defaults to "odom" if unspecified
base_link_frame: base_link  # Defaults to "base_link" if unspecified
world_frame: map          # Defaults to the value of odom_frame if unspecified

dynamic_process_noise_covariance: true

odom0: od
odom0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom0_differential: true
odom0_relative: false
odom0_queue_size: 40
odom0_nodelay: false

odom1: marker1
odom1_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]
odom1_differential: false
odom1_relative: false
odom1_queue_size: 40
odom1_nodelay: false

process_noise_covariance: [0.0001, 0,    0,    0,    0,    0, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0.0001, 0,    0,    0,    0, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0.0001, 0,    0,    0, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0.0001, 0,    0, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0.0001, 0, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0.0001, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0.0001, 0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0.0001, 0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0.0001, 0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0,    0.0001, 0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0,    0,    0.0001, 0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0,    0,    0,    0.0001, 0,    0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0,    0,    0,    0,    0.0001, 0,    0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0,    0,    0,    0,    0,    0.0001, 0,
                           0,    0,    0,    0,    0,    0, 0,     0,     0,    0,    0,    0,    0,    0,    0.0001]


initial_estimate_covariance: [1, 0,    0,    0,    0,    0, 0,    0,    0,    0,     0,     0,     0,    0,    0,
                              0,    1, 0,    0,    0,    0, 0,    0,    0,    0,     0,     0,     0,    0,    0,
                              0,    0,    1e-9, 0,    0,    0, 0,    0,    0,    0 ...
(more)
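
To show what we are attempting, here is a rough, untested sketch of how a single marker detection could be turned into an absolute map-frame pose and published on the marker1 topic configured as odom1 above; the tag's map-frame pose, the frame names and the covariance values are placeholders, not our real values:

#!/usr/bin/env python
# Rough sketch: publish the robot pose implied by an AR tag detection as
# nav_msgs/Odometry on "marker1" (the odom1 input above). The tag's pose in
# the map frame, the frame names and the covariance are placeholders.
import rospy
import tf
from tf import transformations as tfs
from nav_msgs.msg import Odometry

def to_matrix(trans, rot):
    return tfs.concatenate_matrices(tfs.translation_matrix(trans),
                                    tfs.quaternion_matrix(rot))

rospy.init_node('marker_to_odom')
listener = tf.TransformListener()
pub = rospy.Publisher('marker1', Odometry, queue_size=10)

# Where the tag really is in the map frame (placeholder pose).
T_map_marker = to_matrix([2.0, 0.0, 0.5], [0.0, 0.0, 0.0, 1.0])

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    try:
        # Detection: pose of the tag in the camera frame (from ar_track_alvar).
        t_cm, r_cm = listener.lookupTransform('camera_link', 'ar_marker_0', rospy.Time(0))
        # Mounting: pose of the camera in the base frame (from the robot description).
        t_bc, r_bc = listener.lookupTransform('base_link', 'camera_link', rospy.Time(0))
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        rate.sleep()
        continue

    # T_map_base = T_map_marker * inv(T_camera_marker) * inv(T_base_camera)
    T_map_base = tfs.concatenate_matrices(T_map_marker,
                                          tfs.inverse_matrix(to_matrix(t_cm, r_cm)),
                                          tfs.inverse_matrix(to_matrix(t_bc, r_bc)))

    odom = Odometry()
    odom.header.stamp = rospy.Time.now()
    odom.header.frame_id = 'map'
    odom.child_frame_id = 'base_link'
    x, y, z = tfs.translation_from_matrix(T_map_base)
    qx, qy, qz, qw = tfs.quaternion_from_matrix(T_map_base)
    odom.pose.pose.position.x, odom.pose.pose.position.y, odom.pose.pose.position.z = x, y, z
    odom.pose.pose.orientation.x = qx
    odom.pose.pose.orientation.y = qy
    odom.pose.pose.orientation.z = qz
    odom.pose.pose.orientation.w = qw
    # Placeholder covariance: some confidence in x, y and yaw only.
    odom.pose.covariance = [0.05 if i in (0, 7, 35) else 0.0 for i in range(36)]
    pub.publish(odom)
    rate.sleep()

The whole sketch hinges on the tag's pose in the map frame being known beforehand, which is the part we may well be getting wrong.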
2018-03-14 08:33:01 -0500 received badge  Popular Question (source)
2018-03-13 08:03:31 -0500 edited question Which units do the linear and angular speeds have in the teleop_twist_keyboard node?

Which units do the linear and angular speeds have in the teleop_twist_keyboard node? Hi, I was wondering the units of li

2018-03-13 08:02:46 -0500 asked a question Which units do the linear and angular speeds have in the teleop_twist_keyboard node?

Which units do the linear and angular speeds have in the teleop_twist_keyboard node? Hi, I was wondering the units of li

2018-03-12 02:27:55 -0500 answered a question How to make robot_localization work with AR tags

Solved Thanks for the reply Mr. Moore, the issue was indeed a missing reference of the fiducial markers to our world po

2018-03-08 10:01:10 -0500 received badge  Notable Question (source)
2018-03-08 02:56:41 -0500 edited question Why does the ar_track_alvar detect the same Markers at different positions?

Why does the ar_track_alvar detect the same Markers at different positions? Hi, I am using ROS Kinetic on a Linux 16.0

2018-03-06 06:10:19 -0500 asked a question Why does the ar_track_alvar detect the same Markers at different positions?

Why does the ar_track_alvar detect the same Markers at different positions? Hi, I am using ROS Kinetic on a Linux 16.0

2018-03-05 08:26:40 -0500 received badge  Popular Question (source)
2018-03-04 15:21:57 -0500 commented question Indoor navigation using odom & beacons - how to fuse it together?

Hi @shreks7 did you manage to solve your issues? I want to do something similar, see my query https://answers.ros.org/qu

2018-03-02 02:44:45 -0500 edited question How to make robot_localization work with AR tags

How to make robot_localization to work with AR tags Hi, We are working on a project and we have big issues using robot_

2018-03-02 02:44:01 -0500 edited question How to make robot_localization work with AR tags

Global localization using robot_localization node and AR tag Hi, We are working on a project and we have big issues usi

2018-03-02 02:42:57 -0500 edited question How to make robot_localization work with AR tags

Global localization using robot_localization node and AR tags Hi, We are working on a project and we have big issues us

2018-03-01 05:31:12 -0500 commented question tf view_frames syntax error

An alternative way to view tf frames is: rosrun rqt_tf_tree rqt_tf_tree. This requires to have this into your workspace

2018-03-01 05:09:43 -0500 edited question How to make robot_localization work with AR tags

Global localization using robot_localization node and AR tags Hi, We are working on a project and we have big issues us

2018-03-01 05:06:01 -0500 asked a question How to make robot_localization work with AR tags

Global localization using robot_localization node and AR tags Hi, We are working on a project and we have big issues us

2018-03-01 03:16:54 -0500 received badge  Popular Question (source)
2018-02-28 16:23:09 -0500 received badge  Notable Question (source)
2018-02-27 07:23:41 -0500 received badge  Popular Question (source)
2018-02-27 04:29:37 -0500 commented question How to invert coordinate frames of a Transform

Hi @tuandl thanks for the comment! Not yet, I do not know the way to code it, I am new to ROS and tf, How would you to t

2018-02-27 04:29:06 -0500 commented question How to invert coordinate frames of a Transform

Hi @tuandl thanks for the comment! Not yet, I do not know the way to code it, I am new to ROS and tf, How would you to th

2018-02-27 04:26:13 -0500 received badge  Commentator
2018-02-27 04:26:13 -0500 commented question How to invert coordinate frames of a Transform

Hi @tuandi thanks for the comment! Not yet, I do not know the way to code it, I am new to ROS and tf, How would you to th

2018-02-27 04:02:52 -0500 edited question How to invert coordinate frames of a Transform

How to invert coordinate frames of a Transform I have a fixed transform coming from the ar_track_alvar pkg, that gives m

2018-02-27 04:02:52 -0500 received badge  Associate Editor (source)
2018-02-27 04:01:41 -0500 asked a question How to invert coordinate frames of a Transform

How to invert coordinate frames of a Transform I have a fixed transform coming from the ar_track_alvar pkg, that gives m

2018-02-23 08:43:52 -0500 commented answer robot_localization: unable to use odometry/filtered as input

Hi, good point, but for my case I want to have distinct map and odom frames :)

2018-02-22 03:38:42 -0500 received badge  Famous Question (source)