Ask Your Question

TSC's profile - activity

2017-07-28 17:18:53 -0600 marked best answer Convert /tf to nav_msgs/Odometry


I want to use robot_pose_ekf to fuse data from an IMU and a visual odometry source. The latter only publishes a /tf between two frames, and I want to get a geometry_msgs/PoseWithCovariance from it.

Is there a proper way of converting a /tf to a nav_msgs/Odometry message?

Thanks in advance!

EDIT: Thanks everyone for the answers! I'm going to address the comments in a single edit. I'm using an Asus Xtion Pro Live with the rgbdslam_v2 package, which publishes a /tf between a fixed frame and a frame of our choosing at a 10 Hz rate. I need this to be an odometry-type message so I can feed it into robot_pose_ekf.

I don't really know if I'm using the right package for the purpose, but I spent a serious amount of time porting it to an ARM platform (an Odroid) and I didn't want to abandon it just because it doesn't give me odometry out-of-the-box. Besides, changing to, for example, ccny-rgbd-tools would probably mean a great amount of work again to get it ported to ARM.

So, can anyone point out how I can get a proper conversion from tf to Odometry? I'm not really a hardcore coder, so pointing out the relevant parts of the code in ccny_rgbd_tools' visual_odometry.cpp would be great.

Thanks again!
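A minimal sketch of what that conversion involves (plain Python dicts stand in for the ROS message types here; a real node would call tf.TransformListener.lookupTransform() and publish an actual nav_msgs/Odometry, and the 0.2 variances below are placeholder assumptions, not values from any package):

```python
# Sketch: map a tf transform (translation + quaternion) onto the fields
# of a nav_msgs/Odometry-shaped structure. tf carries no covariance, so
# a hand-tuned diagonal has to be supplied for robot_pose_ekf to use it.

def tf_to_odometry(translation, rotation, stamp, parent_frame, child_frame):
    """translation: (x, y, z); rotation: quaternion (x, y, z, w)."""
    covariance = [0.0] * 36          # 6x6 row-major pose covariance
    for i in range(6):
        covariance[i * 6 + i] = 0.2  # assumed uncertainty per axis
    return {
        "header": {"stamp": stamp, "frame_id": parent_frame},
        "child_frame_id": child_frame,
        "pose": {
            "pose": {
                "position": dict(zip("xyz", translation)),
                "orientation": dict(zip("xyzw", rotation)),
            },
            "covariance": covariance,
        },
    }

odom = tf_to_odometry((1.0, 2.0, 0.5), (0.0, 0.0, 0.0, 1.0),
                      1404846186.2, "odom", "camera_link")
print(odom["pose"]["pose"]["position"])
```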

2017-04-20 16:27:24 -0600 marked best answer Error in openni_grabber.cpp while installing ROS Groovy from source on Ubuntu 13.10

Hi there,

When running the command ./src/catkin/bin/catkin_make_isolated --install I get the following error:

    Linking CXX executable ../../../bin/pcl_ply2obj
    [  4%] Built target pcl_ply2obj
    Linking CXX executable ../../../bin/pcl_ply2raw
    make[2]: ** [octree/CMakeFiles/pcl_octree.dir/src/octree_impl.cpp.o] Erro 1
    make[1]: ** [octree/CMakeFiles/pcl_octree.dir/all] Erro 2
    [  4%] Built target pcl_ply2raw
    Linking CXX executable ../../../bin/pcl_ply2ply
    [  4%] Built target pcl_ply2ply
    make: ** [all] Erro 2
    <== Failed to process package 'pcl': 
      Command '/home/nuno/ros_catkin_ws/install_isolated/ make -j8 -l8' returned non-zero exit status 2

    Reproduce this error by running:
    ==> cd /home/nuno/ros_catkin_ws/build_isolated/pcl && /home/nuno/ros_catkin_ws/install_isolated/ make -j8 -l8
    Command failed, exiting.

I corrected the above error using the Eigen header file solution; now I get the following:

    /home/nuno/ros_catkin_ws_groovy/src/pcl/common/include/pcl/impl/point_types.hpp:143:9: note: attribute for ‘union pcl::_PointSurfel::<anonymous>’ must follow the ‘union’ keyword
        union { \
    /home/nuno/ros_catkin_ws_groovy/src/pcl/common/include/pcl/impl/point_types.hpp:1264:5: note: in expansion of macro ‘PCL_ADD_NORMAL4D’
        PCL_ADD_NORMAL4D; // This adds the member normal[3] which can also be accessed using the point (which is float[4])
    /home/nuno/ros_catkin_ws_groovy/src/pcl/io/src/openni_grabber.cpp: In member function ‘pcl::PointCloud<pcl::PointXYZ>::Ptr pcl::OpenNIGrabber::convertToXYZPointCloud(const boost::shared_ptr<openni_wrapper::DepthImage>&) const’:
    /home/nuno/ros_catkin_ws_groovy/src/pcl/io/src/openni_grabber.cpp:576:63: error: no matching function for call to ‘boost::shared_array<short unsigned int>::shared_array(int)’
        static boost::shared_array<unsigned short> depth_buffer (0);
    /home/nuno/ros_catkin_ws_groovy/src/pcl/io/src/openni_grabber.cpp:576:63: note: candidates are:
    In file included from /usr/include/boost/shared_array.hpp:17:0,
                     from /home/nuno/ros_catkin_ws_groovy/src/roscpp_serialization/include/ros/serialized_message.h:33,
                     from /home/nuno/ros_catkin_ws_groovy/src/roscpp_serialization/include/ros/serialization.h:36,
                     from /home/nuno/ros_catkin_ws_groovy/devel_isolated/std_msgs/include/std_msgs/Header.h:47,
                     from /home/nuno/ros_catkin_ws_groovy/src/pcl/common/include/pcl/point_cloud.h:45,
                     from /home/nuno/ros_catkin_ws_groovy/src/pcl/io/src/openni_grabber.cpp:42:
    /usr/include/boost/smart_ptr/shared_array.hpp:126:5: note: template<class Y> boost::shared_array<T>::shared_array(const boost::shared_array<Y>&, boost::shared_array<T>::element_type*)
        shared_array( shared_array<Y> const & r, element_type * p ) BOOST_NOEXCEPT : px( p ), pn( )
    /usr/include/boost/smart_ptr/shared_array.hpp:126:5: note:   template argument deduction/substitution failed:
    /home/nuno/ros_catkin_ws_groovy/src/pcl/io/src/openni_grabber.cpp:576:63: note:   mismatched types ‘const boost::shared_array<T>’ and ‘int’
        static boost::shared_array<unsigned short> depth_buffer (0);
    In file included from /usr/include/boost/shared_array.hpp:17:0,
                     from /home/nuno/ros_catkin_ws_groovy/src/roscpp_serialization/include/ros/serialized_message.h:33,
                     from /home/nuno/ros_catkin_ws_groovy/src/roscpp_serialization/include/ros/serialization.h:36,
                     from /home/nuno/ros_catkin_ws_groovy/devel_isolated/std_msgs/include/std_msgs/Header.h:47,
                     from /home/nuno/ros_catkin_ws_groovy/src/pcl/common/include/pcl/point_cloud.h:45,
                     from /home/nuno/ros_catkin_ws_groovy/src/pcl/io/src/openni_grabber.cpp:42:
    /usr/include/boost/smart_ptr/shared_array.hpp:111:5: note: template<class Y> boost::shared_array<T>::shared_array(const boost::shared_array<Y>&, typename boost::detail ...
2016-05-09 20:50:29 -0600 marked best answer Integrating Hector Quadrotor Gazebo Simulation with RGB-D SLAM

Hi there,

I'm trying to integrate RGB-D SLAM with the Hector Quadrotor model simulation in Gazebo. The idea is to launch the model of the quadrotor with a Kinect and use the visual data recovered by the simulated camera in the running RGB-D SLAM. What needs to be done for this to work correctly? Which topics should be remapped in each launch file so that RGB-D SLAM can receive data from the simulated Gazebo Kinect?

Thank you! Best regards

2016-05-08 15:22:50 -0600 marked best answer Uncontrollable quadrotor_with_asus in Gazebo - hector_quadrotor

Hi there,

I'm trying to get a simulation working with hector_quadrotor, using the URDF model that has only an Asus Xtion camera. I adapted the pr2_teleop code to have altitude control of the quad in the Gazebo simulation (through Twist, of course, sending /cmd_vel).

It appears to work well with, for example, the quad with the Hokuyo only, but when I spawn any quadrotor with the Asus and give it a throttle command (which influences the vertical linear velocity), it tips over and rolls straight ahead without my being able to control it... then it crashes, of course. With the Hokuyo only, for example, that doesn't happen.

I thought there could be a substantial difference between the two URDFs (by the way, I converted from xacro to URDF using the xacro tool), but the only difference I found is the sensor being used, of course. Here are the propulsion model parameters for both quads:

k_m     = -7.01163e-05
k_t     = 0.0153369
CT2s    = -0.013077
CT1s    = -0.00025224
CT0s    = 1.53819e-05
Psi     = 0.00724218
J_M     = 2.57305e-05
R_A     = 0.201084
l_m     = 0.275
alpha_m = 0.104864
beta_m  = 0.549262

and the quadrotor drag model parameters for both quads also:

C_wxy = 0.12
C_wz = 0.1
C_mxy = 0.0741562
C_mz = 0.0506433

Any tips on finding the problem? Thanks in advance!

2016-05-08 15:20:38 -0600 marked best answer Segfault on roslaunch rgbdslam - PCL 1.7 related

I'm trying to run rgbdslam (v2 with the proper adaptations) on Ubuntu 13.10, with ROS Indigo and PCL 1.7. Running gdb to backtrace the reason the launch crashes, I get the following log:

Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
Reading symbols from /home/nuno/AIMAVProject_hydro_ws/devel/lib/rgbdslam/rgbdslam...(no debugging symbols found)...done.
Starting program: /home/nuno/AIMAVProject_hydro_ws/devel/lib/rgbdslam/rgbdslam __name:=rgbdslam __log:=/home/nuno/.ros/log/243e0bb4-ff85-11e3-ac9d-0090f5eb9422/rgbdslam-2.log
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff7ffa000
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/".
[New Thread 0x7fffd3b3c700 (LWP 4859)]
[New Thread 0x7fffd333b700 (LWP 4860)]
[New Thread 0x7fffd2b3a700 (LWP 4861)]
[New Thread 0x7fffd2339700 (LWP 4862)]
[New Thread 0x7fffd1b38700 (LWP 4863)]
[New Thread 0x7fffd1337700 (LWP 4864)]
[New Thread 0x7fffd0b36700 (LWP 4865)]
[New Thread 0x7fffd0335700 (LWP 4866)]

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff75defa0 in boost::math::lanczos::lanczos_initializer<boost::math::lanczos::lanczos17m64, long double>::init::init() ()
   from /usr/local/lib/
(gdb) bt
#0  0x00007ffff75defa0 in boost::math::lanczos::lanczos_initializer<boost::math::lanczos::lanczos17m64, long double>::init::init() ()
   from /usr/local/lib/
#1  0x00007ffff75595f9 in _GLOBAL__sub_I_hdl_grabber.cpp ()
   from /usr/local/lib/
#2  0x00007ffff7de9856 in call_init (l=l@entry=0x7ffff7fbd4e8, 
    argc=argc@entry=3, argv=argv@entry=0x7fffffffd968, 
    env=env@entry=0x7fffffffd988) at dl-init.c:84
#3  0x00007ffff7de9910 in call_init (env=<optimized out>, 
    argv=<optimized out>, argc=<optimized out>, l=0x7ffff7fbd4e8)
    at dl-init.c:55
#4  _dl_init (main_map=0x7ffff7ffe268, argc=3, argv=0x7fffffffd968, 
    env=0x7fffffffd988) at dl-init.c:133
#5  0x00007ffff7ddb66a in _dl_start_user () from /lib64/
#6  0x0000000000000003 in ?? ()
#7  0x00007fffffffdd42 in ?? ()
#8  0x00007fffffffdd7f in ?? ()
#9  0x00007fffffffdd90 in ?? ()
#10 0x0000000000000000 in ?? ()
(gdb) continue
[Thread 0x7fffd1337700 (LWP 4864) exited]
[Thread 0x7fffd0335700 (LWP 4866) exited]
[Thread 0x7fffd333b700 (LWP 4860) exited]
[Thread 0x7ffff7f5fac0 (LWP 4854) exited]
[Thread 0x7fffd0b36700 (LWP 4865) exited]
[Thread 0x7fffd1b38700 (LWP 4863) exited]
[Thread 0x7fffd2b3a700 (LWP 4861) exited]
[Thread 0x7fffd3b3c700 (LWP 4859) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.

Any tips on how to resolve this? Thanks in advance

2016-05-08 15:13:23 -0600 marked best answer RGBDSLAM Indigo support

Hi to all,

Well, after having some conversations with @FelixEndres, and since I need RGBDSLAM working on ROS Indigo so I can run it on an Odroid with Ubuntu 13.10, I decided to try migrating the source code to Indigo, which will certainly be useful to others to come.

Some of the initial problems were corrected by downloading and compiling g2o and OpenCV 2.4.9 from source, so that rgbdslam can detect and use them. But then I got some errors in the code that should be resolved by adapting it to the new OpenCV architecture, and I will need some help with those, since Felix is busy with his PhD and I need this for my MSc thesis. So, help from someone with knowledge of OpenCV functions and headers would be great.

So, when running catkin_make on my workspace I get the following list of errors:

[ 77%] [ 78%] [ 79%] [ 79%] Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/openni_listener.o
Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/node.o
Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/misc.o
Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/sift_gpu_wrapper.o
In file included from /home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp:18:0:
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.h:49:40: error: ‘cv::vector’ has not been declared
  void detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const cv::Mat& mask = cv::Mat()) const;
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.h:49:46: error: expected ‘,’ or ‘...’ before ‘<’ token
  void detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const cv::Mat& mask = cv::Mat()) const;
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp:108:55: error: ‘cv::vector’ has not been declared
 void SiftGPUWrapper::detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const Mat& mask) const {
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp:108:61: error: expected ‘,’ or ‘...’ before ‘<’ token
 void SiftGPUWrapper::detect(const cv::Mat& image, cv::vector<cv::KeyPoint>& keypoints, std::vector<float>& descriptors, const Mat& mask) const {
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp: In member function ‘void SiftGPUWrapper::detect(const cv::Mat&, int) const’:
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp:111:9: error: ‘keypoints’ was not declared in this scope
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp:133:9: error: ‘descriptors’ was not declared in this scope
         descriptors.resize(128 * num_features);
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/sift_gpu_wrapper.cpp:141:5: error: ‘keypoints’ was not declared in this scope
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/misc.cpp:370:40: error: ‘string’ does not name a type
 FeatureDetector* createDetector( const string& detectorType ) 
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/misc.cpp:370:48: error: ISO C++ forbids declaration of ‘detectorType’ with no type [-fpermissive]
 FeatureDetector* createDetector( const string& detectorType ) 
/home/nuno/AIMAVProject_ws/src/rgbdslam_v2/src/misc.cpp: In function ‘cv ...
2016-04-30 21:12:47 -0600 received badge  Nice Question (source)
2016-04-18 10:07:37 -0600 received badge  Nice Question (source)
2015-08-28 13:48:57 -0600 commented question How to integrate tesseract-ocr library in a catkin project?

Hi @sirasistant! Like @Kishore Kumar, I'm also interested in your findings. Are you able to share what you have right now? Thanks in advance!

2015-05-17 01:31:17 -0600 received badge  Good Question (source)
2015-05-03 07:36:58 -0600 received badge  Famous Question (source)
2015-04-20 11:45:08 -0600 commented question Low frequency processing on fovis_ros

@AlexR the problem was never solved!

2015-04-19 21:27:33 -0600 received badge  Nice Question (source)
2015-04-14 09:40:53 -0600 received badge  Nice Question (source)
2015-03-18 12:49:58 -0600 commented answer RGBDSLAM Indigo support

@kbalisciano just the VoxelGrid? What FeatureExtractor and Matcher are you using? ORB? BTW, what's the Ubuntu release? Are you working with drones, too?

2015-03-18 12:36:19 -0600 commented answer RGBDSLAM Indigo support

I have some problems with it on my laptop too (Core i7, 32 GB RAM, SSD, ...), but I'm running it in a VM (I don't know if that makes a substantial difference). Well, you say it handles well; can you please share your RGBDSLAM files with the optimizations? My e-mail is: n.marques21<at>hotmail<dot>com. Thanks!

2015-03-18 11:27:25 -0600 commented answer RGBDSLAM Indigo support

"Works fine" is not really the case; maybe you mean "compiles fine". Last year, for my thesis, I was also able to compile it on ROS Hydro, but it is not really usable: transform publishing rates are unbearable (below 0.1 Hz), which rules it out for the purpose of my thesis (vision-based navigation).

2015-03-18 10:22:19 -0600 commented question RGBDSLAM Indigo support

No luck @kbalisciano. I started using other frameworks since RGBDSLAM stagnated in the Hydro version.

2015-02-24 10:21:37 -0600 received badge  Popular Question (source)
2015-02-24 10:21:37 -0600 received badge  Notable Question (source)
2015-02-24 10:21:37 -0600 received badge  Famous Question (source)
2015-01-31 13:21:38 -0600 received badge  Famous Question (source)
2015-01-21 12:39:34 -0600 commented question Low frequency processing on fovis_ros

Hi @BenediktHeck, not at all! I reported the problem in the fovis GitHub repo and the dev team didn't put in enough effort to solve it. So, I left this framework and moved on to another. I advise you to do the same.

2014-11-21 13:58:21 -0600 received badge  Notable Question (source)
2014-10-24 01:16:48 -0600 received badge  Famous Question (source)
2014-10-15 15:01:12 -0600 received badge  Popular Question (source)
2014-09-11 07:05:27 -0600 asked a question RGBDSLAM - Slow transform publishing

Hi there,

I have the RGBD SLAM v2 package ( ) running on a Core i7 computer with an Nvidia GTX 780 graphics card. The computer runs Ubuntu 12.04.5 and ROS Hydro.

I want to use the package for robot localization, but it seems I'm getting too low a publishing rate for the transform.

In my launch file I have:

<node pkg="tf" type="static_transform_publisher" name="map_to_local_origin" output="screen" args="0 0 0 0 0 0 /map /local_origin 10" />
<node pkg="tf" type="static_transform_publisher" name="mav_to_camera" output="screen" args="-0.05 0.15 0.8 0 1.570796 0 vision camera_link 10" />

which are transforms that I need to exist for sending the pose estimation for the robot. I also have:

    <!-- TF information settings -->
    <param name="config/fixed_frame_name"              value="/map"/>
    <param name="config/ground_truth_frame_name"       value=""/><!--empty string if no ground truth-->
    <param name="config/base_frame_name"               value="vision"/> <!-- /openni_camera for hand-held kinect. For robot, e.g., /base_link -->
    <param name="config/fixed_camera"                  value="false"/> <!--is the kinect fixed with respect to base, or can it be moved (false makes sense only if transform betw. base_frame and openni_camera is sent via tf)-->

where 'vision' is considered the base of the robot. Till now, everything is OK. The problem comes when I run the package and start processing. Using tf view_frames, I can check the following in the tf tree:

Broadcaster: /rgbdslam
Average rate: 10000.000 Hz
Most recent transform: 1409946186.276
Buffer length: 0.000 sec

which for sure is not right, given that, as said above, the transform is supposedly being sent at a 10 Hz rate. I know it can be less, but it can't be as low as the following (issuing 'rosrun tf tf_monitor'):

Node: /rgbdslam 0.202653 Hz, Average Delay: 1.97535 Max Delay: 11.3705

Which is extremely slow! Does someone have a tip on what is happening, or why it is so slow?

Thanks in advance!
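For context, view_frames' "Average rate: 10000.000 Hz" over a zero-length buffer is a degenerate statistic; the meaningful figure is the one tf_monitor derives from the spacing between transform timestamps, roughly as in this sketch (the timestamps below are made up to illustrate a ~0.2 Hz stream):

```python
def average_rate(stamps):
    """Estimate a publish rate (Hz) from a sorted list of timestamps (s)."""
    if len(stamps) < 2:
        return float("inf")  # a single sample gives no spacing: degenerate
    span = stamps[-1] - stamps[0]
    return (len(stamps) - 1) / span

# one transform roughly every 5 seconds -> about 0.2 Hz
stamps = [0.0, 4.9, 10.1, 14.8, 20.0]
print(round(average_rate(stamps), 2))
```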

2014-08-27 12:29:53 -0600 received badge  Famous Question (source)
2014-08-26 13:45:29 -0600 commented answer Is there a way to save all rosbag data into a .csv or text file ?

Hi there! Is the other way around possible, i.e., converting from a .txt to a .bag? Thanks in advance!

2014-08-17 03:24:43 -0600 received badge  Famous Question (source)
2014-08-08 11:13:46 -0600 received badge  Nice Answer (source)
2014-08-08 11:12:41 -0600 received badge  Notable Question (source)
2014-08-08 11:12:41 -0600 received badge  Famous Question (source)
2014-08-03 20:52:19 -0600 received badge  Famous Question (source)
2014-08-01 09:45:33 -0600 asked a question Low frequency processing on fovis_ros

Hi there,

I'm trying mono_depth_odometer with a changed topic: /camera/rgb/image_rect_color instead of /camera/rgb/image_rect_mono, since I'm using an Asus Xtion Pro Live with OpenNI2 and can't get any mono topic.

The thing is, I have a Haswell i7 3.4 GHz computer running Ubuntu 12.04 with ROS Hydro and I'm still getting the tf published at only 2-3 Hz! :S When echoing the pose, I also see that the command-line echo log seems very slow.

Here's the tf tree:

(tf tree image)

Is there a possible explanation for this? Supposedly this is a "fast" odometry framework, yet it's turning out to be slower than the other packages I have here. I'll be using this on a MAV with an embedded ARM board, but first I want to test it on my laptop. If I get these results on my laptop, I will probably get even worse ones on an ARM platform.

Thanks in advance!

2014-07-23 15:58:07 -0600 received badge  Notable Question (source)
2014-07-14 02:56:57 -0600 received badge  Notable Question (source)
2014-07-14 02:46:55 -0600 received badge  Famous Question (source)
2014-07-14 02:42:17 -0600 received badge  Popular Question (source)
2014-07-10 13:21:50 -0600 received badge  Notable Question (source)
2014-07-10 05:34:49 -0600 received badge  Popular Question (source)
2014-07-09 07:25:21 -0600 commented question Convert /tf to nav_msgs/Odometry

0.2, 0.2, 999999, 999999, 999999, 0.2 doesn't seem Gaussian, but OK. Anyway, given that I need 6DOF motion, giving 0.2 instead of 999999 to the other values seems reasonable, right?
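As a sketch (assuming the usual 6x6 row-major pose covariance layout over x, y, z, roll, pitch, yaw), the two variants being discussed could be built like this; the specific numbers are the placeholder values from the comment, not tuned constants:

```python
def diag_covariance(variances):
    """Build a 36-element row-major covariance for a 6-DOF pose
    (x, y, z, roll, pitch, yaw) from six diagonal variances."""
    assert len(variances) == 6
    cov = [0.0] * 36
    for i, v in enumerate(variances):
        cov[i * 6 + i] = v  # diagonal entry (i, i)
    return cov

# Planar robot: a huge variance on z/roll/pitch tells the filter to
# effectively ignore those axes
planar = diag_covariance([0.2, 0.2, 999999, 999999, 999999, 0.2])

# 6-DOF platform (e.g. a UAV): give every axis a finite variance
full_6dof = diag_covariance([0.2] * 6)
```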

2014-07-09 06:58:33 -0600 commented question Convert /tf to nav_msgs/Odometry

Maybe a normal distribution for each matrix value?

2014-07-09 06:17:14 -0600 commented answer combine visual odometry data from ccny_rgbd with encoders/IMU

Hi @goetz.marc, does this fix also work for a case where I have 6DOF motion, like a UAV? Can you please tell me what each matrix diagonal value is? The first two are cov_x, cov_y and the last is rcov_z. What about the others? Are they cov_z, rcov_x, rcov_y?

2014-07-09 06:12:30 -0600 commented question Convert /tf to nav_msgs/Odometry

@dornhege can you please point me to a code sample for hard-coding a good covariance matrix? I'm following the dirty fix in: But it doesn't seem to be good in my case (since I will have 6DOF motion). Tkx

2014-07-09 06:06:46 -0600 marked best answer /tf from /base_link to /camera_link


I'm using ccny_rgbd_tools, which gives me a transform between /camera_link (the base_frame) and /odom (the fixed_frame).

I'm using MoveIt!, where I have a URDF model with a camera_link and a base_link, which supposedly already gives a static tf between them. I have a virtual_joint between camera_link and odom (which is floating, since it is for a quadrotor).

Now running demo.launch (without the static_transform_publisher between camera_link and odom) I get this on the command line:

[ WARN] [1404846265.951041568]: No transform available between frame 'odom' and planning frame '/base_link' (Could not find a connection between 'base_link' and 'odom' because they are not part of the same tree.Tf has two or more unconnected trees.)

I have a /tf between odom and camera_link, and a /tf from base_link to camera_link, which builds up a tree. Why does it need a /tf between base_link and odom?

Thanks in advance!
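One way to see the problem: tf allows each frame exactly one parent, so odom→camera_link and base_link→camera_link cannot coexist, and the frames end up in two disconnected trees. A toy model of that single-parent rule (simplified: in real tf the competing broadcasts interleave rather than one cleanly replacing the other):

```python
def tf_forest_count(edges):
    """tf gives each frame exactly one parent: a later (parent, child)
    edge replaces any earlier parent for the same child. Count the
    disjoint trees that result."""
    parent_of = {}
    for parent, child in edges:
        parent_of[child] = parent          # later edges win
    frames = {f for edge in edges for f in edge}

    def root(f):
        seen = set()                       # guard against cycles
        while f in parent_of and f not in seen:
            seen.add(f)
            f = parent_of[f]
        return f

    return len({root(f) for f in frames})

# camera_link cannot keep two parents: the odom edge is overridden,
# leaving odom in its own tree -> 2 trees, hence the MoveIt warning
edges = [("odom", "camera_link"), ("base_link", "camera_link")]
print(tf_forest_count(edges))
```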

2014-07-09 06:06:46 -0600 edited answer /tf from /base_link to /camera_link

The ccny pkg is working properly, and now it's working in Rviz also.

What I did was use robot_pose_ekf, which receives the data from /vo and publishes the tf to base_footprint. I added a static transform between base_footprint and base_link, and now I have the URDF position being updated in Rviz.

It kinda lags and sometimes has some jumps. I will probably have to add the IMU info as well. Anyway, I would like to try the hector_localization package too, to fuse other sensors, but I'm facing some difficulties due to the lack of documentation!
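For the record, chaining the EKF's odom→base_footprint transform with a static base_footprint→base_link transform is just transform composition. A self-contained sketch of the underlying quaternion math (this is the general formula, not code from any of the packages above):

```python
def q_mul(a, b):
    """Hamilton product of quaternions in (x, y, z, w) order."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * v * q^-1."""
    x, y, z, w = q
    qv = (v[0], v[1], v[2], 0.0)
    conj = (-x, -y, -z, w)
    rx, ry, rz, _ = q_mul(q_mul(q, qv), conj)
    return (rx, ry, rz)

def compose(t1, q1, t2, q2):
    """Chain parent->middle (t1, q1) with middle->child (t2, q2)."""
    t = tuple(a + b for a, b in zip(t1, rotate(q1, t2)))
    return t, q_mul(q1, q2)

# With identity rotations the translations simply add up
t, q = compose((1.0, 0.0, 0.2), (0, 0, 0, 1), (0.0, 0.0, 0.1), (0, 0, 0, 1))
```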