
mindthomas's profile - activity

2023-01-18 03:26:56 -0500 received badge  Nice Answer (source)
2022-09-27 13:50:16 -0500 received badge  Taxonomist
2019-05-20 01:30:22 -0500 marked best answer URDF model frames not published properly by robot_state_publisher

Hi.

I'm struggling with getting a URDF (or, to be exact, a XACRO file) working with the robot_state_publisher. I know it was working when I originally made the file, but after updating and coming back to this project I simply cannot get the robot_state_publisher to publish all the frames correctly.

You can find the XACRO file in my repository here: https://github.com/mindThomas/JetsonC...

Try cloning the repository, building it with catkin, and running the rviz launch file. Alternatively, you can run the robot_state_publisher directly on the xacro file output. Running TF view_frames on the published frames gives the following incorrect result:

[Image: TF view_frames output showing the incomplete frame tree]

The problem seems to be related to the revolute or continuous joints; at least, if I replace these joints with fixed joints, the frames are published correctly.

PS. I'm running ROS Kinetic on Ubuntu 16.04.
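
For reference, robot_state_publisher only broadcasts frames for revolute and continuous joints once it receives their positions on the /joint_states topic, so a missing joint_state_publisher can produce exactly this kind of incomplete tree. A minimal placeholder publisher, as a sketch (the joint names are assumptions for illustration, not taken from the repository):

    #!/usr/bin/env python
    # Minimal placeholder joint-state publisher (sketch). The joint names are
    # assumptions and must match the movable joints defined in the URDF/xacro.
    import rospy
    from sensor_msgs.msg import JointState

    rospy.init_node("placeholder_joint_states")
    pub = rospy.Publisher("joint_states", JointState, queue_size=10)
    joint_names = ["front_left_wheel_joint", "front_right_wheel_joint",
                   "rear_left_wheel_joint", "rear_right_wheel_joint"]
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = joint_names
        msg.position = [0.0] * len(joint_names)  # hold every joint at zero
        pub.publish(msg)
        rate.sleep()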

Best regards, Thomas Jespersen

2018-09-23 22:35:50 -0500 received badge  Famous Question (source)
2018-09-07 12:30:10 -0500 received badge  Famous Question (source)
2018-07-16 04:21:12 -0500 received badge  Famous Question (source)
2018-06-26 10:03:14 -0500 received badge  Notable Question (source)
2018-06-01 12:00:06 -0500 received badge  Notable Question (source)
2018-03-27 11:06:10 -0500 edited question RVIZ: Wheel transforms not turning when using joint state controllers

RVIZ: Wheel transforms not turning when using joint state controllers I have an RC car in Gazebo made up of several link

2018-03-27 11:05:07 -0500 commented question RVIZ: Wheel transforms not turning when using joint state controllers

So you mean that it is working as intended? You don't see the problem shown in the screen captures?

2018-03-27 11:04:32 -0500 received badge  Popular Question (source)
2018-03-26 03:52:56 -0500 received badge  Student (source)
2018-03-24 10:20:46 -0500 received badge  Popular Question (source)
2018-03-24 06:41:35 -0500 received badge  Self-Learner (source)
2018-03-24 06:41:35 -0500 received badge  Teacher (source)
2018-03-24 06:41:34 -0500 marked best answer Gazebo model stuck in ground after enabling joint controllers

Hi there.

I am struggling with getting my RC car model to work properly in Gazebo. I have followed both the Gazebo and ROS tutorials on how to create the URDF (XACRO) files and the required Gazebo launch file. With just the joint state publisher I am able to view the model correctly in RViz. But whenever I include the joint controllers and spawn the model in Gazebo, it just spawns inside the ground and without any wheels.

I have uploaded all the files to the following Git repository: https://github.com/mindThomas/JetsonC...

Any clues about what could be wrong? You can run the Gazebo model and see for yourself:

roslaunch jetsoncar_description gazebo.launch

Thank you in advance.

Best regards, Thomas Jespersen

2018-03-24 06:21:43 -0500 asked a question RVIZ: Wheel transforms not turning when using joint state controllers

RVIZ: Wheel transforms not turning when using joint state controllers I have an RC car in Gazebo made up of several link

2018-03-24 06:12:13 -0500 edited answer Gazebo model stuck in ground after enabling joint controllers

I have solved the problem. The problem was related to some inertias being too small, probably causing numerical issues
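
For context, a quick way to sanity-check the inertia values is to compute them from standard rigid-body formulas; the sketch below does this for a wheel modelled as a solid cylinder (mass and dimensions are illustrative assumptions, not values from the repository). Very small inertias relative to the link mass can make the physics solver numerically unstable.

    # Sketch: plausible inertia values for a wheel modelled as a solid cylinder.
    def cylinder_inertia(m, r, h):
        """Diagonal inertia terms for a solid cylinder, symmetry axis along z."""
        ixx = iyy = (1.0 / 12.0) * m * (3.0 * r ** 2 + h ** 2)
        izz = 0.5 * m * r ** 2
        return ixx, iyy, izz

    # Hypothetical wheel: 0.5 kg, 6 cm radius, 3 cm wide.
    mass, radius, width = 0.5, 0.06, 0.03
    ixx, iyy, izz = cylinder_inertia(mass, radius, width)
    print('<inertia ixx="%g" ixy="0" ixz="0" iyy="%g" iyz="0" izz="%g"/>' % (ixx, iyy, izz))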

2018-03-24 06:12:13 -0500 received badge  Editor (source)
2018-03-24 06:11:54 -0500 edited answer Gazebo model stuck in ground after enabling joint controllers

I have solved the problem. The problem was related to some inertias being too small, probably causing numerical issues

2018-03-24 06:11:11 -0500 answered a question Gazebo model stuck in ground after enabling joint controllers

I have solved the problem. The problem was related to some inertias being too small, probably causing numerical issues

2018-03-18 19:50:53 -0500 commented answer Gazebo model stuck in ground after enabling joint controllers

Thanks for your suggestion. Unfortunately even changing all controllers to be just JointVelocityController does not solv

2018-03-18 11:07:12 -0500 asked a question Gazebo model stuck in ground after enabling joint controllers

Gazebo model stuck in ground after enabling joint controllers Hi there. I am struggling with getting my RC Car model to

2018-03-17 04:59:04 -0500 received badge  Notable Question (source)
2018-03-17 04:04:12 -0500 answered a question URDF model frames not published properly by robot_state_publisher

Problem has been solved by uninstalling/removing Anaconda. I don't know how Anaconda managed to mess up the runtime exec

2018-03-13 09:10:33 -0500 commented question URDF model frames not published properly by robot_state_publisher

@delb If I try to run rosrun rqt_tf_tree rqt_tf_tree I get a segmentation fault. @gvdhoorn You are supposed to see both

2018-03-13 02:03:32 -0500 received badge  Popular Question (source)
2018-03-12 10:54:40 -0500 commented question URDF model frames not published properly by robot_state_publisher

Yes, agreed. But why doesn't the robot_state_publisher handle these missing ones (attached to the shafts) when the node

2018-03-12 10:14:06 -0500 asked a question URDF model frames not published properly by robot_state_publisher

URDF model frames not published properly by robot_state_publisher Hi. I'm struggling with getting a URDF (or XACRO file

2017-05-14 23:19:53 -0500 received badge  Famous Question (source)
2017-04-20 03:38:27 -0500 received badge  Popular Question (source)
2017-04-20 03:38:27 -0500 received badge  Notable Question (source)
2017-04-05 02:41:45 -0500 received badge  Enthusiast
2017-03-31 11:55:37 -0500 asked a question SLAM Algorithm to combine IMU measurements with 2D image or Point Cloud?

Hi all.

We are a research group at Aalborg University currently investigating the use of SLAM for indoor positioning of drones. We have decided to use ROS to test out different implementations and to develop our own algorithm and controller.

The short version of our question is:

Which SLAM algorithm would you recommend for localization using a 2D camera or point cloud data from a depth sensor (RealSense), while also including other sensor measurements (IMU + GPS)?


The detailed explanation and reason for this question is given below. In our search for previous work on SLAM where the primary goal is real-time localization (pose estimation), we have not been able to find that much. For real-time localization a lot of work has been put into EKF-SLAM and FastSLAM, but mainly focused on 2D/planar navigation using LiDAR sensors. Otherwise it seems like a lot of research, especially when using other sensors such as cameras and RGB-D sensors, focuses on the mapping portion of SLAM.

In our case we want to focus on the pose estimation and want as reliable and robust a position estimate as possible. We would also like to include other sensor information, such as attitude estimates (roll, pitch, yaw) from the drone's flight controller, in the SLAM problem to enhance the positioning. Furthermore, we have an indoor positioning system capable of delivering a position measurement at a slow rate (~1 Hz), but the measurements are quite noisy and the sensor is likely to drop out in certain areas of the building. Hence we would like to incorporate these measurements as well, but not rely solely on them, which is why we are investigating SLAM in the first place.

Can any of you suggest previous SLAM work or useful paths for us to investigate?

At the moment our plan is defined as follows:

  • Extend FastSLAM to support features in 3D space and estimate 6D pose
  • Use either a 2D camera or the point cloud output of a 3D depth sensor (RealSense, similar to Kinect).
  • Investigate how other sensor information can be incorporated into FastSLAM (see the rough fusion sketch after this list)
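
As a rough illustration of the third point, the sketch below shows the basic fusion pattern in a toy 2D particle filter; it is not the planned algorithm, and every parameter is an illustrative assumption:

    # Toy 2D particle-filter sketch: predict position from IMU/odometry
    # increments at a high rate and correct with a slow, noisy absolute
    # position fix (~1 Hz) when one is available.
    import numpy as np

    N = 500
    particles = np.zeros((N, 2))   # particle x/y positions
    weights = np.ones(N) / N

    def predict(delta_xy, motion_std=0.02):
        """Propagate every particle with an odometry/IMU position increment plus noise."""
        global particles
        particles = particles + delta_xy + np.random.normal(0.0, motion_std, particles.shape)

    def update(position_fix, meas_std=0.3):
        """Re-weight particles against a noisy absolute position measurement."""
        global weights
        d2 = np.sum((particles - position_fix) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / meas_std ** 2) + 1e-300  # avoid all-zero weights
        weights /= np.sum(weights)

    def resample():
        """Resample particles proportionally to their weights to avoid degeneracy."""
        global particles, weights
        idx = np.random.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.ones(N) / N

    def estimate():
        """Weighted mean of the particle cloud as the current position estimate."""
        return np.average(particles, axis=0, weights=weights)

Here predict() would run at the IMU/odometry rate, while update() and resample() only run when a position fix actually arrives; attitude estimates from the flight controller could enter through the motion model in the same way.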

But is this a reasonable plan at all?


Best regards, Thomas Jespersen