
charles.fox's profile - activity

2021-08-01 03:21:11 -0500 received badge  Notable Question (source)
2021-08-01 03:21:11 -0500 received badge  Famous Question (source)
2020-06-17 00:16:37 -0500 received badge  Necromancer (source)
2020-06-11 08:40:07 -0500 received badge  Famous Question (source)
2020-04-11 05:11:53 -0500 received badge  Famous Question (source)
2020-02-12 16:55:16 -0500 received badge  Popular Question (source)
2020-02-12 16:55:16 -0500 received badge  Notable Question (source)
2020-02-12 16:55:16 -0500 received badge  Famous Question (source)
2018-09-24 12:13:42 -0500 marked best answer robot_localization can't locate node - jade upgrade

Hi, I have recently upgraded from Indigo to Jade and installed the Jade robot_localization package via apt (on Ubuntu).

When I do

roslaunch robot_localization ekf_template.launch

I get

ERROR: cannot launch node of type [robot_localization/ekf_localization_node]: can't locate node [ekf_localization_node] in package [robot_localization]

or if I do

rosrun robot_localization ekf_localization_node

I get

[rosrun] Couldn't find executable named ekf_localization_node below /opt/ros/jade/share/robot_localization

I can see the package is installed:

  $ ls /opt/ros/jade/share/robot_localization/
  cmake  launch  LICENSE  package.xml  srv

and it shows in rospack list.

Any ideas?
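For background on what the error means: rosrun resolves the package directory and then searches below it for a file with the executable bit set. A minimal Python sketch of that kind of search (a hypothetical helper for illustration, not rosrun's actual implementation; note that for apt-installed catkin packages the node binaries typically land under the distro's lib/ tree rather than share/, which is one reason such a lookup can come up empty):

```python
import os

def find_executables(root):
    """Walk a package directory tree and collect regular files with
    the executable bit set, roughly what a rosrun-style lookup does."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and os.access(path, os.X_OK):
                found.append(path)
    return found

# Illustrative check of both install trees a ROS package may use
# (paths are examples; adjust the distro name as needed).
for root in ("/opt/ros/jade/share/robot_localization",
             "/opt/ros/jade/lib/robot_localization"):
    if os.path.isdir(root):
        print(root, "->", find_executables(root))
```

If the lib/ directory exists but contains no node binaries, the installed package itself is missing its executables, which points at a packaging problem rather than a local configuration one.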

2018-04-02 08:20:01 -0500 commented question ROS callback not working from OpenGL spinOnce (glut)

I'm guessing it's some kind of thread scoping issue, with the glut thread not able to see the same message queue as the

2018-04-02 08:00:27 -0500 commented question ROS callback not working from OpenGL spinOnce (glut)

Especially weird is that it works if I call spinOnce myself in the main loop, like the below (but then I get no graphics).

2018-04-02 08:00:20 -0500 commented question ROS callback not working from OpenGL spinOnce (glut)

Especially weird is that it works if I call spinOnce myself in the main loop, like the below (but then I get no graphics).

2018-04-02 07:58:52 -0500 commented question ROS callback not working from OpenGL spinOnce (glut)

Especially weird is that it works if I call spinOnce myself in the main loop, like the below (but then I get no graphics).

2018-04-02 07:57:29 -0500 answered a question ROS callback not working from OpenGL spinOnce (glut)

Especially weird is that it works if I call spinOnce myself in the main loop, like the below (but then I get no graphics).
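The symptom is consistent with how roscpp delivers messages: subscriber callbacks only run when something services the callback queue (ros::spin, ros::spinOnce, or an AsyncSpinner), so if glut owns the main loop and nothing pumps the queue, no callback ever fires. A toy Python model of that pumping pattern (illustrative only; no ROS API is used here):

```python
class CallbackQueue:
    """Toy model: messages accumulate until someone pumps the queue,
    mirroring how roscpp callbacks wait for spin()/spinOnce()."""
    def __init__(self):
        self._pending = []
        self._callbacks = []

    def subscribe(self, cb):
        self._callbacks.append(cb)

    def publish(self, msg):
        # Arrival alone does NOT invoke callbacks.
        self._pending.append(msg)

    def spin_once(self):
        # Only here do the queued callbacks actually run.
        for msg in self._pending:
            for cb in self._callbacks:
                cb(msg)
        self._pending = []

received = []
q = CallbackQueue()
q.subscribe(received.append)
q.publish("scan_1")
q.publish("scan_2")
# Nothing delivered yet: the "main loop" has not pumped the queue.
print(received)   # []
q.spin_once()     # e.g. called from a glut idle/timer callback
print(received)   # ['scan_1', 'scan_2']
```

In a glut program the usual fix along these lines is to call ros::spinOnce() from a glutIdleFunc or glutTimerFunc callback, so that both the graphics loop and the ROS callback queue get serviced.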

2018-04-02 07:54:50 -0500 edited question ROS callback not working from OpenGL spinOnce (glut)

ROS callback not working from OpenGL (glut) I'm trying to write a simple OpenGL/glut viewer program which subscribes to

2018-04-02 07:53:49 -0500 asked a question ROS callback not working from OpenGL spinOnce (glut)

ROS callback not working from OpenGL (glut) I'm trying to write a simple OpenGL/glut viewer program which subscribes to

2017-12-27 19:02:19 -0500 received badge  Famous Question (source)
2017-05-25 08:30:05 -0500 received badge  Nice Question (source)
2017-05-09 23:18:48 -0500 received badge  Necromancer (source)
2017-05-09 23:18:48 -0500 received badge  Teacher (source)
2017-04-19 08:09:14 -0500 received badge  Notable Question (source)
2017-04-13 11:12:14 -0500 received badge  Popular Question (source)
2017-04-13 08:04:41 -0500 asked a question Self-drive car PhD studentship - ROS - Leeds UK - fully funded

May be of interest to members of this forum: we have funding available for someone to work with, and perhaps on, ROS in the context of self-driving cars. Applications from ROS forum members with strong track records of being helpful are particularly welcome.


Fully-funded PhD available - University of Leeds, UK

SELF-DRIVING CARS – robotics, machine vision, human interactions

https://lnkd.in/g7PeKZ9

Comes with enhanced salary, training grant expenses, access to vehicles, data and software. (Funding is for UK/EU/EEA students only)

Self-driving pods are small, electric, shared, autonomous person transporter vehicles which may form a solution to “last mile” transportation, with commuters travelling by train into a city then transferring to a pod to reach their workplace. Such vehicles have recently been used for demonstrations in a number of cities.

We have previously worked with self-driving minibus-like vehicles in the EU CityMobil2 project, including delivering road user detection systems ( www.citymobil2.eu/en/ ). We also led the InnovateUK IBEX2 autonomous agriculture vehicle project, delivering localisation, planning and vision systems ( www.ibexautomation.co.uk ).

As part of newly funded projects in this area, there is currently full funding available for a PhD position to extend the work in related areas including:

1) Construction of new hardware and software systems for pod vehicles. Using SLAM algorithms, lidar and machine vision sensors, mapping and planning systems; which may also include mechanical and electronics work on the vehicles themselves;

2) Use of the pod to study live and recorded human-vehicle interactions with pedestrians and other road users. Machine vision methods for road user detection and novel scene analysis / event detection and prediction algorithms to understand road user behaviour and potential safety threats, such as who is likely to pull out in front of the vehicle. This work may include models from Psychology such as crowd behaviour and game theory, as well as agent-based micro-simulation, and statistical / big-data analysis of results.

ENTRY REQUIREMENTS:

Required skills (to be evidenced by CV, references and interview):

- Very strong programming and applied maths skills (e.g. from industry experience, portfolio, open-source contributions, or academic qualifications)
- Knowledge and experience of robotics, machine vision, and/or human behaviour / game theory / HCI interaction modelling (e.g. from academic qualifications, industry experience, portfolio, or open-source contributions)
- Ability to work well in a technical team (e.g. strong recommendation letters, social networks, or a documented open-source history)

Desirable/optional skills:

- Experience publishing academic or industrial papers, or open-source projects
- Linux, Python, C++, ROS, OpenCV, git, PCL, SQL, SGE/Hadoop/Spark
- Understanding the behaviour of pedestrians or crowds on the road
- Electronics, mechanics, vehicle maintenance, DIY, welding, Arduino, hardware hacking

Further information about entry requirements can be found here: http://www.its.leeds.ac.uk/courses/ph...

HOW TO APPLY:

Please send a CV and a short ‘statement of motivation’ to Dr Charles Fox ( C.W.Fox@leeds.ac.uk ). Further information will then be provided. Dr Fox is also available for informal consultation if you would like to find out more ...

2017-03-22 06:58:10 -0500 received badge  Famous Question (source)
2016-10-30 03:49:47 -0500 received badge  Notable Question (source)
2016-10-30 03:49:47 -0500 received badge  Famous Question (source)
2016-10-02 13:58:33 -0500 received badge  Notable Question (source)
2016-09-20 09:56:54 -0500 marked best answer navsat_transform_node has zero orientation output?

I'm working with a robot that has no wheel odometry measurements; I have IMU, GPS and motor commands to fuse. Can anyone clarify how the setup should work for this?

At the moment I have GPS, IMU and motor command odometry going into a navsat_transform_node, which outputs odometry messages. I'm planning to fuse them with a second copy of the motor commands with an ekf_localization node.

My problem is: the odometry/gps messages coming out of the navsat_transform_node contain zero for every orientation component. I thought the IMU data was being used to set these (and I've checked that the incoming IMU messages are non-zero).

Am I doing something wrong, or misinterpreting what navsat_transform_node is doing? Perhaps the IMU, like the motor command odometry, is used only to set the initial state and is ignored the rest of the time? Even then, it would be unlikely for the orientation to be exactly 0 at the start. Do I still need to fuse odometry/gps with the IMU again to get the desired effect, and if so, why?

Thanks for any insights!

Sample odometry/gps message below:

  position:
    x: 0.806593844667
    y: -4.15517381765
    z: -0.674990490428
  orientation:
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0
  covariance: [1.259293318748, 0.03294127204511593, 0.41883145581259457, 0.0, 0.0, 0.0,
               0.03294127204511593, 1.232013681194764, 0.2798927172566257, 0.0, 0.0, 0.0,
               0.41883145581259457, 0.2798927172566257, 4.768693000057238, 0.0, 0.0, 0.0,
               0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
               0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
               0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  twist:
    twist:
      linear:
        x: 0.0
        y: 0.0
        z: 0.0
      angular:
        x: 0.0
        y: 0.0
        z: 0.0
    covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
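One quick sanity check for this symptom is to decode the quaternion in the message to a yaw angle by hand: the identity quaternion (x = y = z = 0, w = 1) in the sample above always decodes to zero yaw, so it really is reporting "no rotation" rather than noisy data. A minimal sketch using only standard-library math (the 90-degree example values are illustrative):

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Yaw (rotation about Z) from a unit quaternion, using the
    standard ZYX Euler-angle extraction formula."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

# Identity quaternion, as in the sample message: yaw is exactly 0.
print(quaternion_to_yaw(0.0, 0.0, 0.0, 1.0))   # 0.0

# A quaternion for a 90-degree turn about Z decodes to ~pi/2.
s = math.sin(math.pi / 4.0)
c = math.cos(math.pi / 4.0)
print(quaternion_to_yaw(0.0, 0.0, s, c))
```

Running the same conversion on the raw IMU topic would confirm whether non-trivial headings are arriving at the node at all.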

2016-09-08 09:44:22 -0500 received badge  Famous Question (source)
2016-08-03 06:39:31 -0500 received badge  Famous Question (source)
2016-07-08 23:43:57 -0500 received badge  Famous Question (source)
2016-05-08 14:53:36 -0500 marked best answer hokuyo dies in ros-gazebo, works in gazebo?

I've just upgraded to ROS Jade and Gazebo 5.0 and am having problems with my robot model.

When I take out my lidar node, my world works with both $ gazebo myworld.world and $ roslaunch mylaunch.launch, where mylaunch wraps myworld.world. But when I add my hokuyo lidar (as used to work in Indigo), it works with $ gazebo myworld.world but segfaults when I run Gazebo through roslaunch. Any ideas?

Here's what I add to the SDF for the lidar:

  <include>
    <uri>model://hokuyo</uri>
    <pose>0.45 0 0.45 0 0.15 0</pose>
  </include>
  <joint name="hokuyo_joint" type="revolute">
    <child>hokuyo::link</child>
    <parent>chassis</parent>
    <axis>
      <xyz>0 0 1</xyz>
      <limit>
        <upper>0</upper>
        <lower>0</lower>
      </limit>
    </axis>
  </joint>

2016-05-08 14:50:55 -0500 received badge  Notable Question (source)
2016-05-08 14:50:55 -0500 received badge  Popular Question (source)
2016-04-12 14:05:39 -0500 received badge  Famous Question (source)
2016-04-12 14:05:39 -0500 received badge  Notable Question (source)
2016-04-04 14:41:10 -0500 received badge  Famous Question (source)
2016-03-17 14:58:49 -0500 received badge  Popular Question (source)
2016-02-05 07:55:24 -0500 received badge  Notable Question (source)
2016-02-04 14:12:44 -0500 marked best answer ekf_localization_node: what frame for IMU?

Hi there, I am trying to fuse GPS with IMU information with ekf_localization_node. For now I have tied my map and odom frames to always be the same, so I assume the GPS is giving absolute map positions and report them in the map frame. I am confused about the IMU though: its heading estimate should be in the map frame, as it is absolute, but its motion estimates should be in the base_link frame, because they are relative to the robot. The IMU message only has a single frame_id, so which should I set it to? Or do I perhaps need to send two copies of the IMU message with two different frames, and set up ekf_localization_node to attend only to the relevant measurements in each?
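The distinction in the question can be made concrete with a small example: heading is meaningful in a world-fixed frame, while IMU accelerations are measured in the body (base_link) frame and must be rotated by the current yaw before they can be compared with world-frame quantities. A sketch of that rotation in 2D (plain math for illustration, not the robot_localization API):

```python
import math

def body_to_world(vx, vy, yaw):
    """Rotate a 2D body-frame vector (e.g. an IMU acceleration
    expressed in base_link) into the world frame, given the
    robot's current yaw."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * vx - s * vy, s * vx + c * vy)

# Robot facing along world x (yaw = 0): body x IS world x.
print(body_to_world(1.0, 0.0, 0.0))            # (1.0, 0.0)

# Robot turned 90 degrees (yaw = pi/2): body x becomes world y,
# so the same "forward" acceleration means something different
# in the world frame.
print(body_to_world(1.0, 0.0, math.pi / 2.0))
```

This is why a single frame_id cannot be right for both kinds of IMU data at once: the heading defines the rotation, while the accelerations are the vectors being rotated.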

2016-01-13 09:16:18 -0500 received badge  Famous Question (source)