
WillAndrew's profile - activity

2019-10-21 14:09:22 -0500 received badge  Famous Question (source)
2019-02-20 15:24:33 -0500 received badge  Notable Question (source)
2018-07-04 15:24:44 -0500 commented answer Naming log folder

That sounds like it will work. Seems a shame that there isn't a more elegant (in-built) solution, as it's quite an obvious …

2018-07-04 15:23:09 -0500 marked best answer Naming log folder

Hi everyone; is there any way to specify, in my Python program, the name of the folder to which logs for the respective nodes are saved?

As in, when I launch my experiment via roslaunch, I'll see something like the following text:

... logging to /home/user/catkin_ws/src/project/logs/51342e8a-7f6c-11e8-a056-5404a63cee0d/roslaunch-user-desktop-22693.log

My question is whether I can programmatically set the name of the folder 51342e8a-7f6c-11e8-a056-5404a63cee0d each time I run the experiment. Ideally the folder name would be the exact date/time the experiment was launched.
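For reference, one approach (not from the original thread, so treat it as an untested sketch) is to point the `ROS_LOG_DIR` environment variable at a timestamped folder before invoking `roslaunch`; ROS honours that variable when choosing where to write logs. The helper names and the base path below are illustrative:

```python
import datetime
import os
import subprocess


def timestamped_log_dir(base="~/catkin_ws/src/project/logs", now=None):
    """Build a log-directory path named after the launch time."""
    now = now or datetime.datetime.now()
    stamp = now.strftime("%Y-%m-%d_%H-%M-%S")
    return os.path.expanduser(os.path.join(base, stamp))


def launch_with_timestamped_logs(launch_args):
    """Run roslaunch with ROS_LOG_DIR pointing at a per-run folder."""
    env = dict(os.environ)
    env["ROS_LOG_DIR"] = timestamped_log_dir()
    return subprocess.Popen(["roslaunch"] + launch_args, env=env)
```

With this, `launch_with_timestamped_logs(["project", "experiment.launch"])` would log under a folder like `.../logs/2018-07-04_14-58-01/` instead of a UUID.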

2018-07-04 14:58:01 -0500 received badge  Popular Question (source)
2018-07-04 05:10:33 -0500 edited question Naming log folder

Naming log folder name Hi everyone; Is there any way to specify in my python program the name of the folder to which log

2018-07-04 04:52:53 -0500 asked a question Naming log folder

Naming log folder name Hi everyone; Is there any way to specify in my python program the name of the folder to which log

2018-07-01 21:49:40 -0500 received badge  Taxonomist
2018-05-02 05:05:50 -0500 marked best answer Simulated 3-axis camera gimbal for UAV in gazebo

Hi there,

I'm currently trying to simulate a "perfect" (no friction) 3-axis camera gimbal attached on the bottom of a UAV. I'm using the hector quadrotor gazebo/ROS project to try and achieve this.

The ultimate goal is that the simulated image plane remains perpendicular to the world plane at all times.

In order to achieve this, I've attached a simulated camera to 3 orthogonal, continuous joints via links with no length (see the URDF/xacro below). However, the image feed produced is still as if the camera were permanently affixed to the UAV.

I'm fairly new to URDF/xacro/etc. so please excuse me if this is just a basic misunderstanding on my part!

Here's the gimbal URDF/xacro file I've created:

<?xml version="1.0"?>

<robot xmlns:xacro="http://www.ros.org/wiki/xacro">

  <xacro:property name="M_PI" value="3.1415926535897931" />

  <xacro:macro name="gimbal_camera" params="name parent *origin ros_topic cam_info_topic update_rate res_x res_y image_format hfov">

    <!-- Gimbal attached to UAV -->
    <joint name="gimbal_joint" type="fixed">
        <xacro:insert_block name="origin" />
        <parent link="${parent}" />
        <child link="gimbal_r_link" />
    </joint>

    <link name="gimbal_r_link">
      <inertial>
        <mass value="0" />
        <inertia ixx="0" ixy="0" ixz="0" iyy="0" iyz="0" izz="0" />
      </inertial>
    </link>

    <joint name="gimbal_r_joint" type="continuous">
        <parent link="gimbal_r_link" />
        <child link="gimbal_p_link" />
        <axis xyz="1 0 0" />
        <dynamics damping="0.0" friction="0.0" />
    </joint>

    <link name="gimbal_p_link">
      <inertial>
        <mass value="0" />
        <inertia ixx="0" ixy="0" ixz="0" iyy="0" iyz="0" izz="0" />
      </inertial>
    </link>

    <joint name="gimbal_p_joint" type="continuous">
        <parent link="gimbal_p_link" />
        <child link="gimbal_y_link" />
        <axis xyz="0 1 0" />
        <dynamics damping="0.0" friction="0.0" />
    </joint>

    <link name="gimbal_y_link">
      <inertial>
        <mass value="0" />
        <inertia ixx="0" ixy="0" ixz="0" iyy="0" iyz="0" izz="0" />
      </inertial>
    </link>

    <joint name="gimbal_y_joint" type="continuous">
        <parent link="gimbal_y_link" />
        <child link="${name}_link" />
        <axis xyz="0 0 1" />
        <dynamics damping="0.0" friction="0.0" />
    </joint>

    <link name="${name}_link">
    </link>

    <joint name="${name}_optical_joint" type="fixed">
      <origin xyz="0 0 0" rpy="${-M_PI/2} 0.0 ${-M_PI/2}" />
      <parent link="${name}_link" />
      <child link="${name}_optical_frame"/>
    </joint>

    <link name="${name}_optical_frame">
    </link>

    <gazebo reference="gimbal_r_link">
      <sensor type="camera" name="${name}_camera_sensor">
        <update_rate>${update_rate}</update_rate>
        <camera>
          <horizontal_fov>${hfov * M_PI/180.0}</horizontal_fov>
          <image>
            <format>${image_format}</format>
            <width>${res_x}</width>
            <height>${res_y}</height>
          </image>
          <clip>
            <near>0.01</near>
            <far>100</far>
          </clip>
        </camera>

        <plugin name="${name}_camera_controller" filename="libgazebo_ros_camera.so">
          <cameraName>${name}</cameraName>
          <imageTopicName>${ros_topic}</imageTopicName>
          <cameraInfoTopicName>${cam_info_topic}</cameraInfoTopicName>
          <frameName>${name}_optical_frame</frameName>
        </plugin>
      </sensor>
    </gazebo>
  </xacro:macro>

</robot>

Which is instantiated in the following file:

<?xml version="1.0"?>

<robot name="quadrotor" xmlns:xacro="http://www.ros.org/wiki/xacro">

    <xacro:property name="M_PI" value="3.1415926535897931" />

    <!-- Included URDF Files -->
    <xacro:include filename="$(find hector_quadrotor_description)/urdf/quadrotor_base.urdf.xacro" />
    <xacro:include filename="$(find hector_sensors_description)/urdf/gimbal_camera.urdf.xacro" />

    <!-- Instantiate quadrotor_base_macro once (has no parameters atm) -->
    <xacro:quadrotor_base_macro />

    <!-- Downward facing camera attached to gimbal -->
    <xacro:gimbal_camera name="downward_cam" parent="base_link" ros_topic="camera/image" cam_info_topic="camera/camera_info ...
2018-04-11 10:59:03 -0500 received badge  Famous Question (source)
2018-04-11 10:59:03 -0500 received badge  Notable Question (source)
2017-09-04 03:48:17 -0500 received badge  Popular Question (source)
2017-09-01 08:43:55 -0500 asked a question Simulated 3-axis camera gimbal for UAV in gazebo

Simulated 3-axis camera gimbal for UAV in gazebo Hi there, I'm currently trying to simulate a "perfect" (no friction)

2016-09-19 16:54:20 -0500 received badge  Famous Question (source)
2016-08-08 04:45:28 -0500 received badge  Notable Question (source)
2016-08-04 05:27:15 -0500 answered a question py-faster-rcnn network detection runs on CPU through ROS callback function

Solved the issue by having the callback function change a global variable, which is checked by the main loop/thread as follows:

while not rospy.is_shutdown():
    if run_test_img == True:
        testImg()
    rate.sleep()

A bit hacky but it does the job.

2016-08-04 05:23:53 -0500 commented question Callback with GPU processing

You can ignore my comment/question; I completely forgot that you can use a `while not rospy.is_shutdown():` loop. Thanks for your help!

2016-08-04 04:21:13 -0500 commented question Callback with GPU processing

Are you achieving this in Python? I'd like to implement the same but I'm not sure how to check a global variable in the main thread once rospy.spin() is invoked. Plus there's no rospy equivalent of spinOnce(). I may end up having to write a C++ wrapper which I'd ideally like to avoid.

2016-08-04 04:13:25 -0500 received badge  Popular Question (source)
2016-08-03 09:56:38 -0500 commented question Callback with GPU processing

Hello, did you ever find a solution to this problem? I'm experiencing the same issue.

2016-08-03 06:48:09 -0500 received badge  Editor (source)
2016-08-03 06:45:58 -0500 asked a question py-faster-rcnn network detection runs on CPU through ROS callback function

Hi there,

I'm trying to test my trained py-faster-rcnn network for object detection through ROS. I have a node running with many similarities to demo.py. Included in the class are commands to execute the network detection on the GPU (given below), however, looking at recall times (~26s) and the system profiler it's fairly clear that the network is running on the CPU. Is there a way to get around this?

caffe.set_mode_gpu()
caffe.set_device(0)
cfg.GPU_ID = 0

Interestingly, if I run demo.py normally (not through ROS), it executes on the GPU in ~2s.

UPDATE: after some more detective work, it appears to be an issue with calling CNN detection within a callback function. Any suggestions? (Other people seem to be experiencing a similar problem: Callback with GPU processing.)
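One plausible explanation (an assumption on the editor's part, not confirmed in the thread) is that Caffe stores its compute mode per thread, while rospy runs subscriber callbacks on their own threads; setting GPU mode in the main thread would then have no effect inside the callback, and the fix would be to call `caffe.set_mode_gpu()` at the top of the callback itself. The thread-local behaviour can be illustrated without Caffe:

```python
import threading

# Per-thread state, analogous to how a library might store its compute
# mode (the Caffe specifics here are an assumption)
mode = threading.local()
mode.value = "gpu"  # set in the main thread, like caffe.set_mode_gpu()

seen = {}


def callback():
    # In another thread, the main thread's setting is not visible;
    # fall back to a default, as a library would
    seen["mode"] = getattr(mode, "value", "cpu (default)")


t = threading.Thread(target=callback)
t.start()
t.join()
# The callback thread saw the default, not the main thread's "gpu"
```

If this is the cause, repeating the `set_mode_gpu()`/`set_device()` calls inside the callback thread should restore GPU execution.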

Thanks, Will

2015-07-29 02:57:36 -0500 received badge  Famous Question (source)
2015-07-07 03:24:02 -0500 received badge  Enthusiast
2015-07-06 09:23:28 -0500 commented answer CvBridge conversion problem: Asus Xtion depth image to OpenCV

Works really well, thanks!

2015-07-06 03:31:12 -0500 received badge  Scholar (source)
2015-07-06 03:30:12 -0500 commented answer CvBridge conversion problem: Asus Xtion depth image to OpenCV

"encoding: 32FC1" for the Xtion

2015-07-04 13:34:44 -0500 received badge  Notable Question (source)
2015-07-03 05:21:47 -0500 received badge  Popular Question (source)
2015-07-03 03:43:15 -0500 commented answer CvBridge conversion problem: Asus Xtion depth image to OpenCV

Thanks, but neither of your suggestions worked (32FC1 resulted in the same black-and-white image; 16UC1 was just completely black).

2015-07-02 07:52:16 -0500 commented answer Format of depth value openni2_camera

Thanks! This fixed the issues I've been having.

2015-07-02 06:31:51 -0500 asked a question CvBridge conversion problem: Asus Xtion depth image to OpenCV

Hi,

I have an Asus Xtion sensor driven by the Openni2_launch package. I subscribe to the "/camera/depth/image" topic and, in a callback function, attempt to convert the ImageConstPtr& data type to an OpenCV Mat utilising the cv_bridge package. Visualising the "/camera/depth/image" ROS topic in RQT, I get a high-quality depth image as expected:

[image]

However, I can't seem to find an image type encoding (e.g. "8UC1") that OpenCV/cv_bridge accepts and that results in an image similar to the one above. As an example, with the encoding "32FC1", this is the kind of quality I get. I'm not entirely sure what I'm doing wrong!

[image]

And here's the source code:

// Callback function for time-synchronised RGB and depth images
void sync_callback(const sensor_msgs::ImageConstPtr& rgb, const sensor_msgs::ImageConstPtr& depth)
{
    // Convert ROS images to OpenCV
    cv::Mat rgb_image;
    cv::Mat depth_image;

    try
    {
        rgb_image = cv_bridge::toCvShare(rgb, "mono8")->image;
        depth_image = cv_bridge::toCvShare(depth, "32FC1")->image;
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Could not convert from ROS images to OpenCV type: %s", e.what());
    }
}

Thanks in advance!
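For anyone hitting the same thing: a 32FC1 depth image holds raw depths in metres (often with NaNs for missing readings), so handing it straight to an 8-bit display looks washed out or black. A hedged NumPy sketch, not from the thread, that rescales it for visualisation (the 5 m clipping range is an assumption to tune per sensor):

```python
import numpy as np


def depth_to_display(depth, max_depth=5.0):
    """Rescale a 32FC1 depth image (metres) to an 8-bit grayscale image.

    NaNs (no reading) are mapped to 0; max_depth is an assumed clipping
    range for the sensor.
    """
    d = depth.astype(np.float32)
    d = np.where(np.isnan(d), 0.0, d)        # drop invalid readings
    d = np.clip(d, 0.0, max_depth)           # limit to the displayable range
    return (d / max_depth * 255.0).astype(np.uint8)
```

The resulting array can then be shown with e.g. `cv2.imshow` like any mono8 image.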

2015-06-24 07:00:50 -0500 received badge  Supporter (source)