
logan.ydid's profile - activity

2021-02-18 22:45:54 -0500 received badge  Famous Question (source)
2020-07-31 05:06:59 -0500 received badge  Notable Question (source)
2020-06-25 11:47:13 -0500 received badge  Taxonomist
2020-06-17 16:53:03 -0500 received badge  Famous Question (source)
2020-05-29 17:55:28 -0500 received badge  Famous Question (source)
2020-05-22 06:01:57 -0500 marked best answer Adding a Kinect to my Gazebo simulation

Hello All,

I have a URDF model working in Gazebo that I copied from the book Programming Robots with ROS. I would like to add a Kinect sensor to this model so I can simulate my robot, but I can't for the life of me figure out how to do this. Most resources I found are like this example, which says I should use the model found in the turtlebot_description folder, but I am unsure which files I need and how exactly to import them into my URDF. I found this tutorial, which edits the Kinect model found in the Gazebo models folder, and this worked perfectly, so I know it's possible. I think what I need is for someone to really spell out what I need to add to my URDF and my launch file to add a Kinect.

Here is my URDF. I would like to replace the hokuyo_link with a Kinect:

<?xml version="1.0"?>
<robot name="tortoisebot">
    <link name="base_link">
        <visual>
            <geometry>
                <box size="0.6 0.3 0.3"/>
            </geometry>
            <material name="silver">
                <color rgba="0.75 0.75 0.75 1"/>
            </material>
        </visual>
        <collision>
            <geometry>
                <box size="0.6 0.3 0.3"/>
            </geometry>
        </collision>
        <inertial>
            <mass value="1.0"/>
            <inertia ixx="0.015" iyy="0.0375" izz="0.0375" ixy="0" ixz="0" iyz="0"/>
        </inertial>
    </link>

    <link name="front_caster">
        <visual>
            <geometry>
                <box size="0.1 0.1 0.3"/>
            </geometry>
            <material name="silver"/>
        </visual>
        <collision>
            <geometry>
                <box size="0.1 0.1 0.3"/>
            </geometry>
        </collision>
        <inertial>
            <mass value="0.1"/>
            <inertia ixx="0.00083" iyy="0.00083" izz="0.000167" ixy="0" ixz="0" iyz="0"/>
        </inertial>
    </link>
    <joint name="front_caster_joint" type="continuous">
        <axis xyz="0 0 1"/>
        <parent link="base_link"/>
        <child link="front_caster" />
        <origin rpy="0 0 0" xyz="0.3 0 0" />
    </joint>

    <link name="front_wheel">
        <visual>
            <geometry>
                <cylinder length="0.05" radius="0.035"/>
            </geometry>
            <material name="black" />
        </visual>
        <collision>
            <geometry>
                <cylinder length="0.05" radius="0.035"/>
            </geometry>
        </collision>
        <inertial>
            <mass value="0.1"/>
            <inertia ixx="5.1458e-5" iyy="5.1458e-5" izz="6.125e-5" ixy="0" ixz="0" iyz="0"/>
        </inertial>
    </link>


    <joint name="front_wheel_joint" type="continuous">
        <axis xyz="0 0 1"/>
        <parent link="front_caster"/>
        <child link="front_wheel"/>
        <origin rpy="-1.5708 0 0" xyz="0.05 0 -0.15"/>
    </joint>

    <link name="right_wheel">
        <visual>
            <geometry>
                <cylinder length="0.05" radius="0.035"/>
            </geometry>
            <material name="black" />
        </visual>
        <collision>
            <geometry>
                <cylinder length="0.05" radius="0.035"/>
            </geometry>
        </collision>
        <inertial>
            <mass value="0.1"/>
            <inertia ixx="5.1458e-5" iyy="5.1458e-5" izz="6.125e-5" ixy="0" ixz="0" iyz ...
(more)
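For readers landing here, a minimal sketch of what such an addition can look like, assuming the classic gazebo_ros depth-camera plugin (libgazebo_ros_openni_kinect.so). The link name, joint origin, box dimensions, and topic names below are illustrative, not from the thread:

```xml
<!-- Hypothetical sketch: a fixed camera link in place of hokuyo_link, plus the
     gazebo_ros openni_kinect plugin. Names, sizes, and topics are illustrative. -->
<link name="camera_link">
    <visual>
        <geometry><box size="0.07 0.28 0.07"/></geometry>
    </visual>
    <collision>
        <geometry><box size="0.07 0.28 0.07"/></geometry>
    </collision>
    <inertial>
        <mass value="0.1"/>
        <inertia ixx="1e-4" iyy="1e-4" izz="1e-4" ixy="0" ixz="0" iyz="0"/>
    </inertial>
</link>

<joint name="camera_joint" type="fixed">
    <parent link="base_link"/>
    <child link="camera_link"/>
    <origin xyz="0.3 0 0.2" rpy="0 0 0"/>
</joint>

<gazebo reference="camera_link">
    <sensor type="depth" name="kinect">
        <update_rate>20</update_rate>
        <camera>
            <horizontal_fov>1.047</horizontal_fov>
            <image><width>640</width><height>480</height><format>R8G8B8</format></image>
            <clip><near>0.05</near><far>8.0</far></clip>
        </camera>
        <plugin name="kinect_controller" filename="libgazebo_ros_openni_kinect.so">
            <cameraName>camera</cameraName>
            <imageTopicName>/camera/color/image_raw</imageTopicName>
            <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
            <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
            <frameName>camera_depth_optical_frame</frameName>
        </plugin>
    </sensor>
</gazebo>
```

With this in the URDF, Gazebo spawns the sensor along with the robot, so no extra launch-file changes are strictly required beyond the usual spawn_model call.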
2020-04-18 04:11:51 -0500 received badge  Famous Question (source)
2020-04-08 05:37:41 -0500 received badge  Popular Question (source)
2020-04-07 01:24:19 -0500 received badge  Notable Question (source)
2019-12-10 03:50:26 -0500 marked best answer point cloud in wrong position relative to robot

Hello fellow roboticists,

I have been trying to get a Kinect sensor working on my URDF model in Gazebo. I have successfully attached the sensor, thanks to my previous question here. My only issue is that the data is shown in the wrong place in RViz: the point cloud that should appear in front of the robot is displayed above it and rotated about the z axis. How should I go about fixing this?

Here is an image of what is happening (left is in Gazebo and the right is in Rviz): image description

and here is the URDF model I am using:

<?xml version="1.0"?>
<robot name="tortoisebot">

  <link name="base_link">
    <visual>
      <geometry>
        <box size="0.6 0.3 0.3"/>
      </geometry>
      <material name="silver">
        <color rgba="0.75 0.75 0.75 1"/>
      </material>
    </visual>
    <collision>
      <geometry>
        <box size="0.6 0.3 0.3"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="1.0"/>
      <inertia ixx="0.015" iyy="0.0375" izz="0.0375" ixy="0" ixz="0" iyz="0"/>
    </inertial>
  </link>

  <link name="front_caster">
    <visual>
      <geometry>
        <box size="0.1 0.1 0.3"/>
      </geometry>
      <material name="silver"/>
    </visual>
    <collision>
      <geometry>
        <box size="0.1 0.1 0.3"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="0.1"/>
      <inertia ixx="0.00083" iyy="0.00083" izz="0.000167" ixy="0" ixz="0" iyz="0"/>
    </inertial>
  </link>

  <joint name="front_caster_joint" type="continuous">
    <axis xyz="0 0 1"/>
    <parent link="base_link"/>
    <child link="front_caster"/>
    <origin rpy="0 0 0" xyz="0.3 0 0"/>
  </joint>

  <link name="front_wheel">
    <visual>
      <geometry>
        <cylinder length="0.05" radius="0.035"/>
      </geometry>
      <material name="black"/>
    </visual>
    <collision>
      <geometry>
        <cylinder length="0.05" radius="0.035"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="0.1"/>
      <inertia ixx="5.1458e-5" iyy="5.1458e-5" izz="6.125e-5" ixy="0" ixz="0" iyz="0"/>
    </inertial>
  </link>

  <joint name="front_wheel_joint" type="continuous">
    <axis xyz="0 0 1"/>
    <parent link="front_caster"/>
    <child link="front_wheel"/>
    <origin rpy="-1.5708 0 0" xyz="0.05 0 -.15"/>
  </joint>

  <link name="right_wheel">
    <visual>
      <geometry>
        <cylinder length="0.05" radius="0.035"/>
      </geometry>
      <material name="black">
        <color rgba="0 0 0 1"/>
      </material>
    </visual>
    <collision>
      <geometry>
        <cylinder length="0.05" radius="0.035"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="0.1"/>
      <inertia ixx="5.1458e-5" iyy="5.1458e-5" izz="6.125e-5" ixy="0" ixz="0" iyz="0"/>
    </inertial>
  </link>

  <joint name="right_wheel_joint" type="continuous">
    <axis xyz="0 0 1"/>
    <parent link="base_link"/>
    <child link="right_wheel"/>
    <origin rpy="-1.5708 0 0" xyz="-0.2825 -0.125 -.15"/>
  </joint>

  <link name="left_wheel ...
(more)
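For readers hitting the same symptom: this is usually the ROS vs. optical frame-convention mismatch. Point clouds are published in an "optical" frame (z forward, x right, y down), while URDF links use the body convention (x forward, z up). A common fix, sketched here with illustrative names (camera_link and the joint name are assumptions, not from the thread), is to add a rotated child frame and publish the cloud in it:

```xml
<!-- Hypothetical sketch: add an optical frame rotated from the body convention,
     then set the Gazebo plugin's frameName to camera_depth_optical_frame. -->
<joint name="camera_depth_optical_joint" type="fixed">
    <parent link="camera_link"/>
    <child link="camera_depth_optical_frame"/>
    <!-- -90° roll and -90° yaw map x-forward/z-up onto z-forward/x-right/y-down -->
    <origin xyz="0 0 0" rpy="-1.5708 0 -1.5708"/>
</joint>
<link name="camera_depth_optical_frame"/>
```

If the cloud appears above the robot and spun about z, it is typically being interpreted in the body frame instead of an optical frame like this one.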
2019-11-14 04:50:46 -0500 received badge  Famous Question (source)
2019-11-14 04:50:20 -0500 received badge  Notable Question (source)
2019-11-13 13:33:36 -0500 edited question laser scan not being used for costmap

laser scan not being used for costmap Hi All, I have a robot with a kinect sensor and a traditional lidar. The lidar is

2019-11-13 13:22:08 -0500 commented answer laser scan not being used for costmap

Thank you so much for your reply! I implemented your suggestions but I'm still not seeing the laser scan in my local cost

2019-11-11 10:12:41 -0500 received badge  Popular Question (source)
2019-11-08 17:53:48 -0500 received badge  Notable Question (source)
2019-11-08 17:53:48 -0500 received badge  Famous Question (source)
2019-11-08 10:31:37 -0500 received badge  Popular Question (source)
2019-11-08 01:13:58 -0500 edited question laser scan not being used for costmap

laser scan not being used for costmap Hi All, I have a robot with a kinect sensor and a traditional lidar. The lidar is

2019-11-08 01:13:38 -0500 asked a question laser scan not being used for costmap

laser scan not being used for costmap Hi All, I have a robot with a kinect sensor and a traditional lidar. The lidar is

2019-11-02 08:10:46 -0500 received badge  Notable Question (source)
2019-10-28 12:06:15 -0500 marked best answer Getting Transform Exception after implementing voxel layer

Hello friends,

I recently made the switch from a 2D costmap to a 2.5D voxel-layer costmap based on suggestions from one of my previous posts. I decided to go with the spatio_temporal_voxel_layer plugin because it promised improvements over the standard voxel_grid plugin and it seemed to have a lot of documentation. After implementing it and some troubleshooting, I was able to get everything working and can see the voxel grid in RViz. The issue I am having is that I am continually getting the following errors:

[ERROR] [1571884523.045139138]: TF Exception for sensor frame: , cloud frame: camera_depth_optical_frame, Lookup would require extrapolation into the past.  Requested time 1571884512.575208000 but the earliest data is at time 1571884513.123332977, when looking up transform from frame [camera_depth_optical_frame] to frame [map]

[ERROR] [1571884524.605706427]: TF Exception for sensor frame: , cloud frame: camera_depth_optical_frame, Lookup would require extrapolation into the past.  Requested time 1571884513.871130000 but the earliest data is at time 1571884514.679229172, when looking up transform from frame [camera_depth_optical_frame] to frame [odom]

I think this is affecting the rest of the robot's behavior, because when I place a navigation waypoint it takes a minute or two to do anything, if it responds at all.

Results from rqt_tf_tree: image description

costmap_common_params:

footprint: [[0.5,0.35],[0.5,-0.35],[-0.5,-0.35],[-0.5,0.35]]
raytrace_range: 3.0
obstacle_range: 2.5
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: camera_link, data_type: LaserScan, topic: scan, marking: true, clearing: true}

rgbd_obstacle_layer:
  enabled:               true
  voxel_decay:           30     #seconds if linear, e^n if exponential
  decay_model:           0      #0=linear, 1=exponential, -1=persistent
  voxel_size:            0.25   #meters
  track_unknown_space:   true   #default space is unknown
  observation_persistence: 0.0  #seconds
  max_obstacle_height:   2.0    #meters
  unknown_threshold:     15     #voxel height
  mark_threshold:        0      #voxel height
  update_footprint_enabled: true
  combination_method:    1      #1=max, 0=override
  obstacle_range:        6.0    #meters
  origin_z:              0.0    #meters
  publish_voxel_map:     true   # default off
  transform_tolerance:   0.2    # seconds
  mapping_mode:          false  # default off, saves map not for navigation
  map_save_duration:     60     #default 60s, how often to autosave
  observation_sources:   rgbd1_clear rgbd1_mark
  rgbd1_mark:
    data_type: PointCloud2
    topic: camera/depth/points
    marking: true
    clearing: false
    min_obstacle_height: 0.0     #default 0, meters
    max_obstacle_height: 1.0     #default 3, meters
    expected_update_rate: 0.0    #default 0, if not updating at this rate at least, remove from buffer
    observation_persistence: 0.0 #default 0, use all measurements taken during now-value, 0=latest 
    inf_is_valid: false          #default false, for laser scans
    clear_after_reading: true    #default false, clear the buffer after the layer gets readings from it
    voxel_filter: true           #default off, apply voxel filter to sensor, recommend on 
  rgbd1_clear:
    enabled: true                #default true, can be toggled on/off with associated service call
    data_type: PointCloud2
    topic: camera/depth/points
    marking: false
    clearing: true
    min_z: 0.1                   #default 0, meters
    max_z: 7.0                   #default 10, meters
    vertical_fov_angle: 0.7      #default 0.7, radians
    horizontal_fov_angle: 1.04   #default 1.04, radians
    decay_acceleration: 1.       #default 0, 1/s^2. If laser scanner MUST be 0
    model_type: 0                #default 0 (depth camera). Use 1 for 3D Lidar

global_params:

global_costmap:
  global_frame: map
  robot_base_frame: base_link ...
(more)
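For readers with the same "extrapolation into the past" errors: one common mitigation is to loosen the costmap transform_tolerance so TF lookups tolerate sensor data that slightly lags the transform buffer. The 0.5 s value below is illustrative, not from the thread; tune it to your system's actual TF latency (and note that very high CPU load can make no tolerance large enough):

```yaml
# Hypothetical sketch: raise transform_tolerance (seconds) in the costmap params.
# The post used 0.2 for the voxel layer; 0.5 is an illustrative starting point.
rgbd_obstacle_layer:
  transform_tolerance: 0.5

global_costmap:
  transform_tolerance: 0.5
local_costmap:
  transform_tolerance: 0.5
```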
2019-10-24 20:46:22 -0500 commented answer Getting Transform Exception after implementing voxel layer

I'm afraid you might be right about the CPU usage (see the image added to the post). I had a sneaking suspicion considerin

2019-10-24 20:40:25 -0500 edited question Getting Transform Exception after implementing voxel layer

Getting Transform Exception after implementing voxel layer Hello friends, I recently made the switch from a 2d costmap

2019-10-23 22:13:24 -0500 commented answer local costmap remember past obstacles

Thank you! I ended up getting the spatio_temporal_voxel_layer to work but I'm getting a TF Exception error now as a result

2019-10-23 22:07:13 -0500 marked best answer local costmap remember past obstacles

A bit of background about my setup: I have a robot with a lidar sensor about 4 feet off the ground and a Kinect sensor about 1.5 feet off the ground. The two sensors are directly above each other, with the Kinect pointing forward. This was done in an attempt to give the robot a full 360-degree view of the room while also giving it a view of lower objects like table legs. I am using depthimage_to_laserscan to get a laser scan out of the Kinect, then laser_merge to combine the scan from the lidar and the scan from the Kinect into a single laser scan. The result is a laser scan of the room that shows data from whichever sensor sees a closer object.

The Problem: Because the Kinect has a small field of view, a couple of degrees of rotation from the robot is enough for an object to go out of view. This results in the robot adding objects to the local costmap as it approaches, but then clearing them as it rotates to go around. Often it will then cut the corner really hard because it thinks there's nothing there, and it hits the object; at that point it is usually too close for the Kinect to detect anything. Is there a way to have some sort of buffer where the local costmap remembers for a short period that something was there even after it leaves view? And if not, do you have any suggestions for how I can prevent my robot from hitting things?

Thanks!

EDIT: Here are my config files

common params

footprint: [[0.5, 0.35], [0.5, -0.35], [-0.5, -0.35], [-0.5, 0.35]]
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: camera_link, data_type: LaserScan, topic: scan, marking: true, clearing: true}

local params

local_costmap:
  global_frame: /odom
  robot_base_frame: base_link
  rolling_window: true

global params

global_costmap:
  global_frame: /map
  robot_base_frame: base_link
  static_map: true
  transform_tolerance: 0.5

base local planner params

TrajectoryPlannerROS:
  holonomic_robot: false
  max_vel_x: 0.25
  max_vel_theta: 0.25
  min_in_place_vel_theta: 0.25

New common params after raytrace_range was added

footprint: [[0.75,0.35],[0.75,-0.35],[-0.5,-0.35],[-0.5,0.35]]
raytrace_range: 1.5

observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: camera_link, data_type: LaserScan, topic: scan, marking: true, clearing: true}
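For readers with the same problem: the standard costmap_2d observation-source option observation_persistence keeps marking observations in the buffer for a fixed window, so obstacles are not cleared the instant they leave the Kinect's narrow field of view. A sketch of the common-params change, with an illustrative 2.0 s value (not from the thread):

```yaml
# Hypothetical sketch: keep scans buffered for a couple of seconds so recently
# seen obstacles stay marked while the robot rotates past them.
observation_sources: laser_scan_sensor
laser_scan_sensor:
  sensor_frame: camera_link
  data_type: LaserScan
  topic: scan
  marking: true
  clearing: true
  observation_persistence: 2.0   # seconds of measurements kept in the buffer
```

Longer persistence trades responsiveness for memory of obstacles, so clearing of genuinely removed objects is also delayed by the same window.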
2019-10-23 22:05:58 -0500 asked a question Getting Transform Exception after implementing voxel layer

Getting Transform Exception after implementing voxel layer Hello friends, I recently made the switch from a 2d costmap

2019-10-22 07:16:59 -0500 received badge  Famous Question (source)
2019-10-21 23:17:38 -0500 commented answer local costmap remember past obstacles

I think I might need a bit more guidance. I tried including the VoxelCostmapPlugin - {name: voxel_layer, type: "costmap_

2019-10-20 20:27:43 -0500 received badge  Famous Question (source)
2019-10-16 00:14:47 -0500 edited question local costmap remember past obstacles

local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4

2019-10-16 00:14:27 -0500 edited question local costmap remember past obstacles

local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4

2019-10-16 00:12:22 -0500 commented question local costmap remember past obstacles

I have played a bit with raytrace_range and obstacle_range. I tried setting raytrace_range to a lower value so that it w

2019-10-15 22:19:08 -0500 edited question local costmap remember past obstacles

local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4

2019-10-15 11:35:18 -0500 received badge  Notable Question (source)
2019-10-15 09:28:26 -0500 received badge  Popular Question (source)
2019-10-14 21:50:16 -0500 commented question local costmap remember past obstacles

Yes, because the laser (which can see over many tables and chairs) cant see the obstacles it gets cleared from the local

2019-10-14 21:47:39 -0500 edited question local costmap remember past obstacles

local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4

2019-10-14 18:55:19 -0500 commented question Get local costmap to remember out of sight objects

My apologies, I'm not sure how that happened

2019-10-14 16:39:40 -0500 asked a question local costmap remember past obstacles

local costmap remember past obstacles A bit of background about my setup: I have a robot with a lidar sensor at about 4

2019-10-14 16:35:32 -0500 asked a question Get local costmap to remember out of sight objects

Get local costmap to remember out of sight objects A bit of background about my setup: I have a robot with a lidar senso

2019-09-30 22:31:48 -0500 commented question Use ira_laser_tools to fuse the rplidar A2 and Kinect laser scan,but the output of the merger is only kinect's data.

I'm having the exact same issue. Did you ever find a solution?