simulated stereo camera fails when remapping topics

asked 2017-06-07 03:42:10 -0500


Dear all, I'm trying to simulate the Multisense SL device in Gazebo. I've started by modifying the plugin available here: https://bitbucket.org/osrf/drcsim/src...

The Gazebo-related URDF is the following:

<?xml version="1.0"?>
<robot xmlns:xacro="http://ros.org/wiki/xacro">
    <xacro:macro name="multisense_sl_sensor" params="prefix robot_name">
        <gazebo reference="head_root">
            <mu1>0.9</mu1>
            <mu2>0.9</mu2>
        </gazebo>
        <gazebo reference="head">
            <mu1>0.9</mu1>
            <mu2>0.9</mu2>
        </gazebo>
        <gazebo reference="${prefix}/hokuyo_link">
            <mu1>0.9</mu1>
            <mu2>0.9</mu2>
        </gazebo>
        <gazebo reference="${prefix}/head_hokuyo_frame">
            <mu1>0.9</mu1>
            <mu2>0.9</mu2>
        </gazebo>
        <gazebo reference="${prefix}/motor_joint">
            <implicitSpringDamper>1</implicitSpringDamper>
        </gazebo>
        <gazebo reference="${prefix}/head_hokuyo_frame">
            <sensor type="gpu_ray" name="head_hokuyo_sensor">
                <pose>0 0 0 0 0 0</pose>
                <visualize>false</visualize>
                <update_rate>40</update_rate>
                <ray>
                    <scan>
                        <horizontal>
                            <samples>1081</samples>
                            <resolution>1</resolution>
                            <min_angle>-2.356194</min_angle>
                            <max_angle>2.356194</max_angle>
                        </horizontal>
                    </scan>
                    <range>
                        <min>0.10</min>
                        <max>30.0</max>
                        <resolution>0.01</resolution>
                    </range>
                    <noise>
                        <type>gaussian</type>
                        <!-- Noise parameters based on published spec for Hokuyo laser
                        achieving "+-30mm" accuracy at range < 10m.  A mean of 0.0m and
                        stddev of 0.01m will put 99.7% of samples within 0.03m of the true
                        reading. -->
                        <mean>0.0</mean>
                        <stddev>0.0</stddev>
                    </noise>
                </ray>
                <plugin name="${prefix}/head_hokuyo_controller" filename="libgazebo_ros_gpu_laser.so">
                    <topicName>${prefix}/lidar_scan</topicName>
                    <frameName>${prefix}/head_hokuyo_frame</frameName>
                </plugin>
            </sensor>
        </gazebo>

        <gazebo reference="${prefix}/left_camera_frame">
            <sensor type="multicamera" name="${prefix}/stereo_camera">
            <!-- see MultiSenseSLPlugin.h for available modes -->
            <update_rate>10.0</update_rate>
            <camera name="left">
                <!-- Spec sheet says 80deg X 45deg @ 1024x544pix.  Based on feedback
                from teams, we're instead doing 80deg X 80deg @ 800x800pix. -->
                <horizontal_fov>1.3962634</horizontal_fov>
                <image>
                    <width>1024</width>
                    <height>1024</height>
                    <format>R8G8B8</format>
                </image>
                <clip>
                    <near>0.02</near>
                    <far>300</far>
                </clip>
                <noise>
                    <type>gaussian</type>
                    <!-- Noise is sampled independently per pixel on each frame.  
                    That pixel's noise value is added to each of its color
                    channels, which at that point lie in the range [0,1].
                    The stddev value of 0.007 is based on experimental data 
                    from a camera in a Sandia hand pointed at a static scene
                    in a couple of different lighting conditions.  -->
                    <mean>0.0</mean>
                    <stddev>0.007</stddev>
                </noise>
            </camera>
            <camera name="right">
                <pose>0 -0.07 0 0 0 0</pose>
                <!-- Spec sheet says 80deg X 45deg @ 1024x544pix.  Based on feedback
                from teams, we're instead doing 80deg X 80deg @ 800x800pix. -->
                <horizontal_fov>1.3962634</horizontal_fov>
                <image>
                    <width>1024</width>
                    <height>1024</height>
                    <format>R8G8B8</format>
                </image>
                <clip>
                    <near>0.02</near>
                    <far>300</far>
                </clip>
                <noise>
                    <type>gaussian</type>
                    <!-- Noise is sampled independently per pixel on each frame.  
                    That pixel's noise value is added to each of its color
                    channels, which at that point lie in the range [0,1].
                    The stddev value of 0.007 is based on experimental data 
                    from a camera in a Sandia hand pointed at a static scene
                    in a couple of different ...
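The pasted snippet is cut off before the plugin element that actually bridges the multicamera sensor to ROS. For context only, a typical closing block for such a sensor uses `libgazebo_ros_multicamera.so` from gazebo_ros_pkgs; the specific names and values below are illustrative placeholders, not recovered from the truncated post:

```xml
<!-- Sketch of a typical multicamera-to-ROS bridge (names are assumptions).
     Published topics end up under <cameraName>/left and <cameraName>/right. -->
<plugin name="stereo_camera_controller" filename="libgazebo_ros_multicamera.so">
  <alwaysOn>true</alwaysOn>
  <updateRate>10.0</updateRate>
  <cameraName>multisense_sl/camera</cameraName>
  <imageTopicName>image_raw</imageTopicName>
  <cameraInfoTopicName>camera_info</cameraInfoTopicName>
  <frameName>left_camera_optical_frame</frameName>
  <!-- 0.07 matches the right camera's pose offset above -->
  <hackBaseline>0.07</hackBaseline>
</plugin>
```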

1 Answer


answered 2017-06-07 04:45:54 -0500


I've just realized that the stereo processing node produces not only the point clouds but also many other topics, including image_color.

Because of my remapping, the processing node was feeding its own output back to itself as input, which multiplied the apparent rate of the stereo pairs and made the stereo processing too slow to keep up.
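To avoid this kind of feedback loop, one option is to run `stereo_image_proc` inside the camera's namespace rather than remapping individual topics, so it subscribes to the raw Gazebo topics and never to one of its own outputs such as `image_color`. A minimal sketch (the namespace `multisense_sl/camera` is an assumption, not taken from the original post):

```xml
<!-- Sketch: launch stereo_image_proc in the camera namespace so it picks up
     left/image_raw, right/image_raw and the matching camera_info topics,
     and publishes its results (image_color, points2, ...) alongside them. -->
<launch>
  <group ns="multisense_sl/camera">
    <node pkg="stereo_image_proc" type="stereo_image_proc" name="stereo_image_proc" />
  </group>
</launch>
```

Resolving the inputs by namespace instead of per-topic remaps makes it much harder to accidentally wire a processed output back into the node's own input.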

