ROS2 Realsense node stops publishing image data after bringing up other nodes

asked 2022-10-13 05:55:46 -0500 by chivas1000

updated 2022-10-14 04:24:30 -0500 by ravijoshi

Hi, I'm using ROS 2 and a RealSense D435i to implement a navigation application. I modified this example, which brings up the realsense node, nvblox, and VSLAM. It runs well when all nodes are on the same machine. I then split them up so that the realsense node runs only on the Jetson board on my robot, while nvblox and VSLAM run on the edge server. The robot and the edge server are connected with an Ethernet cable. Please note that I have verified that the two machines can ping each other and that the ROS 2 talker/listener examples work between them.

Problem: When I start the realsense node on the Jetson side, I can see the topics and image data on the edge server. But once I bring up the nvblox and VSLAM nodes, the edge side no longer sees the images, although the topic names are still listed, and the Jetson side can still see the image data.

Is there any mistake in my launch files, or is something missing in the network setup?
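
For reference, this is the minimal check I run on the edge server to see whether the images arrive at all, independent of nvblox and VSLAM (just a sketch: the topic name is an example, and the sensor-data QoS profile is meant to match the SENSOR_DATA settings in my realsense yaml further down):

# check_image_rate.py: minimal reception check on the edge server
import time

import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import Image


class ImageRateChecker(Node):
    def __init__(self):
        super().__init__("image_rate_checker")
        self.count = 0
        self.start = time.time()
        # Best-effort sensor-data QoS, matching the driver's SENSOR_DATA profile
        self.create_subscription(
            Image, "/camera/infra1/image_rect_raw", self.callback, qos_profile_sensor_data
        )

    def callback(self, msg):
        self.count += 1
        elapsed = time.time() - self.start
        if self.count % 25 == 0:
            self.get_logger().info(f"{self.count} images, {self.count / elapsed:.1f} Hz")


def main():
    rclpy.init()
    rclpy.spin(ImageRateChecker())


if __name__ == "__main__":
    main()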

Below are my launch files.

The nvr_debug_jetson.launch.py file:

# license removed for brevity

import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node
from launch_ros.descriptions import ComposableNode
from launch_ros.actions import ComposableNodeContainer


def generate_launch_description():
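    # Parameter file for the RealSense driver (stream profiles and QoS settings)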
    realsense_config_file_path = os.path.join(
        get_package_share_directory("nvblox_examples_bringup"),
        "config",
        "realsense.yaml",
    )

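    # RealSense D435i driver, loaded as a component into the shared container below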
    realsense_node = ComposableNode(
        namespace="camera",
        package="realsense2_camera",
        plugin="realsense2_camera::RealSenseNodeFactory",
        parameters=[realsense_config_file_path],
    )

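    # Splitter from the nvblox examples; consumes the raw image and metadata topics remapped below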
    realsense_splitter_node = ComposableNode(
        namespace="camera",
        name="realsense_splitter_node",
        package="realsense_splitter",
        plugin="nvblox::RealsenseSplitterNode",
        parameters=[{"input_qos": "SENSOR_DATA", "output_qos": "SENSOR_DATA"}],
        remappings=[
            ("input/infra_1", "/camera/infra1/image_rect_raw"),
            ("input/infra_1_metadata", "/camera/infra1/metadata"),
            ("input/infra_2", "/camera/infra2/image_rect_raw"),
            ("input/infra_2_metadata", "/camera/infra2/metadata"),
            ("input/depth", "/camera/depth/image_rect_raw"),
            ("input/depth_metadata", "/camera/depth/metadata"),
            ("input/pointcloud", "/camera/depth/color/points"),
            ("input/pointcloud_metadata", "/camera/depth/metadata"),
        ],
    )

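    # One component container running both the driver and the splitter in the same process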
    realsense_container = ComposableNodeContainer(
        name="realsense_container",
        namespace="",
        package="rclcpp_components",
        executable="component_container",
        composable_node_descriptions=[realsense_node, realsense_splitter_node],
        output="screen",
    )

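    # Static base_link -> camera_link transform (x y z qx qy qz qw)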
    base_link_tf_node = Node(
        package="tf2_ros",
        executable="static_transform_publisher",
        arguments=["0.16", "0", "0.11", "0", "0", "0", "1", "base_link", "camera_link"],
    )

    return LaunchDescription([realsense_container, base_link_tf_node])

The nvr_debug_edge.launch.py file:

# license removed for brevity

import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
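    # Isaac ROS VSLAM, fed by the infra1/infra2 streams published from the Jetson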
    visual_slam_node = Node(
        name="visual_slam_node",
        package="isaac_ros_visual_slam",
        executable="isaac_ros_visual_slam",
        parameters=[
            {
                "enable_rectified_pose": True,
                "denoise_input_images": False,
                "rectified_images": True,
                "enable_debug_mode": False,
                "debug_dump_path": "/tmp/vslam",
                "enable_slam_visualization": True,
                "enable_landmarks_view": True,
                "enable_observations_view": True,
                "map_frame": "map",
                "odom_frame": "odom",
                "base_frame": "base_link",
                "input_left_camera_frame": "camera_infra1_frame",
                "input_right_camera_frame": "camera_infra2_frame",
                "enable_localization_n_mapping": True,
                "publish_odom_to_base_tf": True,
                "publish_map_to_odom_tf": True,
            }
        ],
        remappings=[
            ("stereo_camera/left/image", "/camera/infra1/image_rect_raw"),
            ("stereo_camera/left/camera_info", "/camera/infra1/camera_info"),
            ("stereo_camera/right/image", "/camera/infra2/image_rect_raw"),
            ("stereo_camera/right/camera_info", "/camera/infra2/camera_info"),
        ],
    )

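    # nvblox parameter file; can be overridden via the nvblox_config launch argument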
    nvblox_config = DeclareLaunchArgument(
        "nvblox_config",
        default_value=os.path.join(
            get_package_share_directory("nvblox_examples_bringup"),
            "config",
            "nvblox.yaml",
        ),
    )

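    # nvblox consumes the splitter's depth output (published on the Jetson) plus the color stream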
    nvblox_node = Node(
        package="nvblox_ros",
        executable="nvblox_node",
        parameters=[LaunchConfiguration("nvblox_config")],
        output="screen",
        remappings=[
            ("depth/camera_info", "/camera/depth/camera_info"),
            ("depth/image", "/camera/realsense_splitter_node/output/depth"),
            ("color/camera_info", "/camera/color/camera_info"),
            ("color/image", "/camera/color/image_raw"),
        ],
    )

    rviz_config_path = os.path.join(
        get_package_share_directory("nvblox_examples_bringup"),
        "config",
        "nvblox_vslam_realsense.rviz",
    )

    print(rviz_config_path)

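    # RViz preloaded with the example nvblox + VSLAM visualization config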
    rviz = Node(
        package="rviz2",
        executable="rviz2",
        arguments=["-d", rviz_config_path],
        output="screen",
    )

    return LaunchDescription([nvblox_config, visual_slam_node, nvblox_node, rviz])

Below is the ... (more)


Comments

The network seems to get stuck after I bring up the VSLAM and nvblox nodes on the server. After I launch the realsense node on the Jetson, the server receives the topics and data.

But after I launch VSLAM and nvblox, the camera data is no longer available on my server. I also ran ros2 run demo_nodes_cpp talker on the Jetson and ros2 run demo_nodes_py listener on the server, and the server does not receive those messages either.

I assume that launching the server-side nodes blocks the network somehow, so the two machines can no longer see each other.

chivas1000 (2022-10-13 06:29:14 -0500)

Note that the nodes on both the Jetson and the server are all running inside containers, so could this be a NAT issue?

chivas1000 (2022-10-13 07:30:40 -0500)

Updates:

When I start only the realsense node in the Jetson container and ros2 topic echo the depth topic on the server, it shows only the first frame and then stops; the other image topics behave the same. When I ros2 topic echo on the Jetson itself, the depth images keep coming.

Another hint: my system on the Jetson is installed on an SD card, and the realsense node prints error logs there. Could SD card I/O be bottlenecking the transfer buffers, so the QoS is violated and the connection is dropped?

While my Jetson receives messages at about 25 fps, the server only receives 3 messages and then stops.

The same happens with the infrared images: when I echo only infra1 on the server, it receives data and the bandwidth is about 9.22 MB/s from 100 messages (message size mean: 0.31 MB, min: 0.31 MB, max: 0.31 MB),

but when I echo another ... (more)

chivas1000 (2022-10-14 04:40:28 -0500)

I think the problem might be similar to this:

https://support.intelrealsense.com/hc...

and

https://answers.ros.org/question/3893...

So is this a hardware issue, and can it be fixed by moving the system from the SD card to an SSD and reducing the depth frame rate? Or should I just use an Intel NUC to run the realsense node and transfer the image topics, since it handles the RealSense camera better?

I'm waiting for the SSD and will install the system on it to see if that fixes the problem.

==========================================================

Here is my realsense yaml file:

device_type: ''
serial_no: ''
usb_port_id: ''

rgb_camera:
  profile: '640x480x15'
  color_qos: "SENSOR_DATA"

depth_module:
  profile: '640x480x15'
  emitter_enabled: 1
  emitter_on_off: true
  depth_qos: "SENSOR_DATA"
  depth_info_qos: "SYSTEM_DEFAULT"

infr

chivas1000 (2022-10-14 04:43:20 -0500)

Sorry about my first comment: the talker and listener examples do work. It may just be that I had four image topics transferring at the time, which dropped the connection and took the talker/listener examples down with them. Anyway, I suppose the network setup is fine now and the issue is more likely related to the realsense node.

chivas1000 (2022-10-14 05:08:25 -0500)