Robotics StackExchange | Archived questions

RViz camera view, pointcloud projection

I'm working on a project with a VLP-16 lidar and an RGB camera, and I've been trying to project the pointclouds acquired from the lidar onto the images from the RGB camera. After doing this somewhat successfully, I realized that RViz already does this in its "camera" visualization tool. I also noticed that it seems to capture slightly more projections than the node I made myself. In particular, some points that are close to the lidar do not get picked up by my method.

What does RViz do under the hood to achieve this visualization, and is there a better way for me to achieve this pixel <-> point correspondence? My end goal is to create a correspondence between the pixels in my image and the points in [x, y, z] captured by my lidar sensor.

In my code, I transform the pointcloud into the camera frame, then project the points that are in front of the camera onto the image (using the project3dToPixel method from image_geometry), so I end up with a list of pixels and their corresponding 3D points. It works decently (though it's not yet optimized), but RViz seems to do a better job, and I want to leverage as many pre-existing tools as possible.
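One wrinkle I'd eventually need to handle is that several lidar points can project onto the same pixel, and a rendered view (which is presumably what RViz produces, with depth testing) keeps only the nearest one per pixel. Here's a rough, untested sketch of doing that with a simple per-pixel depth buffer; the helper name and the width/height arguments are just placeholders, and it assumes the projected_pixels/projected_points lists my code below produces:

import numpy as np

def nearest_point_per_pixel(projected_pixels, projected_points, width, height):
    # keep only the closest 3D point for each image pixel (a simple z-buffer)
    depth = np.full((height, width), np.inf)   # nearest depth seen so far per pixel
    correspondence = {}                        # (u, v) -> [x, y, z] in the camera frame
    for (u, v), point in zip(projected_pixels, projected_points):
        u, v = int(round(u)), int(round(v))
        z = point[2]                           # depth along the optical axis
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z
            correspondence[(u, v)] = point
    return correspondence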

Any suggestions on how to leverage the fact that RViz seemingly already does this transformation/projection, or some other tool to achieve this pixel <-> point correspondence?
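One pre-existing piece I could lean on, instead of doing the rotation/translation by hand like I do below, is tf2's do_transform_cloud, which transforms a whole PointCloud2 in one call. A rough, untested sketch (assuming a node is already initialized):

import rospy
import tf2_ros
from tf2_sensor_msgs.tf2_sensor_msgs import do_transform_cloud

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

def cloud_in_camera_frame(cloud_msg):
    # look up velodyne -> camera optical transform and apply it to the whole cloud
    transform = tf_buffer.lookup_transform('camera_color_optical_frame',
                                           cloud_msg.header.frame_id,
                                           rospy.Time(0), rospy.Duration(1.0))
    return do_transform_cloud(cloud_msg, transform)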

In the picture below, my method is shown on the left and RViz on the right; you can see that I'm missing some projections. My code is not at all optimized and probably inefficient, but I'll include it too.

#!/usr/bin/env python3
import rospy
import image_geometry
from sensor_msgs.msg import CameraInfo, Image, PointCloud2
import numpy as np
import ros_numpy
import tf

rospy.init_node('project_points')

cam = image_geometry.PinholeCameraModel()
camera_info = rospy.wait_for_message('/camera/color/camera_info', CameraInfo)
cam.fromCameraInfo(camera_info)
listener = tf.TransformListener()
listener.waitForTransform('camera_color_optical_frame', 'velodyne', rospy.Time(), rospy.Duration(5))
translation, rotation = listener.lookupTransform('camera_color_optical_frame', 'velodyne', rospy.Time(0))

#velodyne to camera rotation, built from the quaternion tf returned
rotation_matrix = tf.transformations.quaternion_matrix(rotation)[:3, :3]

while not rospy.is_shutdown():
    ros_img = rospy.wait_for_message('/camera/color/image_raw', Image)
    velo_pcl2 = rospy.wait_for_message('/velodyne_points', PointCloud2)

    img = np.asarray(ros_numpy.numpify(ros_img))
    pcl = ros_numpy.point_cloud2.pointcloud2_to_xyz_array(velo_pcl2)

    projected_points = []
    projected_pixels = []

    ## transform the pointcloud into the camera optical frame: p_cam = R @ p_velo + t
    # (the rotation is applied before adding the translation, since tf returns the
    # translation expressed in the camera frame)
    pcl = np.transpose(np.matmul(rotation_matrix, np.transpose(pcl))) + translation
    #the pointcloud is now in the camera optical frame

    #loop through the pointcloud and keep points ahead of the camera that project inside the image bounds
    for point in pcl:
        if point[2] > 0:
            pixel = cam.project3dToPixel(point)
            if 0 <= pixel[0] < camera_info.width and 0 <= pixel[1] < camera_info.height:
                projected_pixels.append(pixel)
                projected_points.append(point)

[Image: side-by-side comparison of the projections: my method (left), RViz camera view (right)]
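Since the per-point loop above is slow, the projection could also be vectorized with the camera's intrinsic matrix. The sketch below is untested and the helper name is a placeholder; it assumes the cloud has already been transformed into the camera optical frame:

import numpy as np

def project_cloud(pcl_cam, K, width, height):
    # pcl_cam: (N, 3) points already in the camera optical frame
    in_front = pcl_cam[:, 2] > 0                 # keep points ahead of the camera
    pts = pcl_cam[in_front]
    uvw = np.matmul(K, pts.T).T                  # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]                # perspective divide
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return uv[inside], pts[inside]

# e.g. K = np.array(camera_info.K).reshape(3, 3), or for a rectified image the
# left 3x3 block of camera_info.P, which is closer to what project3dToPixel uses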

Asked by edvart-ros on 2022-12-17 17:46:44 UTC

Comments

Answers