What's the difference between feeding 2-D point clouds and 2-D images to a CNN? [closed]
Can a projection of a 3-D point cloud onto a 2-D plane be fed to a CNN the same way a normal 2-D image is? In terms of model and architecture, what would need to change?
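To make the question concrete, here is a minimal sketch (assumptions: NumPy, a hypothetical `bev_projection` helper, and an arbitrary 64x64 grid) of one common projection: rasterizing a 3-D point cloud into a 2-D bird's-eye-view occupancy grid. The result is a single-channel array with the same shape as a grayscale image, so in principle it can be fed to a standard CNN unchanged.

```python
import numpy as np

def bev_projection(points, grid_size=(64, 64),
                   x_range=(-10.0, 10.0), y_range=(-10.0, 10.0)):
    """Rasterize an (N, 3) point cloud into a 2-D occupancy grid.

    Each point's (x, y) coordinates are binned into a fixed-size grid;
    the z coordinate is discarded here, though one could instead store
    max height or point density as the pixel value.
    """
    h, w = grid_size
    xs = np.clip(((points[:, 0] - x_range[0]) /
                  (x_range[1] - x_range[0]) * w).astype(int), 0, w - 1)
    ys = np.clip(((points[:, 1] - y_range[0]) /
                  (y_range[1] - y_range[0]) * h).astype(int), 0, h - 1)
    img = np.zeros(grid_size, dtype=np.float32)
    img[ys, xs] = 1.0  # mark occupied cells
    return img

# Example: 1000 random points become a 64x64 single-channel "image"
cloud = np.random.uniform(-10, 10, size=(1000, 3))
image = bev_projection(cloud)
print(image.shape)
```

One caveat worth noting: unlike a camera image, such a grid is sparse (mostly zeros) and the pixel values carry no texture or intensity information, which is typically why architectures or input encodings (e.g. height/density channels) are adapted rather than the raw CNN itself.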
This does not seem to be a ROS question.