I am not sure I completely get what your goal looks like, but from what I understand these points could be of interest to you:
- The `align_depth` parameter and the `aligned_depth_to_color` topics of the realsense driver. These allow you to easily read out the depth associated with pixels in your color image (for more information see this post). Beware that depth images might just contain `0`s where no depth information is available.
- The `camera_info` topic, which you can use to e.g. calculate the ray defined by a pixel inside your bounding box (see the `projectPixelTo3dRay()` function). Based on this you should be able to calculate any information you need to plan a robot movement.

With these, you should be able to write a node that does this:
- Creates an `image_geometry::PinholeCameraModel` and listens to the `camera_info` topic to update it.
- Uses `image_transport::Subscriber`s to listen to both the color and the aligned depth image from your camera and keeps the most current one.

I hope this gives an overview of the basic steps required. You might need some simple filtering to keep your estimate from jumping around too much (I have had that happen with realsense depth images before), but this should become apparent once you see your first estimates. If you want to actually move your robot to the detected position, you might also need to use TF2 to transform your estimate into a suitable base frame.
If I missed some points from your question, or if you have remaining questions, feel free to ask.