Deep learning, pixel space, point clouds, and camera space
Hello all,
This may be a bit of an odd question, but right now I have a stereo camera that has been calibrated and outputs a PointCloud2 message that I can visualize in RViz. The stereo matching is decent, so it looks like it will work for what I need it to do.
I also have a deep learning function that I can pass images to, and it finds an object of interest in them. It gives me the object's location in the left (or, just as easily, the right) image in (x, y) pixel coordinates. My question is this:
How do I take the point cloud I'm publishing, combine it with that pixel location (x, y), and get the object's (X, Y, Z) coordinates in camera space (the frame the point cloud is referenced to)?
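For reference, here's a minimal sketch (ROS 1, Python) of what I imagine this lookup looks like, assuming the cloud is organized (height > 1) and registered to the left image so that pixel (u, v) indexes the cloud directly; the topic name and pixel values are placeholders:

```python
#!/usr/bin/env python
import math

import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2


def cloud_cb(cloud):
    # Placeholder detection: (u, v) = (column, row) in the left image
    u, v = 320, 240
    if cloud.height <= 1:
        rospy.logwarn("Cloud is unorganized; it can't be indexed by pixel")
        return
    # uvs=[(u, v)] makes read_points yield only the point at that pixel
    gen = pc2.read_points(cloud, field_names=("x", "y", "z"),
                          skip_nans=False, uvs=[(u, v)])
    x, y, z = next(gen)
    # NaNs mean the stereo matcher found no disparity at that pixel
    if any(math.isnan(c) for c in (x, y, z)):
        rospy.logwarn("No stereo match at pixel (%d, %d)", u, v)
        return
    rospy.loginfo("Camera-frame point at (%d, %d): %.3f %.3f %.3f",
                  u, v, x, y, z)


rospy.init_node("pixel_to_xyz")
rospy.Subscriber("/stereo/points2", PointCloud2, cloud_cb)
rospy.spin()
```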
I'm working in Python right now. What do I need to do to go from the image-space (x, y) to the corresponding (X, Y, Z) point in the point cloud, and then to get that camera-space XYZ over to an end effector so it can act on the object?
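For the last step, this is the kind of tf2 lookup I have in mind, assuming the camera's optical frame and the arm's base frame are connected in the TF tree (the frame names here are made up):

```python
import rospy
import tf2_ros
import tf2_geometry_msgs  # noqa: F401 -- registers PointStamped with tf2
from geometry_msgs.msg import PointStamped

rospy.init_node("camera_to_arm")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Camera-frame point from the cloud lookup above (values are placeholders)
pt = PointStamped()
pt.header.frame_id = "left_camera_optical_frame"  # made-up frame name
pt.header.stamp = rospy.Time(0)  # Time(0) = use the latest transform
pt.point.x, pt.point.y, pt.point.z = 0.1, 0.0, 0.8

# Transform into the arm's frame so the end effector can be sent there
pt_in_base = tf_buffer.transform(pt, "arm_base_link", rospy.Duration(1.0))
rospy.loginfo("Target in arm frame: %s", pt_in_base.point)
```

Is that roughly the right approach, or is there a more standard way to do this?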
Thanks in advance!