
Verify if world point is in camera's field of view

asked 2018-06-04 09:54:26 -0500 by SancheZ

updated 2018-06-04 09:56:36 -0500

Hi, I'm trying to verify if a point in the world is in the field of view of the Kinect that I'm using. I have the coordinates of that point in the world (x,y,z), afterwards I can transform that point to the camera_depth_optical_frame using something like:


transform = self.tf_buffer.lookup_transform('camera_depth_optical_frame', 'map', rospy.Time(0), rospy.Duration(1.0))
object_on_camera = tf2_geometry_msgs.do_transform_pose(object_on_world, transform) #point on camera_depth_optical_frame
Notice that "object_on_world" and "object_on_camera" are PoseStamped structures. Given that point, how can I know whether the camera should be seeing it or not? I think I need to get the horizontal and vertical angles that this point makes with the camera using the depth 'z', right? Then I need to verify that they lie within the accepted angles for the FOV. But how do I get these angles using the camera_depth_optical_frame? I'm using openni for the Kinect v1 drivers. One more thing: does anyone know the Kinect v1 field of view exactly?
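For reference, here is roughly how that transform step fits together end to end (a minimal sketch only; the tf2 buffer/listener setup and the example coordinates are assumptions, not my exact code):

import rospy
import tf2_ros
import tf2_geometry_msgs
from geometry_msgs.msg import PoseStamped

rospy.init_node('fov_check_example')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# Pose of the object in the world (map) frame; coordinates are placeholders.
object_on_world = PoseStamped()
object_on_world.header.frame_id = 'map'
object_on_world.pose.orientation.w = 1.0
object_on_world.pose.position.x = 1.0
object_on_world.pose.position.y = 2.0
object_on_world.pose.position.z = 0.5

# Transform the pose into the camera optical frame.
transform = tf_buffer.lookup_transform('camera_depth_optical_frame', 'map', rospy.Time(0), rospy.Duration(1.0))
object_on_camera = tf2_geometry_msgs.do_transform_pose(object_on_world, transform)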

Thanks for your patience, SancheZ


2 Answers


answered 2018-06-04 13:44:13 -0500 by lucasw

updated 2018-06-04 15:34:20 -0500 by NEngelhard

Do you have a (valid) camera_info along with the images from the Kinect?

If so, then search answers.ros.org for "project pointcloud to image".

I don't know if there is a complete end-to-end example anywhere, but looking up project3dToPixel produces a lot of results. If I get around to it I'll update this answer with one, or link to another answer that does (or anyone else is welcome to do that...).

If the projected point is within the resolution of the camera (between (0, 0) and the image width/height in pixels) then it should be visible. An additional check is probably required to make sure the object is in front of the camera rather than behind it, i.e. whether the z value of the transformed point is positive or not.
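Something along these lines (a rough, untested sketch; the function name is illustrative, and it assumes the point has already been transformed into the camera optical frame and that a sensor_msgs/CameraInfo message is at hand):

from image_geometry import PinholeCameraModel

def point_visible(camera_info_msg, x, y, z):
    # Build the pinhole model from the camera_info message.
    model = PinholeCameraModel()
    model.fromCameraInfo(camera_info_msg)

    # In the optical frame z points forward, so non-positive z is behind the camera.
    if z <= 0.0:
        return False

    # Project the 3D point (in the optical frame) to pixel coordinates.
    u, v = model.project3dToPixel((x, y, z))

    # Visible if the projected pixel lands inside the image bounds.
    return 0.0 <= u < camera_info_msg.width and 0.0 <= v < camera_info_msg.height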

If you are still interested in the Kinect angles of view, you can calculate them from the fx/fy parameters in the camera_info. https://stackoverflow.com/questions/3...
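For example (a sketch of the standard pinhole relation fov = 2 * atan(size / (2 * focal_length)); K[0] and K[4] are fx and fy in the 3x3 K matrix of camera_info):

import math

def fov_from_camera_info(camera_info_msg):
    fx = camera_info_msg.K[0]
    fy = camera_info_msg.K[4]
    fov_x = 2.0 * math.atan(camera_info_msg.width / (2.0 * fx))
    fov_y = 2.0 * math.atan(camera_info_msg.height / (2.0 * fy))
    return math.degrees(fov_x), math.degrees(fov_y)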


answered 2018-06-05 08:50:47 -0500 by SancheZ

updated 2018-06-05 08:55:44 -0500

I was able to check whether the objects are in the field of view using the approach that I mentioned above. I project the object point into the camera_depth_optical_frame. Using the arctangent of (x/z) you get the horizontal angle, and likewise for the vertical angle. I also have another check: if the object is not in the room where the robot is located, it is not considered in the field of view either.


import math
import numpy as np
import rospy
import tf2_geometry_msgs
from geometry_msgs.msg import PoseStamped

def verify_field_of_view(self, object_x_pos, object_y_pos, object_z_pos):
    # Object should be in the field of view of the camera and in the room that we are in.
    bl_pose = PoseStamped()
    bl_pose.header.frame_id = "base_link"  # or odom?!?
    bl_pose.pose.orientation.w = 1.0
    bl_pose.pose.position.x = 0.0
    bl_pose.pose.position.y = 0.0

    transform = self.tf_buffer.lookup_transform("map", "base_link", rospy.Time(0), rospy.Duration(1.0))
    bot_in_world = tf2_geometry_msgs.do_transform_pose(bl_pose, transform)  # robot position in the map frame
    current_room = self.room_retriever(xpos=bot_in_world.pose.position.x, ypos=bot_in_world.pose.position.y)  # room where the robot is located
    if current_room != self.room_retriever(xpos=object_x_pos, ypos=object_y_pos):  # room where the object belongs
        print("object doesn't belong to this room")
        return False

    object_on_world = PoseStamped()
    object_on_world.header.frame_id = 'map'
    object_on_world.pose.orientation.w = 1.0
    object_on_world.pose.position.x = object_x_pos
    object_on_world.pose.position.y = object_y_pos
    object_on_world.pose.position.z = object_z_pos

    transform = self.tf_buffer.lookup_transform('camera_depth_optical_frame', 'map', rospy.Time(0), rospy.Duration(1.0))
    object_on_camera = tf2_geometry_msgs.do_transform_pose(object_on_world, transform)  # point in camera_depth_optical_frame

    if object_on_camera.pose.position.z < self.min_depth or object_on_camera.pose.position.z > self.max_depth:
        return False  # objects behind the camera, or closer/farther than the accepted depth range, are not considered in the field of view

    yangle = np.arctan(object_on_camera.pose.position.y / object_on_camera.pose.position.z)  # rad
    xangle = np.arctan(object_on_camera.pose.position.x / object_on_camera.pose.position.z)  # rad

    fovx = 57.0  # horizontal field of view in deg
    fovy = 43.0  # vertical field of view in deg
    if abs(xangle) < fovx / 2.0 * math.pi / 180.0 and abs(yangle) < fovy / 2.0 * math.pi / 180.0:
        return True
    else:
        return False

Another thing: it is really easy to get the field of view of your Kinect, but I need to know the intrinsic parameters of the camera. Where is that info? Which parameters does openni use by default? I never calibrated the Kinect. Any suggestions for getting the intrinsics? We can get the field of view from the intrinsics.
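One quick way to inspect them is to echo the camera_info topic with rostopic echo, or to grab a single message in code, roughly like this sketch (the topic name is an assumption for the default openni launch files and may differ on your setup):

import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node('print_intrinsics')
# Wait for a single CameraInfo message; the topic name is a guess for openni.
info = rospy.wait_for_message('/camera/depth/camera_info', CameraInfo)
print(info.K)  # fx = K[0], fy = K[4], cx = K[2], cy = K[5]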

