Detect moving obstacles for navigation
Hello,
I am currently using a Kinect v2 with rtabmap_ros to create a 3D map and a 2D projected map of the surroundings. I then navigate through the static environment with the 2D projected map and the ROS navigation stack.
Now I want to detect moving obstacles on this map, estimate their velocities, and predict and avoid possible collisions. My first attempt was to calculate the optical flow of the 2D projected map with the opencv_apps package, but the resolution is not good enough for the optical flow to work reliably. My idea is to calculate the optical flow directly on the output image of the Kinect camera and then transform the result into my map coordinate frame. However, I am stuck on several problems:
- How do I effectively compensate for the camera's own movement? I know its velocity from the visual odometry node of rtabmap. Can I simply transform that twist into the image frame? (See the first sketch after this list.)
- How can I transform the pixel coordinates that I get from the optical flow into my map frame? I guess I need to use the camera parameters from the camera_info topic and then transform from the kinect_rgb_optical_frame to the map frame, but I am not sure about this one. (See the second sketch below.)
- Is it easier, or even possible, to calculate the flow directly on the point cloud or depth image? There I would have direct depth information, which the RGB image does not provide by itself. (See the third sketch below.)
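To make the first question more concrete, here is roughly what I imagine for the ego-motion compensation: predict the flow that the camera's own twist would cause (using the registered depth image and the pinhole intrinsics from camera_info) and subtract it from the measured flow. This is only an untested sketch; it assumes the twist from rtabmap's odometry has already been rotated into the RGB optical frame, and the function name is just a placeholder:

```python
import numpy as np

def ego_motion_flow(depth, fx, fy, cx, cy, v, w, dt):
    """Flow (in pixels) that the camera's own motion would induce over dt seconds.

    depth: HxW array of z-depth in metres (registered to the RGB image)
    v, w:  linear [m/s] and angular [rad/s] velocity in the camera optical frame
    """
    h, w_img = depth.shape
    u, vpix = np.meshgrid(np.arange(w_img), np.arange(h))
    x = (u - cx) / fx                       # normalised image coordinates
    y = (vpix - cy) / fy
    Z = np.where(depth > 0, depth, np.inf)  # invalid depth -> no translational flow

    vx, vy, vz = v
    wx, wy, wz = w
    # differential motion-field equations for a static point seen by a moving camera
    x_dot = (-vx + x * vz) / Z + wx * x * y - wy * (1.0 + x**2) + wz * y
    y_dot = (-vy + y * vz) / Z + wx * (1.0 + y**2) - wy * x * y - wz * x

    return np.dstack((fx * x_dot * dt, fy * y_dot * dt))  # back to pixel units

# Whatever is left after subtracting this from the measured flow should come
# mostly from independently moving objects:
# residual = measured_flow - ego_motion_flow(depth, fx, fy, cx, cy, v, w, dt)
```

For the second question, my current understanding is: take the pixel, look up its depth in the registered depth image, back-project it with the pinhole model from camera_info, and let tf do the rest. Again an untested sketch; the camera_info topic and the frame names are just what my kinect2_bridge setup uses, and pixel_to_map is a made-up helper:

```python
import rospy
import tf2_ros
import tf2_geometry_msgs              # registers PointStamped with tf2
from image_geometry import PinholeCameraModel
from geometry_msgs.msg import PointStamped
from sensor_msgs.msg import CameraInfo

rospy.init_node('pixel_to_map_sketch')

cam_model = PinholeCameraModel()
cam_model.fromCameraInfo(
    rospy.wait_for_message('/kinect2/qhd/camera_info', CameraInfo))

tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)

def pixel_to_map(u, v, depth, stamp):
    """Back-project pixel (u, v) with z-depth [m] and express it in the map frame."""
    ray = cam_model.projectPixelTo3dRay((u, v))   # unit-length ray through the pixel
    scale = depth / ray[2]                        # rescale so the z component equals the depth
    p = PointStamped()
    p.header.frame_id = cam_model.tfFrame()       # e.g. kinect2_rgb_optical_frame
    p.header.stamp = stamp
    p.point.x, p.point.y, p.point.z = ray[0] * scale, ray[1] * scale, depth
    # tf2 handles the optical_frame -> map transform, as long as the tf tree is connected
    return tf_buffer.transform(p, 'map', rospy.Duration(0.5))
```

And for the third question, what I have in mind so far is not real scene flow but a cheap approximation: compute the dense flow on the RGB image and look up the depth of both flow endpoints in the registered depth images, which gives a 3D displacement per pixel in the optical frame. Untested as well; fx/fy/cx/cy are the intrinsics from camera_info and flow_3d is just a placeholder name:

```python
import cv2
import numpy as np

def flow_3d(gray_prev, gray_curr, depth_prev, depth_curr, fx, fy, cx, cy):
    """Dense Farneback flow plus depth lookup -> per-pixel 3D displacement [m]."""
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_prev.shape
    u0, v0 = np.meshgrid(np.arange(w), np.arange(h))
    u1 = np.clip(u0 + flow[..., 0], 0, w - 1).astype(int)   # matched pixel at t
    v1 = np.clip(v0 + flow[..., 1], 0, h - 1).astype(int)

    def backproject(u, v, z):
        # pinhole back-projection into the camera optical frame
        return np.dstack(((u - cx) / fx * z, (v - cy) / fy * z, z))

    p0 = backproject(u0, v0, depth_prev)            # 3D points at t-1
    p1 = backproject(u1, v1, depth_curr[v1, u1])    # corresponding points at t
    return p1 - p0
```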
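Transforming those 3D displacements into the map frame would then follow the same pixel_to_map idea as in the second sketch, once the ego-motion has been removed.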
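I am not sure whether these sketches are sound, which is exactly why I am asking.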
I would appreciate any help or opinions on my thoughts. If anyone has a better idea for solving this problem, let me know! Thanks in advance,
Sven
Did you find an answer to this? I am looking into similar questions. Thanks!