Is people detection enough to mark detected people on a simulated map, e.g. in RViz?
Hello all.
We are now in the later stages of our project. We have already integrated ROS with YOLO for people detection, and the robot can avoid obstacles using its sonars and IRs. My question this time: how can we mark a detected person on a simulated map, just to know where it is? We only have the sonars, the IRs, and the Raspberry Pi camera that YOLO uses; we have no Kinect or other depth sensor. Is this possible?

And if we did have an RGB-D camera, could ROS + YOLO do this on its own, or would we also need a people-tracking package? Can a robot go to a detected person without some sort of person-tracking package? I'm really confused about people detection vs. people tracking. Can someone enlighten me, please?
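In case it helps clarify what I'm imagining: without a depth camera, one rough approach I've read about is to take the horizontal center of the YOLO bounding box, turn it into a bearing angle using the camera intrinsics (the pinhole model), and combine that bearing with a sonar range reading and the robot's pose to get an approximate map position, which could then be published as an RViz marker. The sketch below is only the geometry part, with made-up example numbers for the focal length and principal point (those would come from camera calibration); it is not any particular package's API, just the math I think would be involved:

```python
import math

def bearing_from_bbox(u_center, cx, fx):
    """Horizontal bearing (radians) of a detection under a pinhole camera model.

    u_center: pixel column of the bounding-box center.
    cx, fx:   principal-point column and focal length in pixels
              (from camera calibration -- example values only here).
    Positive bearing means the person is to the left of the optical axis,
    matching the ROS convention of +x forward, +y left.
    """
    return math.atan2(cx - u_center, fx)

def person_on_map(robot_x, robot_y, robot_yaw, bearing, range_m):
    """Project a detection into map coordinates.

    Uses the robot's pose in the map frame plus a range estimate
    (e.g. the sonar reading in the direction of the detection).
    """
    theta = robot_yaw + bearing
    return (robot_x + range_m * math.cos(theta),
            robot_y + range_m * math.sin(theta))

# A person centered in a 640-px-wide image (u = cx = 320), 2 m away,
# seen by a robot at the map origin facing along +x, lands at (2, 0).
b = bearing_from_bbox(320, 320, 500)       # 0.0 rad: straight ahead
px, py = person_on_map(0.0, 0.0, 0.0, b, 2.0)
```

The resulting (px, py) point is what I'd feed into a visualization marker. I realize sonar is wide-beamed, so the range could easily belong to some other obstacle; that's part of why I'm unsure whether detection alone is enough or whether a tracking package is needed to keep the estimate consistent over time.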
In short: if I want my robot to go to a detected person, do I need a tracking package?