Hardware Requirements: ROS + Intel RealSense Depth Camera
I want to create an imaging system that uses an Intel RealSense depth camera (D415) to locate an aerial robot in its view and then control the robot. I need the system to run at 60 frames per second using the OpenCV library. I have been unable to find any examples online that specify the frame rate at which their imaging ROS systems operate.
What computer hardware (processor, RAM, etc.) would I need to reach these specifications? Are there any examples that I may have missed? What would be the difference between using ROS1 and ROS2?
Asked by bvaningen on 2019-12-09 10:17:24 UTC
Answers
It dramatically depends on what you want to do with your camera.
What algorithms are you running? What's your benchmark for how fast this runs on another machine? There's nothing particularly special about the RealSense other than the driver's overhead. Have you evaluated your processing pipeline and determined the hardware requirements for that alone? Then you should just need to add the RealSense driver overhead on top.
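One rough way to do that evaluation is to time your vision step in isolation and compare it against the per-frame budget (about 16.7 ms at 60 FPS). A minimal sketch, using a NumPy stand-in for the real OpenCV processing step (the resolution and the `process` function are assumptions you would replace with your own):

```python
import time
import numpy as np

FRAME_W, FRAME_H = 1280, 720        # assumed D415 color stream resolution
TARGET_FPS = 60
BUDGET_MS = 1000.0 / TARGET_FPS     # ~16.7 ms available per frame

def process(frame):
    # Stand-in for your real OpenCV detection/tracking step.
    # Swap in your actual algorithm before trusting the numbers.
    return frame.astype(np.float32).mean(axis=2)

frame = np.random.randint(0, 256, (FRAME_H, FRAME_W, 3), dtype=np.uint8)

# Warm up once, then average over many iterations for a stable estimate.
process(frame)
n = 50
start = time.perf_counter()
for _ in range(n):
    process(frame)
per_frame_ms = (time.perf_counter() - start) / n * 1000.0

print(f"per-frame: {per_frame_ms:.2f} ms, budget: {BUDGET_MS:.2f} ms")
print("fits 60 FPS" if per_frame_ms < BUDGET_MS else "too slow for 60 FPS")
```

If the pipeline alone already exceeds the budget on your candidate hardware, no amount of ROS tuning will get you to 60 FPS.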
Answered by stevemacenski on 2019-12-09 12:00:46 UTC
Comments
That is a good point. The problem is that the system I want to build cannot run on a standard desktop or laptop, since it needs a small physical footprint; I am considering the Intel NUC computers. If the ROS middleware adds so much overhead that performance becomes an issue, I may have to consider other routes, which would mean missing out on the useful ROS tools.
I guess what I am really asking is: is there a minimum hardware requirement when working with high-frame-rate video and ROS? Is there perhaps an example?
Commented by bvaningen on 2019-12-18 04:59:06 UTC
As mentioned above, it completely depends on what you want to do with the data. There is very little overhead if you do nothing with it. If you set up a large pipeline with many interacting stages, each of which needs its own writable copy of the image data at frame rate, you will need far more resources. ROS is a tool that does what you ask of it; we cannot estimate what resources it will take unless you describe what you actually want it to do. And since providing that description accurately is hard, most people test on representative systems and extrapolate.
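The cost of those extra writable copies is easy to measure in isolation. A rough sketch, using a NumPy array as a stand-in for an image message (the frame size is an assumed D415-like color frame, and the read-only case approximates what nodelets in ROS1 or intra-process zero-copy in ROS2 aim for):

```python
import time
import numpy as np

# A 1280x720 BGR8 frame, roughly what a D415 color stream delivers.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)

def time_it(fn, n=200):
    # Average wall-clock time per call, in milliseconds.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1000.0

# Read-only access: all stages can share one buffer.
share_ms = time_it(lambda: frame.mean())

# Write access: each stage must copy the frame before touching it.
copy_ms = time_it(lambda: frame.copy().mean())

print(f"shared read: {share_ms:.3f} ms, copy-then-read: {copy_ms:.3f} ms")
```

Multiply the copy overhead by the number of write-access stages and by 60 frames per second, and it becomes clear why pipeline design matters more than the middleware itself.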
From my experience, and as the answer above notes, on a well-designed system the majority of the computational time is spent in your actual computer vision algorithm, not in the overhead of ROS versus any other integration framework.
Answered by tfoote on 2019-12-18 05:39:34 UTC
Having used a RealSense D435i personally, I have to say there are other considerations as well. We used an Intel Celeron x86-based SBC, and even then the D435i was outputting so much data over USB 3 that just publishing it took up almost one of the SBC's four cores (more than one core if point clouds are published). Mind you, this SBC was also supposed to handle autonomous navigation, obstacle avoidance, object recognition, path planning, etc. Other people I talked to also mentioned that the D435i driver was unstable because of the sheer volume of data being pumped, and in the end they used it like a normal RGB camera. If you can afford it, I would highly recommend an Intel NUC (i5/i7), as those machines will handle the RealSense better. But the NUC has limited port availability, and the camera only has a USB 3 interface; it is also not known to work reliably over a USB hub (power and data requirements). The RealSense is good, if you can get it to work reliably.
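A rough back-of-the-envelope calculation shows why the driver alone can eat a core; the stream settings below are assumed example configurations, not fixed camera specs:

```python
# Rough raw-bandwidth estimate for typical RealSense stream settings
# (resolutions and rates here are assumptions, not the only options).
streams = {
    "depth 1280x720 @ 30 FPS, 16-bit": 1280 * 720 * 2 * 30,
    "color 1280x720 @ 30 FPS, BGR8":   1280 * 720 * 3 * 30,
}

total = sum(streams.values())
for name, bytes_per_sec in streams.items():
    print(f"{name}: {bytes_per_sec / 1e6:.1f} MB/s")
print(f"total: {total / 1e6:.1f} MB/s before any point cloud generation")
```

Well over a hundred megabytes per second has to be received, unpacked, and republished before any of your own processing runs, which matches the "almost one core" observation above.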
Answered by hashirzahir on 2020-04-20 07:52:02 UTC
Comments