Hardware Requirements ROS + Intel Realsense Depth Camera

asked 2019-12-09 09:17:24 -0500 by bvaningen

updated 2019-12-09 09:19:17 -0500

I want to build an imaging system that uses an Intel RealSense Depth Camera D415 to locate an aerial robot in its view and then control the robot. The system needs to run at 60 frames per second using the OpenCV library. I have been unable to find any examples online that specify the rate at which their imaging ROS systems run.

What computer hardware (processor, RAM, etc.) would I need to reach these specifications? Are there any examples that I may have missed? And what difference would it make to use ROS 1 versus ROS 2?


1 Answer


answered 2019-12-09 11:00:46 -0500

It depends dramatically on what you want to do with the camera.

What algorithms are you running? What is your benchmark for how fast you can run them on another machine? There is nothing particularly special about the RealSense other than the driver's overhead. Have you evaluated your processing pipeline on its own and determined its hardware requirements? If so, you should just need to add the RealSense driver overhead on top.
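One way to do that evaluation is sketched below. The frame size and the threshold-and-sum placeholder are assumptions, not part of the original answer; substitute your real OpenCV detection code. The idea is simply to time your algorithm per frame and compare against the ~16.7 ms budget that 60 fps allows.

```python
import time

import numpy as np

# Hypothetical stand-in for a detection pipeline -- replace with your real
# OpenCV code; the point is only to time YOUR algorithm per frame.
def process_frame(frame):
    mask = frame > 128          # toy thresholding step
    return int(mask.sum())      # toy reduction step

# Fake frame roughly the size of a D415 color stream (assumed resolution).
frame = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)

n = 100
start = time.perf_counter()
for _ in range(n):
    process_frame(frame)
elapsed_ms = (time.perf_counter() - start) / n * 1000.0

budget_ms = 1000.0 / 60  # ~16.7 ms available per frame at 60 fps
print(f"avg {elapsed_ms:.2f} ms/frame vs a {budget_ms:.2f} ms budget")
print("fits 60 fps" if elapsed_ms < budget_ms else "does not fit 60 fps")
```

Run this on the candidate hardware (e.g. the NUC you are considering), then add the measured driver overhead on top of the result.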


Comments

That is a good point. The problem is that the system I want to build cannot run on a standard desktop or laptop, since it needs a small physical footprint; I am considering the Intel NUC computers. If the ROS middleware adds so much overhead that performance becomes an issue, then I may have to consider other routes and miss out on the useful ROS tools.

I guess what I am really asking is: is there a minimum hardware requirement when working with high-frame-rate video and ROS? Perhaps with a concrete example?

bvaningen  ( 2019-12-18 03:59:06 -0500 )

As mentioned above, it completely depends on what you want to do with the data. There is very little overhead if you do nothing with it. If you set up a large pipeline with many interacting stages, each of which needs its own writable copy of the image data at frame rate, you will need a lot more resources. ROS is a tool that does what you ask of it; we cannot estimate what resources it will take unless you can describe what you actually want it to do. And since providing that description accurately is hard, most people test on representative systems and extrapolate.

From my experience, and from the answer above, on a well-designed system the majority of the computational time will be spent in your actual computer vision ...(more)

tfoote  ( 2019-12-18 04:39:34 -0500 )
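To put a rough number on the copying cost mentioned in the comment above, here is a small sketch; the 1280x720 RGB8 frame size is an assumption, not a measurement from the thread.

```python
import time

import numpy as np

# One color frame roughly the size a D415 publishes (assumed 1280x720 RGB8).
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
per_frame_mb = frame.nbytes / 1e6

n = 200
start = time.perf_counter()
for _ in range(n):
    _ = frame.copy()  # each extra writable consumer forces a copy like this
copy_ms = (time.perf_counter() - start) / n * 1000.0

print(f"{per_frame_mb:.1f} MB/frame; one extra copy costs ~{copy_ms:.2f} ms here")
print(f"at 60 fps, each copy adds {per_frame_mb * 60:.0f} MB/s of memory traffic")
```

Multiply that by the number of stages in the pipeline that each demand write access, and the resource estimate grows accordingly.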

Personally, having used a RealSense D435i, I have to say there are other considerations as well. We used an Intel Celeron x86-based SBC, and even then the D435i was outputting so much data via USB 3 that it took up one of the four cores available on the SBC. Mind you, this SBC was supposed to handle autonomous navigation, obstacle avoidance, object recognition, path planning, etc., and just publishing RealSense data took up almost one core (more than one core if point clouds are published). Other people I talked to also mentioned that the D435i driver was unstable due to too much data being pumped, and in the end they used it like a normal RGB camera. If you can afford it, I would highly recommend an Intel NUC (i5/i7), as those will handle the RealSense better. But the NUC has limited port availability, since the ...(more)

hashirzahir  ( 2020-04-20 07:52:02 -0500 )
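As a rough illustration of why publishing the raw streams alone can occupy a core, here is some back-of-envelope bandwidth arithmetic. The stream resolutions, frame rates, and pixel formats below are assumptions for illustration, not measurements from the SBC described above.

```python
# Back-of-envelope USB bandwidth for two typical RealSense streams.
# Resolutions, frame rates, and pixel formats are assumed -- check what
# your driver is actually configured to publish.
streams = {
    "depth 848x480 @ 60 fps (16-bit Z16)": 848 * 480 * 2 * 60,
    "color 1280x720 @ 30 fps (RGB8)": 1280 * 720 * 3 * 30,
}
total = 0
for name, bytes_per_sec in streams.items():
    print(f"{name}: {bytes_per_sec / 1e6:.1f} MB/s")
    total += bytes_per_sec
print(f"total: {total / 1e6:.1f} MB/s, before any point cloud generation")
```

Well over 100 MB/s has to be received, converted, and republished every second, which is consistent with the near-one-core load reported in the comment.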
