Camera Nodelet using GPU memory?
Hello all,
This is my first question, so I apologize for any formatting mistakes.
I developed a simple driver in C++ for an IDS camera using the IDS drivers and a ROS nodelet. The cameras publish using image_transport. However, I noticed approximately 100 MB of additional GPU memory usage for every new camera that starts publishing. Since my system runs multiple deep-neural-network nodes, I would like to keep as much GPU memory free as possible.
Why is this happening? Does anyone have any idea?
My guess is that image_transport may be using the GPU for image compression or other tasks. If so, is there a way to disable that and do everything on the CPU?
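For reference, the publishing path looks roughly like the sketch below. This is a minimal illustration, not the actual driver code: the class name, topic name, and pixel format are made up, and the frame copy is a plain CPU copy into the message buffer.

```cpp
#include <ros/ros.h>
#include <nodelet/nodelet.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <pluginlib/class_list_macros.h>
#include <boost/make_shared.hpp>

// Minimal sketch of the publishing path (illustrative names, not the real
// driver). The frame is copied from a CPU-side buffer into the message;
// nothing in this code path touches the GPU.
class IdsCameraNodelet : public nodelet::Nodelet
{
public:
  virtual void onInit()
  {
    it_.reset(new image_transport::ImageTransport(getPrivateNodeHandle()));
    pub_ = it_->advertise("image_raw", 1);
    // The real driver would start a capture loop here and call
    // publishFrame() for every frame copied out of the camera buffer.
  }

private:
  void publishFrame(const uint8_t* data, int width, int height)
  {
    sensor_msgs::ImagePtr msg = boost::make_shared<sensor_msgs::Image>();
    msg->header.stamp = ros::Time::now();
    msg->width        = width;
    msg->height       = height;
    msg->encoding     = sensor_msgs::image_encodings::MONO8;  // camera dependent
    msg->step         = width;  // bytes per row for mono8
    msg->data.assign(data, data + static_cast<size_t>(width) * height);
    pub_.publish(msg);  // transport plugins (raw, compressed, theora, ...) fan out here
  }

  boost::shared_ptr<image_transport::ImageTransport> it_;
  image_transport::Publisher pub_;
};

PLUGINLIB_EXPORT_CLASS(IdsCameraNodelet, nodelet::Nodelet)
```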
Quick comment: it's very likely the IDS driver is using the GPU. I'm not aware of code in image_transport doing this, at least not directly (i.e. using any of the regular ways to do it, such as CUDA or OpenCL). But as image_transport is plugin based, you could of course have plugins installed which do. We cannot know; that would be something for you to check (rospack plugins --attrib=plugin image_transport will list the installed ones).

Indeed. To rule out the possibility of the IDS driver using the GPU, I am using Device Independent Bitmap (DIB) mode, which does not use GPU processing or GPU memory; it uses the CPU and main memory instead. Since I allocate the memory and copy each frame out of the camera buffer myself, I believe the driver is not using the GPU. I will take a look at the plugins. Currently the compressed and theora subtopics are being published; I will look into them and maybe disable them to check.
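If one of those plugins turns out to be the source of the allocations, image_transport can blacklist publisher plugins per base topic via the disable_pub_plugins parameter. A minimal sketch, assuming an image_transport version that supports this parameter; the node and topic names are illustrative:

```cpp
#include <string>
#include <vector>

#include <ros/ros.h>
#include <image_transport/image_transport.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ids_camera");
  ros::NodeHandle nh;

  // Blacklist the compressed and theora publisher plugins for this base
  // topic. The parameter must be set before advertise() so image_transport
  // sees it while loading plugins. Topic name is illustrative.
  std::vector<std::string> blacklist;
  blacklist.push_back("image_transport/compressed");
  blacklist.push_back("image_transport/theora");
  nh.setParam("image_raw/disable_pub_plugins", blacklist);

  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("image_raw", 1);  // raw only now

  ros::spin();
  return 0;
}
```

Because the parameter is resolved relative to the advertised base topic, the same blacklist can also be set from a launch file or a rosparam YAML before the driver starts.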