I think the best course of action is to have a single nodelet that performs all your CUDA-related processing. Nodelets are meant to provide an efficient means of passing messages between nodes in the same process (i.e. to efficiently implement the message part of a ROS API), not to "transport" another library's (such as CUDA's) API. It might be possible to distribute CUDA processing across multiple nodelets in a hacky way, e.g. by passing device pointers or whole-context information around in messages, but that sounds brittle.
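As a rough sketch of the single-nodelet approach, something like the following could own the CUDA context and run the whole GPU pipeline in one callback. The class, package, and topic names here are my own assumptions, not an existing package, and the actual GPU work is only indicated by comments:

```cpp
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

namespace cuda_pipeline  // hypothetical package name
{
class CudaProcessingNodelet : public nodelet::Nodelet
{
public:
  virtual void onInit()
  {
    ros::NodeHandle& nh = getNodeHandle();
    // Initialize the CUDA context once here, so every processing
    // stage shares it inside this single process, e.g.:
    // cudaSetDevice(0);
    sub_ = nh.subscribe("image_in", 1,
                        &CudaProcessingNodelet::imageCb, this);
    pub_ = nh.advertise<sensor_msgs::Image>("image_out", 1);
  }

private:
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    // Run the entire GPU pipeline in one place: upload once,
    // chain kernels on the device, download once, then publish.
    sensor_msgs::Image out = *msg;  // placeholder for real GPU work
    pub_.publish(out);
  }

  ros::Subscriber sub_;
  ros::Publisher pub_;
};
}  // namespace cuda_pipeline

PLUGINLIB_EXPORT_CLASS(cuda_pipeline::CudaProcessingNodelet,
                       nodelet::Nodelet)
```

Because everything runs in one process, intermediate results can stay in device memory between stages instead of being serialized through topics.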

You could for instance use pluginlib inside your single CUDA nodelet to selectively load CUDA processing plugins at start- or run-time, which would to some extent emulate the flexibility of nodelets. The drawback is that this approach hasn't been standardized, so there is no existing code for it.
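A minimal sketch of that idea, assuming a hypothetical `CudaStage` base class and a `cuda_pipeline` package exporting plugins (no such standard interface exists today):

```cpp
#include <pluginlib/class_loader.h>
#include <boost/shared_ptr.hpp>
#include <cstddef>
#include <string>
#include <vector>

// Minimal interface each CUDA processing stage would implement.
// Stages operate on device memory owned by the enclosing nodelet,
// so data never leaves the GPU between stages.
class CudaStage
{
public:
  virtual ~CudaStage() {}
  virtual void process(float* dev_buffer, std::size_t n) = 0;
};

// Load the stages listed in `names` (e.g. from a ROS parameter),
// giving a configurable pipeline inside the single nodelet.
void loadStages(const std::vector<std::string>& names,
                std::vector<boost::shared_ptr<CudaStage> >& stages)
{
  // "cuda_pipeline" and "CudaStage" are assumed names for the
  // plugin package and base class.
  pluginlib::ClassLoader<CudaStage> loader("cuda_pipeline", "CudaStage");
  for (std::size_t i = 0; i < names.size(); ++i)
  {
    stages.push_back(loader.createInstance(names[i]));
  }
}
```

The pipeline composition could then be chosen via a parameter or launch file, much as nodelets are composed into a nodelet manager.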

Ecto might also be interesting to look at, as it likewise aims at building processing pipelines. I haven't seen any CUDA-related code for it, though.