Callback with GPU processing
I have a Python node that does heavy image processing on the GPU. If a method on the image-processing instance is invoked inside a subscriber callback, the computation appears to fall back to the CPU (i.e. the CPU is heavily loaded and the call takes as long as a pure-CPU run). The only way I found to solve it was to have the callback set a global trigger and run the image processing in the main thread only when the trigger is set. Of course, this doesn't work for services.
Does anyone know what the underlying logic causing this is, and whether there's a more elegant solution? Thanks.
Not really an answer, but I seem to remember this has something to do with the 'context' in which Python calls into your GPGPU library. I.e. some objects are not copied/shared correctly between the main thread (which initialises your GPGPU access) and the thread handling the service invocation.
Hello, did you ever find a solution to this problem? I'm experiencing the same issue.
Not quite. I decided to stick with the original workaround, i.e. set a global trigger in the subscriber callback and execute the image processing in the main thread only when the trigger is on.
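A minimal sketch of that handoff using plain `threading` (no rospy required; the names `image_callback` and `process_once` are illustrative, and the "GPU work" is a stand-in). The point is that the callback stays cheap and only records data and raises a flag, while the heavy call happens on the main thread:

```python
import threading

trigger = threading.Event()  # global trigger set by the callback
latest_msg = []              # most recent message from the subscriber

def image_callback(msg):
    # keep the callback cheap: just store the data and raise the flag
    latest_msg.append(msg)
    trigger.set()

def process_once():
    # the heavy GPU call would go here, executed on the main thread
    return latest_msg[-1].upper()

# simulate a subscriber delivering a message from another thread
t = threading.Thread(target=image_callback, args=("frame",))
t.start()
t.join()

result = process_once() if trigger.is_set() else None
print(result)  # -> 'FRAME'
```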
Are you achieving this in Python? I'd like to implement the same, but I'm not sure how to check a global variable in the main thread once `rospy.spin()` is invoked. Plus, there's no rospy equivalent of `spinOnce()`. I may end up having to write a C++ wrapper, which I'd ideally like to avoid.

You can ignore my comment/question; I completely forgot that you can do the following: `while not rospy.is_shutdown()`. Thanks for your help.
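For anyone finding this later, here is a sketch of that polling loop. To keep it self-contained I use a `threading.Event` named `stop` as a stand-in for `rospy.is_shutdown()` and a trivial `gpu_process` placeholder; in a real node the loop condition would be `while not rospy.is_shutdown()` and you would throttle it with `rospy.Rate(...).sleep()`:

```python
import threading
import time

trigger = threading.Event()   # set by the subscriber callback
stop = threading.Event()      # stand-in for rospy.is_shutdown()
processed = []

def gpu_process(frame):
    # placeholder for the heavy GPU call
    return frame * 2

def main_loop():
    # mirrors: while not rospy.is_shutdown(): ...
    while not stop.is_set():
        if trigger.wait(timeout=0.01):
            trigger.clear()
            processed.append(gpu_process(21))
        # in a real node, rate.sleep() would go here

worker = threading.Thread(target=main_loop)
worker.start()
trigger.set()       # pretend a callback fired
time.sleep(0.05)
stop.set()          # pretend the node is shutting down
worker.join()
print(processed)    # -> [42]
```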