You don't want to run an infinite loop inside the service -- while services can be stateful, they are modeled as a request/reply interface. You want to call the service and get a response quickly, without blocking if possible. When your service depends on synchronizing messages from several sources that arrive at different times (waiting on a particular transform, for example), use message_filters.ApproximateTimeSynchronizer (from the message_filters package) in a filter node: collect the messages you need, let the synchronizer pass them to its callback, send them from there to your service for whatever calculation you need to do, and get an answer back. The caller can then publish that response to a topic for something else to subscribe to, keeping your pipeline nice, clean, and relatively non-blocking.
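For reference, a minimal rospy service server for that kind of quick request/reply interface might look like the sketch below, using the AddTwoInts service type from rospy_tutorials as a stand-in for your own service definition:

import rospy
from rospy_tutorials.srv import AddTwoInts, AddTwoIntsResponse

def handle_add_two_ints(req):
    # Keep the handler short and non-blocking so callers get a reply quickly.
    return AddTwoIntsResponse(req.a + req.b)

rospy.init_node('add_two_ints_server')
rospy.Service('add_two_ints', AddTwoInts, handle_add_two_ints)
rospy.spin()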
Your nodes look like this:
approximate_time_filter_node:
    subscribes to topic1
    subscribes to topic2
    calls received_topic1_and_topic2 with the messages (in the order their Subscribers were passed) whenever messages are published on topic1 and topic2 within a given "slop" time
received_topic1_and_topic2:
    sends the callback arguments topic1_message and topic2_message to my_service, then publishes the service response message (or some part of it) to the service_response topic
service_response_subscriber:
    subscribes to service_response and does something with the messages it receives
Once topic1 and topic2 publish messages within a certain amount of time of each other, the synchronizer calls received_topic1_and_topic2 with the arguments topic1_message and topic2_message. They are passed to the callback in the order that their Subscribers are passed to the synchronizer. Here's a Python code snippet that shows how that looks:
from message_filters import ApproximateTimeSynchronizer, Subscriber
from sensor_msgs.msg import Imu
# DilutionOfPrecision, PositionRelativeNorthEastDown, and VelocityNorthEastDown
# are custom message types from my own package.

ApproximateTimeSynchronizer([
    Subscriber("repeated_dilution_of_precision", DilutionOfPrecision),
    Subscriber("position_relative_north_east_down", PositionRelativeNorthEastDown),
    Subscriber("velocity_north_east_down", VelocityNorthEastDown),
    Subscriber("imu_data", Imu)
], 10, 0.2).registerCallback(transform_and_publish)
Here 10 is the message queue size and 0.2 is the slop time in seconds, meaning that I want my callback called when messages arrive on all of these topics within 0.2 seconds of each other.
In received_topic1_and_topic2 you'll want to call rospy.wait_for_service(your_service_name), then call your service through a rospy.ServiceProxy instance. Once you get your response, publish it to some other topic for use elsewhere in the system. In Python, your callback will look like this:
import rospy
from rospy_tutorials.srv import AddTwoInts  # or import your own service type

def approximate_time_callback(topic1_message, topic2_message):
    rospy.wait_for_service('add_two_ints')
    add_two_ints = rospy.ServiceProxy('add_two_ints', AddTwoInts)
    try:
        # Build the request from the synchronized messages; the .data fields
        # here are placeholders for whatever your service actually needs.
        resp1 = add_two_ints(topic1_message.data, topic2_message.data)
        publisher.publish(resp1)
    except rospy.ServiceException as exc:
        print("Service did not process request: " + str(exc))
This assumes you have set up publisher in your filter node's initialization code, along the lines of the sketch below.
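Here's a minimal sketch of that initialization, assuming std_msgs/Int64 stand-ins for the two synchronized topics (headerless messages need allow_headerless=True) and the placeholder topic names from above; swap in your real message types and topics:

import rospy
from message_filters import ApproximateTimeSynchronizer, Subscriber
from std_msgs.msg import Int64
from rospy_tutorials.srv import AddTwoIntsResponse

rospy.init_node('approximate_time_filter_node')

# Publisher used by approximate_time_callback above. Publishing the service
# response type directly is unusual; you could instead publish just resp1.sum
# in a standard message type.
publisher = rospy.Publisher('service_response', AddTwoIntsResponse, queue_size=10)

ApproximateTimeSynchronizer([
    Subscriber('topic1', Int64),
    Subscriber('topic2', Int64)
], 10, 0.2, allow_headerless=True).registerCallback(approximate_time_callback)

rospy.spin()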
Depending on what you mean by "save", you may also want to look at the topic_tools utilities and at the Cache message filter. If your use case is persisting data from your topics to disk, you might use the message_filters synchronizers to collect messages from disparate topics, a service that wraps rosbag calls to write those messages to disk, and another service to read the written bags back from disk.
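If you go the rosbag route, the writing side can be as simple as the sketch below (with a hypothetical bag filename); the same rosbag.Bag calls could just as easily live behind a small service as described above:

import rospy
import rosbag

# Hypothetical bag file; keep it open for the life of the node.
bag = rosbag.Bag('synchronized_messages.bag', 'w')

def save_callback(topic1_message, topic2_message):
    # Write both synchronized messages under their own topic names.
    bag.write('topic1', topic1_message)
    bag.write('topic2', topic2_message)

rospy.on_shutdown(bag.close)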