
Service call failed: service [] responded with an error: error processing request: I/O operation on closed file

asked 2016-07-22 04:19:58 -0500 by highWaters

updated 2016-07-24 04:33:55 -0500

Hello,

I have a bunch of nodes running in ROS which call services from one another. In addition, each node writes general data to files to keep track of what's going on: one file for the main thread, one for a callback function, and one for the threads which handle service requests. For the latter I use a lock, so only one thread can write at a time. In general this seems to work fine, but occasionally it breaks with the following error: ValueError: I/O operation on closed file. Somehow the file gets closed? I'm not sure what might be causing the issue or how to debug it. Any suggestions would be appreciated.
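
For reference, the error itself just means that a write was attempted on a file object that has already been closed; a minimal standalone example (not from my node code, just to show what raises it):

f = open('log.txt', 'w')
f.close()
f.write('hello\n')   # ValueError: I/O operation on closed file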

EDIT - code

#!/usr/bin/env python

# ... some imports ...

class Agent:

    def __init__(self, delta, energy, ID, x, y, theta, red, green, blue, state, depend_nr, popSize, provaNr):

        # ... declaration of some attributes ...
        self.lock = Lock()

        filename = 'RESULT/pop_size.'+str(popSize) +'/prova.'+str(provaNr)+'/stdout_' + str(ID) + '_' + str(delta) +'_'+ str(depend_nr)
        self.stdout_log = open(filename, 'w')

        filename2 = 'RESULT/pop_size.'+str(popSize) +'/prova.'+str(provaNr)+'/stdout_callback' + str(ID) + '_' + str(delta) +'_'+ str(depend_nr)

        self.stdout_log_callback = open(filename2, 'w')

        filename3 = 'RESULT/pop_size.'+str(popSize) +'/prova.'+str(provaNr)+'/stdout_handle' + str(ID) + '_' + str(delta) +'_'+ str(depend_nr)
        self.stdout_log_handle = open(filename3, 'w')

        self.message = Message_Type()
        self.message.id = self.about_me_wm.ID
        self.message.rank = 10
        self.message.group = 1
        self.message.content = ''
        for x in self.active_servs:
            self.message.content = self.message.content + str(x[0]) + '|' 
        self.message.timestamp = time.strftime('%X', time.gmtime())

        ## ROS Topics and Services ################################################
        ###########################################################################
        rospy.init_node('agent', anonymous=True)

        myservice = '/agent' + str(ID) + '/serve'
        srv = rospy.Service(myservice, Service_One, self.handle_serve)

        self.publish_global = rospy.Publisher('msgs', Message_Type, queue_size=200)
        rospy.Subscriber('trigger', Message_Type, self.callback)

        rospy.sleep(10)
        ###########################################################################

    ## ROS Callbacks ##################################################################################
    ###################################################################################################
    def handle_serve(self, request):

        idx = -1

        self.lock.acquire()
        # ... modify some global vars ...

        self.stdout_log_handle.write('[handle_serve ' + str(local_handle) + '] request.incoming: ' + request.incoming + '\n')
        self.stdout_log_handle.write('[handle_serve ' + str(local_handle) + '] request.id: ' + str(request.id) + '\n')

        print 'Receiving request from: %d, for task: %d \n Current client: %d' % (request.id, self.service_req[idx], self.current_client[idx])
        self.stdout_log_handle.write('[handle_serve ' + str(local_handle) + '] ' + 'Receiving request from: %d, for task: %d. Current client: %d\n' % (request.id, self.service_req[idx], self.current_client[idx]))

        timeout = time.time() + 30 # stop loop 30 sec from now

        print 'service_resp: %s' % str(self.service_resp[idx])
        self.stdout_log_handle.write('[handle_serve ' + str(local_handle) + '] service_resp: %s\n' % str(self.service_resp[idx]))
        self.lock.release()

        while not self.service_resp[idx]:
            time.sleep(0.1)

            if time.time() > timeout:
                print 'timeout'
                self.timeouts = self.timeouts + 1
                self.service_resp_content[idx] = -1

                self.lock.acquire()
                self.stdout_log_handle.write('[handle_serve ' + str(local_handle) + '] timeout\n')
                self.lock.release()
                break

        outgoing = str(self.service_resp_content[idx])
        print 'request outgoing ' + outgoing
        self.lock.acquire()
        self.stdout_log_handle.write('[handle_serve ' + str(local_handle) + '] request outgoing ' + outgoing + '\n')
        self.lock.release()

        return outgoing

    def call_serve(self, server, myid, request, anyone_index):
        other_service = '/robot' + str ...

Comments

Example code would help.

Chrissi ( 2016-07-22 15:46:29 -0500 )

You're right, I shall do that.

highWaters ( 2016-07-23 17:26:08 -0500 )

1 Answer


answered 2016-07-24 04:56:56 -0500 by Chrissi

updated 2016-07-25 03:47:48 -0500

I don't really see why the file should be closed, but Python does a lot of automatic closing when the object handle is reassigned or the garbage collector kicks in. I also noticed that you never call close() on the files, which is bad practice and might lead to the files being left open and/or data being lost after the node dies. In general I would open and close the files before and after each write. This is as simple as:

def write_to_file(self, file_name, data):
    with open(file_name, 'a') as f:
        f.write(data)

The with statement makes sure that the file is closed once the block is left. Opening with 'a' puts the file in append mode, so new entries are simply added at the bottom. This not only prevents the error you are seeing, it also makes sure that all your data is flushed to the file before your node dies.
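
In your case you could combine this with the lock you already have, so the service-handler threads still write one at a time. A rough sketch, assuming you keep the file name around as an attribute (self.handle_log_name is a placeholder, not a name from your code):

def log_handle(self, line):
    # serialise the writers, then open/append/close in one step
    with self.lock:
        with open(self.handle_log_name, 'a') as f:
            f.write(line + '\n')

Every self.stdout_log_handle.write(...) in handle_serve then becomes self.log_handle(...), and there is no long-lived file object left around that could end up closed.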

As a hotfix for your code, you could just add:

if self.stdout_log_callback.closed:
    self.stdout_log_callback = open("put file name here", 'a')
self.stdout_log_callback.write('[callback ' + str(self.callback_bc) + '][ROSPY] I am: %d, I heard %d\n' % (self.about_me_wm.ID, data.id))

But I would advise using the with solution.

EDIT:

Opening and closing the files all the time of course takes longer than just writing. If speed is an issue, you might want to keep the files open all the time. A solution would be to check whether the file is open before you try to write, and to open it if it is not. This might work:

def __init__(self, ...):
    ...
    self.my_file = [open("filename.txt", 'w'), "filename.txt"]
    ...
    rospy.on_shutdown(self.shutdown_hook)

def callback(self, msg):
    self.write_to_file(self.my_file, str(msg))

def write_to_file(self, f, data):
    try:
        f[0].write(data)
    except ValueError:
        # the file object was closed: reopen it in append mode and retry
        f[0] = open(f[1], 'a')
        self.write_to_file(f, data)

def shutdown_hook(self):
    self.my_file[0].close()

Make sure that all the files are closed in the shutdown_hook, which is invoked by ROS when your node receives the signal to shut down (registered via rospy.on_shutdown(self.shutdown_hook)). If you do not close them, buffered data might be lost because it is never flushed to disk.
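
With the three log files from your __init__ that would look roughly like this (just a sketch, assuming the attribute names from your question):

def shutdown_hook(self):
    # close() flushes any buffered data to disk; calling it on an
    # already-closed file object is harmless
    for f in (self.stdout_log, self.stdout_log_callback, self.stdout_log_handle):
        f.close()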


Comments

Thank you for your detailed response. I will go with the with solution. :) However, I wasn't sure whether it is good practice to use it on every write. Can it be too costly?

highWaters ( 2016-07-24 14:15:16 -0500 )

Yes, opening and closing takes additional time. See edit for possible alternative.

Chrissi ( 2016-07-25 03:48:19 -0500 )

Thank you, for the time being though I'll go with the with solution. I edited my code and there are no more errors of that type.

highWaters ( 2016-07-27 12:24:21 -0500 )
