
Python script killed while reading a bag file using read_messages

asked 2019-03-05 13:10:39 -0500

nbro

updated 2019-09-20 06:12:34 -0500

I'm trying to read a .bag file (about 150 MB in size) using the function read_messages. I added a print to my loop and I can see that a lot of messages are read, but then, all of a sudden, the script terminates with a "Killed" message. The exit status (retrieved with echo $? in the terminal) is 137, i.e. 128 + 9, which means the process was terminated by SIGKILL — some other process killed my script. Why would this happen? How can I solve it?

See here the specific part of my code that is responsible for reading the bag file.

I have installed ROS Melodic on a VM (tried both VirtualBox and VMware Fusion 11). I am afraid this could be a memory problem, but I am not sure.


Comments

Did you have any visualization_msgs/MarkerArray messages in your bag? I ran into an out-of-memory issue reading bags with Python until I excluded topics with those (regular Marker is suspect too). (I'd like to track it down and get the issue fixed, but may not get to it.)

lucasw ( 2020-01-24 17:20:34 -0500 )

1 Answer


answered 2019-03-05 14:40:39 -0500

gvdhoorn

updated 2019-03-06 01:56:26 -0500

I am afraid that this could be a memory problem, but I am not sure.

Could well be. The code you link to appends to lists, which grow linearly with the number of items read. Is the bag very large?

You could use htop or top to check memory usage, and you could check the syslog (or dmesg) on your system to see whether your process is being killed for using too much memory.


Edit:

@gvdhoorn See my edit regarding the size of the file.

That's not excessive, but the in-memory representation of the messages you're deserialising can be significantly larger than the serialised messages themselves, which could exhaust memory faster than you'd expect.
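As a stand-in illustration of that effect (pickle here stands in for rosbag's serialisation, and the sizes are CPython-specific — neither is from the original thread), a list of floats occupies several times more memory deserialised than serialised:

```python
# Illustration: a deserialised message can be much larger in memory
# than its serialised form (pickle stands in for rosbag serialisation).
import pickle
import sys

# A stand-in for one message field: a large array of distinct floats,
# e.g. the ranges of a laser scan.
ranges = [float(i) for i in range(1000)]

serialized = pickle.dumps(ranges)

# In CPython each float is a full object (roughly 24 bytes) on top of
# the 8-byte slot the list stores for it.
in_memory = sys.getsizeof(ranges) + sum(sys.getsizeof(x) for x in ranges)

print(f"serialised: {len(serialized)} B, in memory: {in_memory} B")
```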

I'd still check syslog and/or (h)top for memory usage and kernel log for OOM kills.


Edit 2:

@gvdhoorn Yes, I guess it went out of memory. Using dmesg, I see one of the last entries: "Out of memory: Kill process 3368 (python) score 713 or sacrifice child".

Yep, that's the OOM killer.

You'll have to give your VM more memory, use a smaller bag file or change your implementation.
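Changing the implementation usually means folding each message into a running result as it is read, instead of appending everything to a list. A minimal sketch — the running-mean statistic, bag path and topic name below are placeholders, not the linked code's actual ones:

```python
# Process messages one at a time so memory use stays roughly constant,
# instead of appending every deserialised message to a list.

def running_mean(values):
    """Consume an iterator of floats without storing them all."""
    count, total = 0, 0.0
    for v in values:
        count += 1
        total += v
    return total / count if count else 0.0

# With rosbag, the iterator would come from the bag itself, e.g.:
#
#   import rosbag
#   with rosbag.Bag('data.bag') as bag:
#       values = (msg.data for _, msg, _ in
#                 bag.read_messages(topics=['/some_topic']))
#       print(running_mean(values))
#
# ('data.bag' and '/some_topic' are placeholders.)
```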


Comments

@gvdhoorn See my edit regarding the size of the file.

nbro ( 2019-03-05 14:48:50 -0500 )

@gvdhoorn Yes, I guess it went out of memory. Using dmesg, I see one of the last entries: "Out of memory: Kill process 3368 (python) score 713 or sacrifice child".

nbro ( 2019-03-05 15:21:26 -0500 )

So was this in the end a memory problem?

The OOM killer popping up seems to strongly suggest it is/was.

gvdhoorn ( 2019-09-20 06:24:00 -0500 )

Yes, I think it is a memory problem. In fact, while using top, I see that the percentage of memory usage keeps on increasing until the Python script is killed.

nbro ( 2019-09-20 10:31:24 -0500 )

Then would you consider your question as answered?

gvdhoorn ( 2019-09-20 10:38:41 -0500 )

No, because I still need to understand how to solve this problem. The workaround of recording a very small bag file is not practical: I will eventually need to collect a sufficiently large bag file, with enough messages to create a dataset. I have already dedicated about 5 GB of RAM to the VM and I still have this problem, and dedicating even more RAM may not be practical. The last option is to change the implementation, but I am still not sure how to do that. If you want to help me with that, I would appreciate it!

nbro ( 2019-09-20 10:43:14 -0500 )

The code you linked is appending objects to a list.

The only way to not run out of memory is to not keep all the objects around.

I have not checked what your code does, but you might be able to work with a sliding window of a fixed size. That would be the typical way to work around memory exhaustion issues.

Unfortunately data processing tasks require quite a bit of memory. Sooner or later you are going to run into problems with that.

gvdhoorn ( 2019-09-20 10:45:12 -0500 )
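The sliding-window idea suggested above can be sketched with collections.deque, whose maxlen keeps memory use bounded. The integer stream below stands in for the output of bag.read_messages; the function name and window size are illustrative, not from the linked code:

```python
# Keep only the last N messages in memory using a fixed-size deque.
from collections import deque

def sliding_windows(messages, size=10):
    """Yield the current window after each message; old entries are evicted."""
    window = deque(maxlen=size)  # discards the oldest item automatically
    for msg in messages:
        window.append(msg)
        yield list(window)

# Usage with a stand-in message stream (with rosbag this would be
# the iterator returned by bag.read_messages(...)):
for w in sliding_windows(range(100), size=5):
    pass
print(w)  # → [95, 96, 97, 98, 99]
```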

I found out that this problem was occurring because I was running the code in a virtual environment, where I had installed several required dependencies manually (that is, with pip install -r requirements.txt). I tried exactly the same code with the same VM configuration but without any virtual environment, and the script is no longer killed (while parsing the same .bag file). However, to see whether the script would be killed with a bigger file, I collected a bigger file (500 MB), and now, rather than the script being killed, Python raises a MemoryError exception. Why this behaviour? Maybe the versions of the Python packages installed in the virtual environment are not the appropriate ones, in the sense that they might not be compatible with the other ROS packages or system dependencies handled by e.g. apt or rosdep? Or maybe is this a problem ...(more)

nbro ( 2019-09-22 18:58:52 -0500 )


Stats

Asked: 2019-03-05 13:10:39 -0500

Seen: 1,508 times

Last updated: Sep 20 '19