Managing large video streams (4K ...)

asked 2017-05-17 10:52:57 -0500

erkil1452

Hi, we want to equip our car-mounted system with recording capability for high-quality video streams, possibly several Full HD and maybe 4K cameras. We need frames from those cameras to be recorded with timestamps synchronized with other sensors (GPS, ...) and stored at a reasonable frame rate (15-30 FPS). What is generally a good approach for implementing this?

Currently we have 4 Full HD webcams in the system connected to a laptop through a single USB 2.0 port, and we already face serious problems. The USB bandwidth prevents us from setting the full resolution (ROS complains on node startup), so we have to run at 720p and 15 fps. Even then, we only record around 7 fps on our Core i7 laptop. I suspect the on-the-fly compression to JPEG (frames are stored compressed) may be the problem, and I do not see this scaling well. Do we need multiple USB controllers, multiple PCs, or LAN-based cameras (e.g. Point Grey)? Is recording frames as JPEG images instead of using a video codec (e.g. H.264) even good practice? It seems very wasteful to me. Would a more reasonable approach be to record each stream as a separate video using external software (outside ROS) and just note the time of the first frame? Would timing for the following frames be reliable? We can tolerate an error of 500 ms over a recording duration of 1 hour.
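[Editor's note: a minimal sketch of the "record externally and note the first frame's time" idea. It assumes a Linux host with ffmpeg installed and cameras that can emit MJPEG natively (so the host only muxes, never re-encodes); the function names and fps default are illustrative, not from the original post.]

```python
import subprocess
import time

def build_record_cmd(device: str, out_path: str, fps: int = 15) -> list:
    """Build an ffmpeg command that grabs a V4L2 camera's native MJPEG
    stream and muxes it into a file without re-encoding (-c:v copy),
    avoiding the per-frame JPEG decode/re-encode on the host CPU."""
    return [
        "ffmpeg",
        "-f", "v4l2",
        "-input_format", "mjpeg",   # request compressed frames from the camera
        "-framerate", str(fps),
        "-i", device,
        "-c:v", "copy",             # stream copy: no transcoding on the host
        out_path,
    ]

def start_recording(device: str, out_path: str):
    """Launch the recorder and note the wall-clock start time, which can
    later be matched against GPS / other sensor timestamps."""
    t0 = time.time()
    proc = subprocess.Popen(build_record_cmd(device, out_path))
    return proc, t0
```

One process per camera keeps each USB stream independent; the returned `t0` is the anchor for reconstructing per-frame times later.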

Thanks!


Comments

I have no real experience with this, but if you're not processing your video in real-time / while capturing it, I would try not to use ROS for the capture. As you already wrote, there are other ways to capture video streams, which are probably more efficient than storing individual raw / jpeg shots in a directory / bag file. The 'only' problem would indeed seem to be the sync between your video streams and other sensor data. If your capture system encodes a (wall) clock into your container then you should be able to sync everything using that. A small node that then decodes the video stream and uses the time code as the ROS timestamp could be an option. Or -- and probably more efficient -- use the embedded time codes to convert your video containers into bag files. That would all be off-line, so should reduce problems with resource usage that …

gvdhoorn (2017-05-18 01:11:22 -0500)

gvdhoorn: Yes, that is the most likely path we are going to take. For our later application we need random access to frames anyway so it would then make sense to extract images as separate files and skip the bags completely.
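[Editor's note: the "note the time of the first frame" approach boils down to this arithmetic. A hypothetical helper, assuming the camera holds its nominal frame rate; any real clock drift accumulates linearly, which is what the 500 ms / 1 h budget must absorb.]

```python
def frame_timestamp(t_first: float, frame_index: int, fps: float) -> float:
    """Reconstruct a frame's wall-clock time (seconds since epoch) from the
    recorded time of the first frame and the nominal frame rate.
    Assumes a constant frame rate; dropped frames would break the mapping
    unless the container records per-frame presentation times."""
    return t_first + frame_index / fps
```

For example, frame 30 of a 30 fps stream lands exactly one second after the recorded start time.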

erkil1452 (2017-05-18 12:32:17 -0500)

1 Answer


answered 2017-05-18 11:33:48 -0500

lucasw

I've run into the same issue, but my application was tolerant of a reduced frame rate, so I haven't really solved it. I have thought about some approaches, though:

If the camera can output MJPEG directly, it should be possible for a ROS node to grab that stream and repackage it into the ROS compressed-image format without CPU-consuming decompression and re-compression. I don't know whether any existing nodes can do that. The quality of the timestamping is an unknown in that case.
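[Editor's note: a sketch of why the repackaging step is cheap. Individual JPEG frames can be carved out of a raw MJPEG byte stream by scanning for the SOI (FF D8) and EOI (FF D9) markers, with no decoding at all; the function name is illustrative. Caveat: JPEGs with embedded thumbnails contain an inner EOI, so a production splitter would need to be marker-aware.]

```python
def split_mjpeg(buf: bytes) -> list:
    """Split a raw MJPEG byte stream into individual JPEG frames by
    scanning for SOI/EOI markers, without ever decoding pixel data.
    Each returned bytes object could be published directly as a
    ROS CompressedImage payload (format: 'jpeg')."""
    frames = []
    start = buf.find(b"\xff\xd8")          # SOI: start of image
    while start != -1:
        end = buf.find(b"\xff\xd9", start + 2)  # EOI: end of image
        if end == -1:
            break                          # incomplete trailing frame
        frames.append(buf[start:end + 2])
        start = buf.find(b"\xff\xd8", end + 2)
    return frames
```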

What is the least expensive, smallest, lowest-power embedded board that can connect to a single USB camera and capture and compress at full resolution and frame rate? Every camera could be paired with such a board. It would be nice to have gigabit Ethernet, so that uncompressed output would also be an option; that rules out the Pi and similar boards. The original Pi is not powerful enough, and I don't know about the Pi 3. If the board costs more than $100, the aggregate price starts to approach that of a small low-cost GigE camera (which won't produce compressed output, but can be very good quality, have ROI functions, and can take different lenses, though not autofocus or powered zoom without additional cost and complexity...).
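[Editor's note: the gigabit-Ethernet requirement follows from simple arithmetic, shown here under the common assumption of YUV 4:2:2 sampling (2 bytes per pixel).]

```python
def uncompressed_mbps(width: int, height: int, bytes_per_pixel: int, fps: int) -> float:
    """Raw (uncompressed) video bandwidth in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# One 1080p camera, YUV 4:2:2, 30 fps:
#   1920 * 1080 * 2 * 30 * 8 / 1e6 ≈ 995 Mbit/s
# i.e. a single uncompressed Full HD stream essentially saturates one
# gigabit link, which is why compressed output (or one link per camera)
# matters at these resolutions.
```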

The nice thing about consumer USB cameras is that they can be good quality for the price; the economies of scale of Skype/video chat help with that.

Or go with industrial gigabit Ethernet cameras, but also pair each of them with an embedded board (which needs gigabit Ethernet) for compression; the cost here starts to get high. The GigE cameras provide the best solution to synchronization (you can send digital pulse signals to each camera to trigger frames, or possibly record in the camera the arrival time of a lower-rate sync pulse), but your 500 ms tolerance is so large that it doesn't require that.
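[Editor's note: to quantify "so large": the tolerance translates into an allowed clock-drift rate, computed below. The comparison value for crystal oscillators is a general rule of thumb, not from the original post.]

```python
def allowed_drift_ppm(tolerance_s: float, duration_s: float) -> float:
    """Clock drift rate, in parts per million, that stays within the
    given timing error budget over the given recording duration."""
    return tolerance_s / duration_s * 1e6

# 500 ms over 1 hour:
#   0.5 / 3600 * 1e6 ≈ 139 ppm
# Ordinary crystal oscillators typically drift on the order of 20-100 ppm,
# so free-running clocks noted once at start may already meet this budget,
# without hardware trigger/sync lines.
```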

Another route is the network camera, usually aimed at the security market. These are sort of the reverse of the GigE camera: they may offer only compressed output, with no full-quality raw images. Interchangeable lenses are possible too, and they are lower cost. I think there are some standard data-stream formats meant for DVRs. I've seen some ROS driver support, but I haven't looked into it much.

It would be great to hear from anyone else already doing any of this, rather than going into a lengthy integration R&D effort to select systems, pair them, and test them out.

What I have actually done is throw additional computers at the problem, allocating different tasks among them: separate compression, recording, and CPU-intensive real-time-critical work onto different systems as much as possible (though issues with network bandwidth can arise).



