Robotics StackExchange | Archived questions

How to extract uncompressed images from a rosbag using cv2 and store them directly in S3?

Hi all, I am able to extract uncompressed images from a rosbag, but the extracted images are only a few KB while the originals are several MB. I am uploading them directly to S3 with io.BytesIO(), without writing them to the local file system. How can I get the original-size images loaded into S3? Following is the code I tried.

Method 1 (Python APIs)

import io

import boto3
import rosbag
from cv_bridge import CvBridge
from PIL import Image

topics_uncompressed = ['/camera/image_color']
bfile = '/bag/bag.bag'
bag = rosbag.Bag(bfile)

# Assumes AWS credentials for boto3 are configured (environment or ~/.aws).
s3 = boto3.resource('s3')

bridge = CvBridge()
for topic_name in topics_uncompressed:
    count = 0
    for topic, msg, t in bag.read_messages(topics=topic_name):
        # Convert the sensor_msgs/Image message to a numpy array.
        # "passthrough" keeps the original encoding; if the topic is bgr8 the
        # channel order may need converting before handing the array to PIL.
        cv_img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        im = Image.fromarray(cv_img)

        # Save the image into an in-memory buffer as JPEG (lossy compression,
        # so far smaller than the raw pixel data) and upload it to S3.
        in_memory_file = io.BytesIO()
        im.save(in_memory_file, 'JPEG')
        in_memory_file.seek(0)
        s3.Bucket('img').put_object(Key=str(count), Body=in_memory_file)
        count += 1

Method 2 (applying color with cv2 to images from the compressed topic)

import io

import boto3
import cv2
import numpy as np
import rosbag
from PIL import Image

topics_compressed = ['/camera/image_color/compressed']
bfile = '/bag/bag.bag'
bag = rosbag.Bag(bfile)

# Assumes AWS credentials for boto3 are configured.
s3 = boto3.resource('s3')

for topic_name in topics_compressed:
    count = 0
    for topic, msg, t in bag.read_messages(topics=topic_name):
        # This assumes the message payload is a raw 1600x1200x3 pixel buffer;
        # for a standard sensor_msgs/CompressedImage the payload is an encoded
        # JPEG/PNG and would be decoded with cv2.imdecode instead (see the
        # sketch after this block).
        a = np.frombuffer(msg.data, dtype=np.uint8)
        img = a.reshape(1600, 1200, 3)
        img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)

        # Save the converted image into an in-memory JPEG buffer and upload it to S3.
        in_memory_file = io.BytesIO()
        Image.fromarray(img).save(in_memory_file, 'JPEG')
        in_memory_file.seek(0)
        s3.Bucket('img1').put_object(Key=str(count), Body=in_memory_file)
        count += 1
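
For reference, if /camera/image_color/compressed carries standard sensor_msgs/CompressedImage messages (an assumption; the message type is not stated above), the payload is an encoded JPEG/PNG and is normally decoded rather than reshaped. A minimal sketch:

import cv2
import numpy as np

# msg is a sensor_msgs/CompressedImage read from the bag.
buf = np.frombuffer(msg.data, dtype=np.uint8)
img = cv2.imdecode(buf, cv2.IMREAD_COLOR)   # returns a BGR numpy array

# cv_bridge offers the same conversion directly:
# from cv_bridge import CvBridge
# img = CvBridge().compressed_imgmsg_to_cv2(msg, desired_encoding="bgr8")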

Asked by mukilan on 2019-06-12 03:57:22 UTC

Comments

You're reading uncompressed images from the bag file, but you're then writing them in JPEG format to a memory file. JPEG is a compressed format; with the most common default settings it will reduce the file size by a factor of ten or more. So we would expect the memory file to be much smaller than the original.
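
To see the difference concretely, the two sizes can be compared inside the loop of method 1 (a small sketch; cv_img and in_memory_file are the variables from the question's code):

raw_size = cv_img.nbytes                        # uncompressed pixel data, typically MB
jpeg_size = in_memory_file.getbuffer().nbytes   # JPEG-encoded buffer, typically KB
print("raw: %.2f MB, jpeg: %.1f KB" % (raw_size / 1e6, jpeg_size / 1e3))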

Why do you need the copy to be the same size? Are you concerned with loss of quality?

Commented by PeteBlackerThe3rd on 2019-06-12 05:26:12 UTC

Thank you PeteBlackerThe3rd for your input, but the quality should be the same. Can you suggest any workaround for the problem?

Commented by mukilan on 2019-06-19 03:40:43 UTC

Answers

The solution here will depend on exactly what your problem is. You're reading an uncompressed image from a bag file and then saving it into a memory file using an image file format (JPEG, in the case of your code above).

You said you want the original-size image loaded into S3. The important question is why. Are you concerned about loss of image quality from a lossy image compression format, or is it necessary that the memory file takes up the same number of bytes as the uncompressed image?

If you're concerned about loss of image quality, I would recommend simply switching to a lossless format such as PNG. This will still compress your image in most cases, but exactly the same pixel data can be recovered afterwards.
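
In the loop from method 1 that is a one-line change when saving the buffer (a sketch reusing the question's variable names):

in_memory_file = io.BytesIO()
im.save(in_memory_file, 'PNG')    # lossless: decoding gives back identical pixel data
in_memory_file.seek(0)
s3.Bucket('img').put_object(Key=str(count), Body=in_memory_file)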

If you're concerned that the memory file should be exactly the same size as the uncompressed image, then you could just write the raw buffer of the numpy.array to the file.
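
A minimal sketch of that approach, again reusing the question's variable names (note the raw buffer carries no width/height/encoding metadata, so that would need to be stored separately, e.g. in the object key or S3 object metadata):

# The uploaded object is exactly as large as the in-memory array,
# e.g. 1600 * 1200 * 3 bytes for an 8-bit, 3-channel image.
# The '.raw' key suffix is only illustrative.
s3.Bucket('img').put_object(Key=str(count) + '.raw', Body=cv_img.tobytes())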

Hope this helps.

Answered by PeteBlackerThe3rd on 2019-06-19 07:05:54 UTC

Comments