How to extract uncompressed images from a rosbag using cv2 and store them directly to S3?

asked 2019-06-12 03:57:22 -0500

Hi all, I am able to extract uncompressed images from a rosbag, but the extracted image size is in KB while the original is in MB. I am storing them directly to S3, without writing them to the local file system, with the help of io.BytesIO(). How do I get the original size loaded into S3? The following is the code I tried.

Method 1 (Python APIs)

import io

import boto3
import cv2
import numpy as np
import rosbag
from cv_bridge import CvBridge
from PIL import Image  # note: this shadows sensor_msgs.msg.Image, so import only one of them

s3 = boto3.resource('s3')  # was missing in the original snippet

topics_uncompressed = ['/camera/image_color']
bfile = '/bag/bag.bag'
bag = rosbag.Bag(bfile)

bridge = CvBridge()
for topic in topics_uncompressed:
    count = 0
    for _, msg, t in bag.read_messages(topics=topic):
        cv_img = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        im = Image.fromarray(cv_img)
        in_memory_file = io.BytesIO()
        im.save(in_memory_file, 'JPEG')  # JPEG is lossy, so the upload is much smaller than the raw data
        in_memory_file.seek(0)
        s3.Bucket('img').put_object(Key=str(count), Body=in_memory_file)
        count += 1
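The size difference comes from saving as JPEG, which is lossy and heavily compressed. Saving as PNG (lossless) or uploading the raw bytes preserves the pixel data exactly. A minimal sketch of the difference, using a synthetic array in place of a frame decoded by CvBridge (an assumption, since no bag file is at hand):

```python
import io

import numpy as np
from PIL import Image

# Synthetic frame standing in for the output of bridge.imgmsg_to_cv2()
# (assumption: real code would get this array from the bag).
rng = np.random.default_rng(0)
arr = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
im = Image.fromarray(arr)

jpeg_buf = io.BytesIO()
im.save(jpeg_buf, 'JPEG')  # lossy: pixels are altered

png_buf = io.BytesIO()
im.save(png_buf, 'PNG')    # lossless: pixels survive intact

# PNG round-trips the exact pixel values; JPEG does not.
png_restored = np.asarray(Image.open(io.BytesIO(png_buf.getvalue())))
jpeg_restored = np.asarray(Image.open(io.BytesIO(jpeg_buf.getvalue())))
print(np.array_equal(png_restored, arr))   # True
print(np.array_equal(jpeg_restored, arr))  # False
print(arr.nbytes, png_buf.getbuffer().nbytes, jpeg_buf.getbuffer().nbytes)
```

In the loop above, changing `im.save(in_memory_file, 'JPEG')` to `im.save(in_memory_file, 'PNG')` keeps every pixel, at the cost of larger uploads.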

Method 2 (applying color with cv2 to images from a compressed topic)

import io

import boto3
import cv2
import numpy as np
import rosbag
from PIL import Image

s3 = boto3.resource('s3')  # was missing in the original snippet

topics_compressed = ['/camera/image_color/compressed']
bfile = '/bag/bag.bag'
bag = rosbag.Bag(bfile)

for topic in topics_compressed:
    count = 0
    for _, msg, t in bag.read_messages(topics=topic):
        # np.fromstring is deprecated; np.frombuffer does the same thing
        a = np.frombuffer(msg.data, dtype=np.uint8)
        # note: reshape only works if msg.data holds raw pixels; a
        # CompressedImage payload must be decoded with cv2.imdecode instead
        img = a.reshape(1600, 1200, 3)
        img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        in_memory_file = io.BytesIO()
        Image.fromarray(img).save(in_memory_file, 'JPEG')
        in_memory_file.seek(0)
        s3.Bucket('img1').put_object(Key=str(count), Body=in_memory_file)
        count += 1

Comments

You're reading uncompressed images from the bag file, but you're then writing them in JPEG format to a memory file. JPEG is a compressed format; with common default settings it will reduce the file size by at least ten times. So we would expect the memory file to be much smaller than the original.

Why do you need the copy to be the same size? Are you concerned with loss of quality?

PeteBlackerThe3rd ( 2019-06-12 05:26:12 -0500 )