
Copy sensor_msgs/Image data to Ogre Texture

asked 2021-03-23 13:16:40 -0500 by pietroro

updated 2021-03-24 08:23:10 -0500

Hello everyone. I have been trying to manually copy the contents of an image message into a dynamic Ogre texture buffer, in order to display the images from a given topic on a plane in the RViz viewport.

I am aware of both the rviz_textured_quads solution by lucasw and the ROSImageTexture class. However, both have unsatisfactory performance: the first is simply inefficient, and the second performs an extra conversion that I am hoping to skip.

I did manage to achieve what I wanted by first converting the Image message to a cv_bridge::CvImagePtr, but I was wondering whether I could pull the image data directly from the message.

Here is my definition of the Texture:

    image_texture_ = texture_manager.createManual(
        "DynamicImageTexture",
        resource_group_name,
        Ogre::TEX_TYPE_2D,
        width_, height_,
        0,  // no mipmaps
        Ogre::PF_BYTE_RGB,
        Ogre::TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);

And here is my update function:

    Ogre::HardwarePixelBufferSharedPtr buffer = image_texture_->getBuffer(0, 0);
    buffer->lock(Ogre::HardwareBuffer::HBL_DISCARD);
    const Ogre::PixelBox& pb = buffer->getCurrentLock();
    // Obtain a pointer to the texture's buffer
    uint8_t* data = static_cast<uint8_t*>(pb.data);
    // Copy the message data in one flat block
    memcpy(data, image_ptr_->data.data(), image_ptr_->data.size());
    buffer->unlock();

Is there something wrong I'm doing in copying the data? For reference, here are the results.

Copying the data from the CV Pointer:

CV Pointer

And the image buffer directly:

buffer


Comments

Please do not link to off-site image hosts. Images tend to disappear that way.

Attach your screenshots to your question directly. I've given you sufficient karma.

gvdhoorn  ( 2021-03-24 07:53:48 -0500 )

Updated, thanks a lot for the permissions.

pietroro  ( 2021-03-24 08:23:28 -0500 )

1 Answer


answered 2021-03-24 07:52:57 -0500 by gvdhoorn

updated 2021-03-24 07:54:38 -0500

A sensor_msgs/Image object is not just a POD, and its data member does not necessarily point to a flat array of pixel values.

From the documentation (here):

Header header

uint32 height         # image height, that is, number of rows
uint32 width          # image width, that is, number of columns

# The legal values for encoding are in file src/image_encodings.cpp
# If you want to standardize a new string format, join
# ros-users@lists.sourceforge.net and send an email proposing a new encoding.

string encoding       # Encoding of pixels -- channel meaning, ordering, size
                      # taken from the list of strings in include/sensor_msgs/image_encodings.h

uint8 is_bigendian    # is this data bigendian?
uint32 step           # Full row length in bytes
uint8[] data          # actual matrix data, size is (step * rows)

(I've removed some of the comments)

At the very least, you'll have to take the encoding into account. step and is_bigendian are also important sometimes.

Much of the functionality in cv_bridge actually revolves around converting to and from sensor_msgs/Image and the equivalent OpenCV types, and in quite a few cases that involves format and sometimes colour space conversion. Encodings can specify floating point image formats, for instance, or anything from binary images to 32-bit with alpha.

So I'm not sure you can avoid that, unless your textures happen to accept the encoding used by your sensor_msgs/Image instances.


Comments

Thank you for your reply. I am aware that certain images are encoded in certain ways, but I still think that, by being careful, one could copy the contents of data directly.

The way I see it, there are three things one should be aware of:

  • The encoding of each pixel (for example the Ogre pixel format PF_R8G8B8: three channels of one byte each, with sensor_msgs::image_encodings::RGB8 as its ROS counterpart)
  • The endianness of the individual bytes
  • The size of the actual data (of both the texture and the image data)

If all three aspects correspond, I don't see why the two buffers should contain different data.

I have tried using images that have the same encoding, and endianness should not be a problem; the artefacts would look different if that were the cause.
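Expressed as code, the check I have in mind would look something like this (hypothetical helper, names are mine, not Ogre or ROS API):

```cpp
#include <cstddef>

// Hypothetical helper expressing the three checks above as a
// precondition for a flat memcpy between image data and texture buffer.
struct TextureInfo {
    std::size_t bytesPerPixel;  // e.g. 3 for an RGB8-style format
    bool        bigEndian;      // byte order of multi-byte channels
    std::size_t totalBytes;     // width * height * bytesPerPixel
};

bool safeToMemcpy(const TextureInfo& tex,
                  std::size_t imgBytesPerPixel,
                  bool imgBigEndian,
                  std::size_t imgTotalBytes)
{
    return tex.bytesPerPixel == imgBytesPerPixel   // matching pixel encoding
        && tex.bigEndian     == imgBigEndian       // matching endianness
        && tex.totalBytes    == imgTotalBytes;     // matching data size
}
```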

pietroro  ( 2021-03-24 10:58:57 -0500 )

Isn't that basically what ROSImageTexture does (here)?

Perhaps you could clarify what's wrong with that approach in your opinion.

gvdhoorn  ( 2021-03-24 11:02:46 -0500 )

As you can see in the picture I attached, whatever the image encodings are doing it looks as if the lower third of the image isn't even loading in. Any ideas on why that might be happening?

pietroro  ( 2021-03-24 11:09:38 -0500 )

I believe ROSImageTexture performs an unnecessary step: it converts the Image message into an Ogre image first, then assigns that Ogre image to a texture.

While that is a perfectly valid way to reach the same result, it has a redundant step and it doesn't leverage Ogre's dynamic textures, which use the Ogre::TU_DYNAMIC_WRITE_ONLY_DISCARDABLE flag to speed things up internally. Last I measured, using ROSImageTexture was nine times slower than the OpenCV workaround.

pietroro  ( 2021-03-24 11:15:11 -0500 )

As you can see in the picture I attached, whatever the image encodings are doing it looks as if the lower third of the image isn't even loading in. Any ideas on why that might be happening?

Size of your texture compared to the size of the image data? Stride? Encoding?

I'm not saying you need to copy what ROSImageTexture does, but there seems to be quite a bit of detection of encoding and byte-order and mapping that to Ogre equivalents. Whether or not an "unnecessary step" is performed I don't know right now, but what I was trying to get across is, that code is there for a reason.

gvdhoorn  ( 2021-03-25 03:56:15 -0500 )

Oh my oh my. I apparently bumped into something I shouldn't have.

Turns out the issue is on the OGRE side of things. While Ogre asks you explicitly what kind of format the pixels should have, it completely disregards this information in favor of an internally defined "standard". Encoding detection, which I was doing, won't matter if Ogre doesn't listen to me when I try to tell it how to read the data.

In my code I ask for an Ogre::PF_BYTE_RGB (or whichever is closest to the sensor_msgs::image_encoding of the Image data) pixel format, but when I query the pixel buffer on its format it tells me it's Ogre::PF_A8R8G8B8, no matter what I specify. This behavior is confirmed here and here.
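If I end up converting manually, it would look something like this sketch, assuming a little-endian host where PF_A8R8G8B8 is stored in memory as B, G, R, A bytes per pixel (an assumption about the target, not something Ogre guarantees):

```cpp
#include <cstdint>
#include <vector>

// Hedged sketch: expand tightly packed RGB8 pixels into the 4-byte
// layout Ogre hands back for PF_A8R8G8B8. On a little-endian host that
// format (the uint32 value 0xAARRGGBB) is stored as B, G, R, A bytes.
std::vector<uint8_t> rgb8ToA8R8G8B8(const std::vector<uint8_t>& rgb)
{
    std::vector<uint8_t> out;
    out.reserve(rgb.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        out.push_back(rgb[i + 2]);  // B
        out.push_back(rgb[i + 1]);  // G
        out.push_back(rgb[i]);      // R
        out.push_back(0xFF);        // A (fully opaque)
    }
    return out;
}
```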

I may have bitten off more than I can chew. I will drop a question on the OGRE forums to see if ...(more)

pietroro  ( 2021-03-25 05:09:33 -0500 )

I will drop a question on the OGRE forums to see if anyone can get back to me.

If/when you do, please post a comment here with a link to your post on the Ogre forums, so we keep things connected.

In my code I ask for an Ogre::PF_BYTE_RGB (or whichever is closest to the sensor_msgs::image_encoding of the Image data) pixel format, but when I query the pixel buffer on its format it tells me it's Ogre::PF_A8R8G8B8, no matter what I specify. This behavior is confirmed here and here.

Could be the code in ROSImageTexture works around this in some way (either intentionally or by accident).

Whatever the reason, you could perhaps post an issue on the RViz issue tracker. Not to report something broken, but to make sure your sleuthing doesn't go unnoticed by future readers.

gvdhoorn  ( 2021-03-25 05:24:33 -0500 )

Isn't this the cause of what you're seeing:

when creating an Ogre::Texture (HW based texture), you'll get the closest format the GPU supports that can satisfy what you asked for; thus you can't assume you will get exactly what you requested.

that's from your second link.

That would not really be "Ogre's fault"; it's just how rendering systems work.

gvdhoorn  ( 2021-03-25 05:26:50 -0500 )


3 followers

Stats

Asked: 2021-03-23 13:15:22 -0500

Seen: 368 times

Last updated: Mar 24 '21