According to the definition in sensor_msgs/Image, step is the "Full row length in bytes".

There are several reasons why this definition is necessary:

  1. This format supports different image types. One pixel could consist of one byte (mono8), three bytes (rgb8), three doubles, etc.
  2. For performance reasons, the individual rows of your image might be aligned to particular memory boundaries (e.g., word boundaries). In memory, your 7x7 image might then look like this: "IIIIIIIXIIIIIIIXIIIIIIIXIIIIIIIXIIIIIIIXIIIIIIIXIIIIIIIX". Here the Is are image bytes and the Xs are unused padding bytes (see the sketch after this list).
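
To see the arithmetic behind that picture, here is a minimal sketch; the 8-byte row alignment is just an assumed value chosen to match the example above, real drivers may align to 4, 16, 32, ... bytes:

    #include <cstdint>
    #include <cstdio>

    // Padding arithmetic for item 2: a mono8 row of 7 pixels rounded up to
    // an assumed 8-byte row alignment gives step = 8, i.e. one unused
    // byte ("X") per row.
    int main()
    {
        const uint32_t width = 7;            // pixels per row
        const uint32_t bytes_per_pixel = 1;  // mono8
        const uint32_t alignment = 8;        // assumed row alignment in bytes

        const uint32_t row_bytes = width * bytes_per_pixel;         // 7 image bytes
        const uint32_t step =
            ((row_bytes + alignment - 1) / alignment) * alignment;  // rounds up to 8

        std::printf("step = %u bytes, padding = %u byte(s) per row\n",
                    step, step - row_bytes);
        return 0;
    }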

The image step gives you the distance in bytes between the first element of one row and the first element of the next row. The OpenCV definition is similar (called widthStep there).
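
As an illustration (not part of the linked definitions), indexing into a possibly padded mono8 buffer with step looks like this:

    #include <cstdint>

    // Read pixel (row, col) of a mono8 image whose rows may be padded.
    // `data` points at the first byte of the image and `step` is the full
    // row length in bytes (image bytes plus any padding).
    inline uint8_t pixelAt(const uint8_t* data, uint32_t step,
                           uint32_t row, uint32_t col)
    {
        // step, not width, is what moves you down exactly one row.
        return data[row * step + col];
    }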

So, if the image coming from your camera is stored in memory without the mentioned padding, you can set image.step to image.cols * number_of_channels * sizeof(datatype_used) and everything should be fine. In the simplest case (mono8), image.step equals image.cols.
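
For completeness, here is a minimal sketch of filling a sensor_msgs/Image that way, assuming a ROS 1 C++ node and a tightly packed rgb8 buffer; the function name and the raw_buffer/width/height parameters are placeholders for whatever your camera driver provides:

    #include <cstdint>
    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/image_encodings.h>

    // Wrap a tightly packed rgb8 buffer (no row padding) in a sensor_msgs::Image.
    sensor_msgs::Image toImageMsg(const uint8_t* raw_buffer,
                                  uint32_t width, uint32_t height)
    {
        sensor_msgs::Image msg;
        msg.header.stamp = ros::Time::now();
        msg.height = height;
        msg.width = width;
        msg.encoding = sensor_msgs::image_encodings::RGB8;
        msg.is_bigendian = 0;

        // rgb8: 3 channels of 1 byte each, no padding between rows.
        msg.step = width * 3 * sizeof(uint8_t);

        // The data field holds step * height bytes.
        msg.data.assign(raw_buffer, raw_buffer + msg.step * msg.height);
        return msg;
    }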