
I understand your question is about thermal images, but I will use some information about depth images to explain, so please bear with me.

There is a very good tutorial from StereoLabs that explains how depth information is visualized.

If you use the Image plugin: although depth data are published on topics of type sensor_msgs/Image, a depth image differs from a “normal” image in that the data is encoded as 32-bit floating point, not 8-bit.

The parameters are the same as for Image, with three additions:

Normalize range: Since a floating-point image cannot be rendered directly, it is converted to an 8-bit grayscale image. Enabling this option means the normalization range is calculated automatically.

Min value: If Normalize range is unchecked, you can manually set the minimum depth range in meters.

Max value: If Normalize range is unchecked, you can manually set the maximum depth range in meters. Manually setting the normalization range is useful if you know the maximum measured depth value and want to keep the image scale static.
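To make that Normalize range / Min value / Max value behavior concrete, here is a minimal sketch in plain Python (not the actual rviz implementation; the function name and flat-list input are illustrative) of how a float depth image is mapped to 8-bit grayscale:

```python
def normalize_depth(depth_m, min_m=None, max_m=None):
    """Convert a float depth image (meters) to 8-bit grayscale values.

    Mimics the "Normalize range" option: when min_m/max_m are None the
    range is computed from the data (automatic normalization); supplying
    fixed bounds keeps the grayscale mapping static across frames.
    Values outside the range are clamped. depth_m is a flat list of
    floats here for simplicity; a real image would be a 2-D array.
    """
    lo = min(depth_m) if min_m is None else min_m
    hi = max(depth_m) if max_m is None else max_m
    span = (hi - lo) or 1.0  # avoid division by zero on a flat image
    out = []
    for d in depth_m:
        t = (d - lo) / span
        t = 0.0 if t < 0.0 else 1.0 if t > 1.0 else t  # clamp to [0, 1]
        out.append(int(round(t * 255)))
    return out
```

With fixed bounds the same depth always maps to the same gray level, which is why manual scaling keeps the image scale static.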

Further, REP 118 (https://www.ros.org/reps/rep-0118.html) states:

Depth images are published as sensor_msgs/Image encoded as 32-bit float. Each pixel is a depth (along the camera Z axis) in meters.

Further:

Alternatively, a device driver may publish depth images encoded as 16-bit unsigned integer, where each pixel is depth in millimeters. This differs from the standard units recommended in REP 103.
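As a sketch of the relationship between the two encodings (assuming, per REP 118, that 0 marks an invalid pixel in the 16-bit encoding and NaN in the float encoding; the function name and flat-list input are illustrative):

```python
def depth_mm_to_m(depth_mm):
    """Convert a 16-bit depth image (millimeters, REP 118's alternate
    encoding) to the standard 32-bit float encoding in meters.

    A pixel value of 0 denotes an invalid measurement in the 16-bit
    encoding and becomes NaN in the float encoding.
    """
    return [float('nan') if v == 0 else v / 1000.0 for v in depth_mm]
```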

I believe the same is happening with your thermal images, as they are also displayed via the Image plugin. Take a look at this example of a thermal sensor:

thermal_image_view (sensor_msgs/Image)
color encoded image to be displayed with e.g. image_view
visible_image_view (sensor_msgs/Image)
RGB image from visible channel (bi-spectral technology necessary, only PI200/PI230)

And corresponding parameters:

palette (int, default: 6)
coloring palette in the range of 1..12
paletteScaling (int, default: 2)
scaling method for color conversion (determination of temperature bounds for high contrast coloring, 1=manual, 2=min/max, 3=1sigma, 4=3sigma)
temperatureMin (int, default: 20)
minimum value of temperature range (if manual scaling method chosen)
temperatureMax (int, default: 40)
maximum value of temperature range (if manual scaling method chosen)
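The manual scaling mode (paletteScaling=1) presumably maps each temperature into the configured [temperatureMin, temperatureMax] window before the palette lookup, analogous to the depth normalization above. A hedged sketch of such a mapping (the function name and exact rounding are assumptions, not the driver's actual code):

```python
def temperature_to_index(temp_c, t_min=20, t_max=40):
    """Map a temperature in degrees Celsius to an 8-bit palette index
    using a fixed manual range, as manual palette scaling with
    temperatureMin/temperatureMax plausibly does. Temperatures outside
    the range are clamped, so the coloring stays static across frames.
    """
    t = (temp_c - t_min) / float(t_max - t_min)
    t = 0.0 if t < 0.0 else 1.0 if t > 1.0 else t  # clamp to [0, 1]
    return int(round(t * 255))
```

Just as with a fixed depth range, fixing the temperature bounds keeps the color scale comparable between frames, at the cost of saturating anything outside the window.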

http://wiki.ros.org/optris_drivers
