How to tf-transform processed sensor data

asked 2021-07-29 06:37:43 -0500

Anjulo

Hello everyone,

Sorry if this question is an easy one. I'm still learning more about ROS.

So, as the title says, I would like to transform a sensor reading that has been processed in a callback function. The reading comes from a camera, gets processed in the callback, and I end up with an interim result. I would like to transform this interim result from camera_frame to the map frame using tf2, but I'm not sure how to do that.

#include blah blah ...

void positionCallBack(const gb_visual_detection_3d_msgs::BoundingBoxes3d& Boxes3d)
{
      .
      .
      // process the reading from "/darknet_ros_3d/bounding_boxes"  --> I have done this
      // I would like to transform the result of the step above (from camera_frame to map_frame)
      .
      .
}

int main(int argc, char **argv)
{
      .
      .
      .
      ros::Subscriber sub1 = nh.subscribe("/darknet_ros_3d/bounding_boxes", 1, positionCallBack);
      .
      .
      .
      ros::spin();
      return 0;
}

I was reading this tutorial, but it doesn't explain how to do it for two messages that need to be synchronous, and I didn't really understand it either.
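
To make it concrete, this is roughly what I have pieced together so far, but I am not sure it is right. The global tf_buffer pointer, the "map" frame name, the node name, and the assumption that the message header carries the camera frame are all my own guesses:

#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <gb_visual_detection_3d_msgs/BoundingBoxes3d.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>   // teaches tf2 to transform geometry_msgs types

tf2_ros::Buffer* tf_buffer = nullptr;   // set in main(), used by the callback

void positionCallBack(const gb_visual_detection_3d_msgs::BoundingBoxes3d& Boxes3d)
{
      // ... my existing processing, which produces the interim result ...

      geometry_msgs::PoseStamped pose_cam;   // interim result expressed in the camera frame
      pose_cam.header = Boxes3d.header;      // assuming the message header holds the camera frame
      // pose_cam.pose = ...;                // filled in by the processing above

      geometry_msgs::PoseStamped pose_map;
      try
      {
            // wait up to 0.1 s for the camera -> map transform and apply it
            pose_map = tf_buffer->transform(pose_cam, "map", ros::Duration(0.1));
      }
      catch (const tf2::TransformException& ex)
      {
            ROS_WARN("Could not transform to map: %s", ex.what());
      }
}

int main(int argc, char **argv)
{
      ros::init(argc, argv, "boxes_to_map");
      ros::NodeHandle nh;

      tf2_ros::Buffer buffer;
      tf2_ros::TransformListener listener(buffer);   // subscribes to /tf and keeps the buffer filled
      tf_buffer = &buffer;

      ros::Subscriber sub1 = nh.subscribe("/darknet_ros_3d/bounding_boxes", 1, positionCallBack);
      ros::spin();
      return 0;
}

Is something like that the intended approach, or should the buffer usually live inside a class?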

So, I would appreciate any help. An illustration using a snippet of code would be very helpful.

Thanks in advance!


Comments

Sorry, I don't get the question. bounding_boxes seems to be a custom message type giving xmin, xmax, ymin, .... So let's assume you have calculated a pose (x, y, z, angles as a quaternion) from that; you probably have the resulting coordinate as a geometry_msgs/Pose. What prevents you from calling

buffer_.transform(*point_ptr, point_out, target_frame_);

as in the linked tutorial?
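
For context, buffer_ there is just a tf2_ros::Buffer that a tf2_ros::TransformListener keeps filled from /tf; the member names are only an example:

tf2_ros::Buffer buffer_;
tf2_ros::TransformListener listener_(buffer_);   // subscribes to /tf and keeps buffer_ filled
// then the transform call above works, with e.g. target_frame_ = "map"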

Humpelstilzchen ( 2021-07-29 07:59:13 -0500 )