Visualising positions of a detected object in RViz using find_object_2d
I'm using find_object_2d to detect an object with a Kinect 2, and I'm trying to display in RViz the trajectory, i.e. the points the object has moved through within a room. I just want to draw lines between the last known position of the object and its current position. I modified tf_example_node.cpp, but I'm having a few problems.
My issues are:
I'm using pose.getRotation() as suggested to find the position of the object in 3D space, but the line markers aren't shown in RViz. A quick rostopic echo shows that nothing is being published on visualization_marker. Am I doing this right here?
Also, I get a warning saying "could not find a connection between 'map' and 'object' as they are not part of the same tree". I'm trying to visualise the movement within the fixed frame /map. How would I achieve this?
This is my code:
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <find_object_2d/ObjectsStamped.h>
#include <QtCore/QString>
#include <visualization_msgs/Marker.h>
#include <cmath>

class TfExample
{
public:
  TfExample() :
    mapFrameId_("/map"),
    objFramePrefix_("object")
  {
    ros::NodeHandle pnh("~");
    pnh.param("map_frame_id", mapFrameId_, mapFrameId_);
    pnh.param("object_prefix", objFramePrefix_, objFramePrefix_);

    ros::NodeHandle nh;
    subs_ = nh.subscribe("objectsStamped", 1, &TfExample::objectsDetectedCallback, this);

    ros::NodeHandle n;
    marker_pub = n.advertise<visualization_msgs::Marker>("visualization_marker", 10);
  }

  // Here I synchronize with the ObjectsStamped topic to
  // know when the TF is ready and for which objects
  void objectsDetectedCallback(const find_object_2d::ObjectsStampedConstPtr & msg)
  {
    ros::Rate loop_rate(30);
    if(msg->objects.data.size())
    {
      for(unsigned int i=0; i<msg->objects.data.size(); i+=12)
      {
        // get data
        int id = (int)msg->objects.data[i];
        std::string objectFrameId = QString("%1_%2").arg(objFramePrefix_.c_str()).arg(id).toStdString(); // "object_1", "object_2"

        points.header.frame_id = line_strip.header.frame_id = line_list.header.frame_id = QString("/object_%1").arg(id).toStdString();
        points.header.stamp = line_strip.header.stamp = line_list.header.stamp = ros::Time::now();
        points.ns = line_strip.ns = line_list.ns = "points_and_lines";
        points.action = line_strip.action = line_list.action = visualization_msgs::Marker::ADD;
        points.pose.orientation.w = line_strip.pose.orientation.w = line_list.pose.orientation.w = 1.0;

        tf::StampedTransform pose;
        tf::StampedTransform poseCam;
        try
        {
          // Get transformation from "object_#" frame to target frame "map"
          // The timestamp matches the one sent over TF
          tfListener_.lookupTransform(mapFrameId_, objectFrameId, msg->header.stamp, pose);
          tfListener_.lookupTransform(msg->header.frame_id, objectFrameId, msg->header.stamp, poseCam);
        }
        catch(tf::TransformException & ex)
        {
          ROS_WARN("%s", ex.what());
          continue;
        }

        // Here "pose" is the position of the object "id" in "/map" frame.
        ROS_INFO("Object_%d [x,y,z] [x,y,z,w] in \"%s\" frame: [%f,%f,%f] [%f,%f,%f,%f]",
            id, mapFrameId_.c_str(),
            pose.getOrigin().x(), pose.getOrigin().y(), pose.getOrigin().z(),
            pose.getRotation().x(), pose.getRotation().y(), pose.getRotation().z(), pose.getRotation().w());
        ROS_INFO("Object_%d [x,y,z] [x,y,z,w] in \"%s\" frame: [%f,%f,%f] [%f,%f,%f,%f]",
            id, msg->header.frame_id.c_str(),
            poseCam.getOrigin().x(), poseCam.getOrigin().y(), poseCam.getOrigin().z(),
            poseCam.getRotation().x(), poseCam.getRotation().y(), poseCam.getRotation().z(), poseCam.getRotation().w());

        points.id = 0;
        line_strip.id = 1;
        line_list.id = 2;

        points.type = visualization_msgs::Marker::POINTS;
        line_strip.type = visualization_msgs::Marker::LINE_STRIP;
        line_list.type = visualization_msgs::Marker::LINE_LIST;

        // POINTS markers use x and y scale for width/height respectively
        points.scale.x = 0.2;
        points.scale.y = 0.2;

        // LINE_STRIP/LINE_LIST markers use only the x component of scale, for the line width
        line_strip.scale.x = 0.1;
        line_list.scale.x = 0.1;

        // Points are green
        points.color.g = 1.0f;
        points.color.a = 1.0;

        // Line strip is blue
        line_strip.color.b = 1.0;
        line_strip.color.a = 1.0;

        // Line list is red
        line_list.color.r = 1.0;
        line_list.color.a = 1.0;

        geometry_msgs::Point p;
        p.x = pose.getRotation().x();
        p.y = pose.getRotation().y();
        p.z = pose.getRotation().z();

        points.points.push_back(p);
        line_strip.points.push_back(p);

        // marker_pub.publish(points);
        marker_pub.publish(line_strip);
        // marker_pub.publish(line_list);

        loop_rate.sleep();
        f += 0.04;
      }
    }
  }

private:
  std::string mapFrameId_;
  std::string objFramePrefix_;
  ros::Subscriber subs_;
  ros::Publisher marker_pub;
  tf::TransformListener tfListener_;
  visualization_msgs::Marker points, line_strip, line_list;
  float f;
};

int main(int argc, char * argv[])
{
  ros::init(argc, argv, "tf_example_node");
  TfExample sync;
  ros::spin();
}
Asked by cluelessnigerian on 2017-02-20 08:33:27 UTC
Comments
For the second error, what is your TF tree? If you just want the pose according to the camera, you may set map_frame_id:=kinect2_base_link.
Asked by matlabbe on 2017-02-23 14:37:20 UTC
@matlabbe, I wasn't publishing TF when running the Kinect bridge. Silly error, really. I also managed to solve my first question. Thanks for all your help.
Asked by cluelessnigerian on 2017-02-23 19:15:59 UTC
Hello @cluelessnigerian, I have a similar problem at the moment. How did you solve the two questions? Please, I need your help. Thanks.
Asked by Buldoza on 2019-03-12 03:59:48 UTC
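For readers hitting the same two problems: the corrected code was never posted in this thread, so the following is only a reading of the comments and the posted code, not the original poster's actual solution. Nothing appears on visualization_marker because the TF lookup throws the "not part of the same tree" exception and the continue statement skips the publish; that goes away once a transform chain from the camera frame to map exists (e.g. by enabling TF publishing in the Kinect 2 bridge, as mentioned above). The remaining issue in the posted code is that the marker points are filled from pose.getRotation(), which is the quaternion, not a position, and the marker header is set to the moving /object_N frame rather than the fixed map frame. A minimal sketch of that change, using a hypothetical helper name:

#include <tf/transform_datatypes.h>
#include <visualization_msgs/Marker.h>
#include <geometry_msgs/Point.h>

// Hypothetical helper: append the object's translation in the map frame to a
// LINE_STRIP marker, so RViz draws the trajectory in the fixed frame.
void appendObjectPosition(const tf::StampedTransform & pose,   // map -> object_N transform
                          const std::string & mapFrameId,      // e.g. "/map"
                          visualization_msgs::Marker & line_strip)
{
  line_strip.header.frame_id = mapFrameId;   // was "/object_N" in the question's code
  line_strip.header.stamp = pose.stamp_;

  geometry_msgs::Point p;
  p.x = pose.getOrigin().x();                // was pose.getRotation().x()
  p.y = pose.getOrigin().y();
  p.z = pose.getOrigin().z();
  line_strip.points.push_back(p);
}

Expressing the points in the map frame (and using the translation rather than the rotation) matters because RViz transforms marker points from header.frame_id into the fixed frame; points expressed in the object's own moving frame would not trace the object's path through the room.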