Nodelet manager throws ros::serialization::StreamOverrunException
Hi, I haven't been able to find enough online material related to serialization overruns, so I'm asking here for the first time. Feel free to point out any improvements that I can make to the question in order to fit the forum's rules.
Setup
I'm working with ROS Kinetic and pcl-1.9.1, running the ROS master on a server and launching the nodes/nodelets in a Docker container running on the same machine.
I'm working with 3 KinectOne cameras which run simultaneously for 3D object recognition. My first try was to join the three pointclouds and use a single processing pipeline. In an attempt to speed up the process, I have ported my nodes to nodelets to exploit the zero-copy transport, as I work with 540x960 pointclouds. Furthermore, I decided to make three processing pipelines, one for each camera, as I need to maintain the organized structure of the pointclouds for recognition purposes. This multiplied my number of nodelets by three, and now when I run the program I get the following error:
terminate called after throwing an instance of ros::serialization::StreamOverrunException
what(): Buffer Overrun
[manager-1] process has died [pid 30560, exit code -6, cmd /opt/ros/kinetic/lib/nodelet/nodelet manager __name:=manager __log:=/root/.ros/log/ebd68120-3f4e-11e9-8235-bc305b9d52e9/manager-1.log].
log file: /root/.ros/log/ebd68120-3f4e-11e9-8235-bc305b9d52e9/manager-1*.log
Problem hints
I've tried running the program, adding one step of the pipeline at a time, and the problem arises when adding the following nodelet to the manager: SurfaceSegmentationNodelet. At this point I have 15 nodelets loaded under the same manager, and my goal is to end up with 21 nodelets under one manager. 2 nodelets are required in order to communicate with each KinectOne, and I have written the other 15, all of which have their own dynamic_reconfigure server, and all of them publish at an approximate rate of 6 Hz. All publishers/subscribers have a queue size of 5. I think that the problem might come from any of the following sources:
- I'm asking too much for a single nodelet manager
- I need to increase the queue_size (I've seen this approach in other answers but as I see it, a queue size of five for a publisher that runs at 6 Hz should be enough)
- Having that many reconfigure servers somewhat overloads the manager's communication
Any help will be appreciated, thank you in advance!
UPDATE
Running only 1 pipeline instead of 3 throws no exception. I guess then that the problem might be that I launch too many nodelets for one single manager. Can anyone confirm that?
UPDATE #2
I ran both the manager and the nodelets under gdb and got the following output (not showing the memory map). Apparently it does come from one of my SurfaceSegmentationNodelets, but I have little to no clue about what is causing the problem.
[pcl::ExtractIndices::applyFilter] The indices size exceeds the size of the input.
[pcl::ExtractIndices::applyFilter] The indices size exceeds the size of the input.
*** Error in `/root/ws/devel/lib/nodelet/nodelet': free(): invalid size: 0x00007ffdd4000aa0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7ffff5f577e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7ffff5f6037a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7ffff5f6453c]
//lib/libpcl_segmentation.so.1.8(pcl::KdTreeFLANN<pcl::PointXYZRGB, flann::L2_Simple<float> >::radiusSearch(pcl::PointXYZRGB const&, double, std::vector<int, std::allocator<int> >&, std::vector<float, std::allocator<float> >&, unsigned int) const+0x283)[0x7ffefb1e8163]
//lib/libpcl_segmentation.so.1.8(void pcl::extractEuclideanClusters<pcl::PointXYZRGB>(pcl::PointCloud<pcl::PointXYZRGB> const&, std::vector<int, std::allocator<int> > const&, boost::shared_ptr<pcl::search::Search<pcl::PointXYZRGB> > const&, float, std::vector<pcl::PointIndices, std::allocator<pcl::PointIndices> >&, unsigned int, unsigned int)+0x4b4)[0x7ffefad26634]
//lib/libpcl_segmentation.so.1.8(pcl::EuclideanClusterExtraction<pcl::PointXYZRGB>::extract(std::vector<pcl::PointIndices, std::allocator<pcl::PointIndices> >&)+0xdf)[0x7ffefad2fe0f]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(SurfaceSegmentation::extractSurfaceClusters()+0x23e)[0x7fffbc6f48f6]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(objrecog_surface_segmentation::SurfaceSegmentationNode::processPlanes()+0x1af)[0x7fffbc6af0dd]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(objrecog_surface_segmentation::SurfaceSegmentationNode::processAndPublish()+0x28)[0x7fffbc6aed4e]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(objrecog_surface_segmentation::SurfaceSegmentationNodelet::loop()+0x4d)[0x7fffbc6a6e7d]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(boost::_mfi::mf0<void, objrecog_surface_segmentation::SurfaceSegmentationNodelet>::operator()(objrecog_surface_segmentation::SurfaceSegmentationNodelet*) const+0x65)[0x7fffbc6ae2dd]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(void boost::_bi::list1<boost::_bi::value<objrecog_surface_segmentation::SurfaceSegmentationNodelet*> >::operator()<boost::_mfi::mf0<void, objrecog_surface_segmentation::SurfaceSegmentationNodelet>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, objrecog_surface_segmentation::SurfaceSegmentationNodelet>&, boost::_bi::list0&, int)+0x4a)[0x7fffbc6ae240]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(boost::_bi::bind_t<void, boost::_mfi::mf0<void, objrecog_surface_segmentation::SurfaceSegmentationNodelet>, boost::_bi::list1<boost::_bi::value<objrecog_surface_segmentation::SurfaceSegmentationNodelet*> > >::operator()()+0x4a)[0x7fffbc6adf8a]
/root/ws/devel/lib//libSurfaceSegmentationNodelet.so(boost::detail::thread_data<boost::_bi::bind_t<void, boost::_mfi::mf0<void, objrecog_surface_segmentation::SurfaceSegmentationNodelet>, boost::_bi::list1<boost::_bi::value<objrecog_surface_segmentation::SurfaceSegmentationNodelet*> > > >::run()+0x1e)[0x7fffbc6adbd0]
/usr/lib/x86_64-linux-gnu/libboost_thread.so.1.58.0(+0x115d5)[0x7ffff546a5d5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7ffff52436ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7ffff5fe741d]
Code
SurfaceSegmentationNodelet.cpp
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <objrecog_surface_segmentation_nodelet.h>

/** @file */

namespace objrecog_surface_segmentation
{

SurfaceSegmentationNodelet::SurfaceSegmentationNodelet()
{
}

SurfaceSegmentationNodelet::~SurfaceSegmentationNodelet()
{
  this->loop_thread->join();
}

void SurfaceSegmentationNodelet::onInit()
{
  NODELET_INFO("Initializing objrecog_surface_segmentation nodelet.");
  // 1 - Start the node
  this->node.reset(new SurfaceSegmentationNode(getMTNodeHandle(), getMTPrivateNodeHandle()));
  // 2 - Start the loop thread
  this->loop_thread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&SurfaceSegmentationNodelet::loop, this)));
}

void SurfaceSegmentationNodelet::loop()
{
  while (ros::ok())
  {
    // 1 - Check for execution
    if (this->node->getFlagsSet())
    {
      // 2 - Process and publish
      this->node->processAndPublish();
      // 3 - Reset flags
      this->node->resetFlags();
    }
  }
}

} // namespace objrecog_surface_segmentation

PLUGINLIB_DECLARE_CLASS(objrecog_surface_segmentation, SurfaceSegmentationNodelet, objrecog_surface_segmentation::SurfaceSegmentationNodelet, nodelet::Nodelet)
SurfaceSegmentationNode.cpp
#include <objrecog_surface_segmentation_node.h>

/** @file */

namespace objrecog_surface_segmentation
{

SurfaceSegmentationNode::SurfaceSegmentationNode(ros::NodeHandle nh,
                                                 ros::NodeHandle private_nh)
{
  // 1 - Set node handles and cam name
  this->nh_ = nh;
  this->private_nh_ = private_nh;
  this->private_nh_.getParam("cam_name", this->cam_);
  ///////
  /** @todo Pass it as an argument somehow */
  this->main_topic_group_ = "/objrec";
  ///////
  // 2 - Reset pointers
  this->resetPointers();
  // 3 - Reset boolean flags
  this->resetFlags();
  // 4 - Initialize publishers
  this->initializePublishers();
  // 5 - Initialize subscribers
  this->initializeSubscribers();
  // 6 - Initialize reconfigure server
  this->initializeReconfigureServer();
  // 7 - Extra: configure markers
  this->output_surfaces_markers_->setNamespace("surfaces_" + this->cam_);
  std::vector<double> color = {1.0, 0.0, 0.0, 0.8};
  this->output_surfaces_markers_->setColor(color);
  // 8 - Extra: set timer label
  this->time_.setLabel("S-SEG_" + this->cam_);
}

SurfaceSegmentationNode::~SurfaceSegmentationNode()
{
}

void
SurfaceSegmentationNode::inputPlanesCallback(const scene_msgs::PlaneArray::ConstPtr & planes_msg)
{
  if (not this->planes_received_)
  {
    // 1 - Save planes
    this->input_planes_msg_ = planes_msg;
    // 2 - Set flag
    this->planes_received_ = true;
  }
}

void
SurfaceSegmentationNode::inputCloudFullCallback(const pointCloud::ConstPtr & cloud)
{
  if (not this->full_cloud_received_)
  {
    // 1 - Save cloud
    this->input_cloud_full_ = cloud;
    // 2 - Set flag
    this->full_cloud_received_ = true;
  }
}

void
SurfaceSegmentationNode::inputCloudSubsampledCallback(const pointCloud::ConstPtr & cloud)
{
  if (not this->subsampled_cloud_received_)
  {
    // 1 - Save cloud
    this->input_cloud_subsampled_ = cloud;
    // 2 - Set flag
    this->subsampled_cloud_received_ = true;
  }
}

bool
SurfaceSegmentationNode::getFlagsSet()
{
  return (this->planes_received_ && this->full_cloud_received_ && this->subsampled_cloud_received_);
}

void
SurfaceSegmentationNode::resetFlags()
{
  this->planes_received_ = false;
  this->full_cloud_received_ = false;
  this->subsampled_cloud_received_ = false;
}
void
SurfaceSegmentationNode::processAndPublish()
{
  //////////////////
  //ROS_INFO_STREAM("[S-SEG-" + this->cam_ + "]: Process");
  //this->time_.tic();
  //////////////////
  // 1 - Process planes
  this->processPlanes();
  //////////////////
  //this->time_.toc();
  //this->time_.tic();
  //////////////////
  // 2 - Generate messages
  this->generateSurfacesMsg();
  //////////////////
  //this->time_.toc();
  //this->time_.tic();
  //////////////////
  // 3 - Publish
  this->publishMsgs();
  //////////////////
  //this->time_.toc();
  //this->time_.tic();
  //////////////////
  // 4 - Reset messages
  this->resetMsgs();
  //////////////////
  this->time_.toc();
  this->time_.tic();
  //////////////////
}

void
SurfaceSegmentationNode::resetMsgs()
{
  this->output_surfaces_msg_ = scene_msgs::SurfaceArray::Ptr(new scene_msgs::SurfaceArray);
  this->output_surfaces_markers_->resetMarkers();
  this->surface_segmentation_.surface_vector.clear();
}

void
SurfaceSegmentationNode::reconfigure(objrecog_surface_segmentation::SurfaceSegmentationConfig &config,
                                     uint32_t level)
{
  // 1 - Reconfigure Surface Segmentation parameters
  this->surface_segmentation_.reconfigureSegmentation(config.min_surf_points, config.max_surf_points, config.cluster_tol);
}

void
SurfaceSegmentationNode::processPlanes()
{
  // 1 - Pass the plane cloud to the surface extractor
  this->surface_segmentation_.setPointCloud(this->input_cloud_subsampled_);
  // 2 - Find surfaces for each plane
  for (int i = 0; i < this->input_planes_msg_->planes.size(); i++)
  {
    // 3 - Pass the plane indices to the surface extractor
    pointIndices::Ptr indices = pointIndices::Ptr(new pointIndices());
    pcl_conversions::toPCL(this->input_planes_msg_->planes[i].indices, *indices);
    this->surface_segmentation_.setIndices(indices);
    // 4 - Copy the plane coefficients' info
    for (int j = 0; j < 4; j++)
    {
      this->surface_segmentation_.coefficients->values[j] = this->input_planes_msg_->planes[i].coeff[j];
    }
    // 5 - Perform the surface extraction procedure
    this->surface_segmentation_.extractSurfaceClusters();
  }
}
void
SurfaceSegmentationNode::generateSurfacesMsg()
{
  // 1 - Add surfaces and markers
  for (int i = 0; i < this->surface_segmentation_.surface_vector.size(); i++)
  {
    // 2 - Add marker and surface
    this->output_surfaces_markers_->addMarker(this->surface_segmentation_.surface_vector[i].centroid,
                                              this->surface_segmentation_.surface_vector[i].dimension,
                                              this->surface_segmentation_.surface_vector[i].orientation);
    this->output_surfaces_msg_->surfaces.push_back(this->surface_segmentation_.surface_vector[i].generateSurfMsg());
  }
}

void
SurfaceSegmentationNode::publishMsgs()
{
  this->marker_publisher_.publish(this->output_surfaces_markers_->getMarkerArray());
  this->surface_publisher_.publish(this->output_surfaces_msg_);
  this->full_cloud_publisher_.publish(this->input_cloud_full_);
  this->subsampled_cloud_publisher_.publish(this->input_cloud_subsampled_);
}

void
SurfaceSegmentationNode::resetPointers()
{
  this->input_cloud_full_ = pointCloud::ConstPtr(new pointCloud());
  this->input_cloud_subsampled_ = pointCloud::ConstPtr(new pointCloud());
  this->input_planes_msg_ = scene_msgs::PlaneArray::ConstPtr(new scene_msgs::PlaneArray);
  this->output_surfaces_msg_ = scene_msgs::SurfaceArray::Ptr(new scene_msgs::SurfaceArray);
  this->output_surfaces_markers_ = utils_markers::MarkerList::Ptr(new utils_markers::MarkerList());
}

void
SurfaceSegmentationNode::initializePublishers()
{
  this->marker_publisher_ = this->nh_.advertise<Markers>(this->main_topic_group_ + "/surfaces/markers_" + this->cam_, 5, true);
  this->surface_publisher_ = this->nh_.advertise<scene_msgs::SurfaceArray>(this->main_topic_group_ + "/surfaces/data_" + this->cam_, 5, true);
  this->full_cloud_publisher_ = this->nh_.advertise<pointCloud>(this->main_topic_group_ + "/surfaces/full_cloud_" + this->cam_, 5, true);
  this->subsampled_cloud_publisher_ = this->nh_.advertise<pointCloud>(this->main_topic_group_ + "/surfaces/sub_cloud_" + this->cam_, 5, true);
}

void
SurfaceSegmentationNode::initializeSubscribers()
{
  this->planes_subscriber_ = this->nh_.subscribe<scene_msgs::PlaneArray>(this->main_topic_group_ + "/planes/data_" + this->cam_, 5, boost::bind(&SurfaceSegmentationNode::inputPlanesCallback, this, _1));
  this->full_cloud_subscriber_ = this->nh_.subscribe<pointCloud>(this->main_topic_group_ + "/planes/full_cloud_" + this->cam_, 5, boost::bind(&SurfaceSegmentationNode::inputCloudFullCallback, this, _1));
  this->subsampled_cloud_subscriber_ = this->nh_.subscribe<pointCloud>(this->main_topic_group_ + "/planes/sub_cloud_" + this->cam_, 5, boost::bind(&SurfaceSegmentationNode::inputCloudSubsampledCallback, this, _1));
}

void
SurfaceSegmentationNode::initializeReconfigureServer()
{
  // 1 - Create a node handle to generate a new namespace for the server
  ros::NodeHandle reconfigure_nh("surface_segmentation_" + this->cam_);
  // 2 - Create the server
  this->server_ = new dynamic_reconfigure::Server<objrecog_surface_segmentation::SurfaceSegmentationConfig>(reconfigure_nh);
  // 3 - Set server callback
  this->callback_ = boost::bind(&SurfaceSegmentationNode::reconfigure, this, _1, _2);
  this->server_->setCallback(this->callback_);
}

} // namespace objrecog_surface_segmentation
Asked by jgallostra on 2019-03-05 10:09:46 UTC
Answers
The ros::serialization::StreamOverrunException was caused by bad memory management between nodelets. Thanks to @gvdhoorn, debugging the code showed that the exception was caused by:
- Attempting to access a freed memory location
- Attempting to free an already freed memory location
In the first nodelet of the pipeline I was sending a pointer to a PointCloud and forgot to re-allocate the pointer before modifying its contents. The subsequent nodelets therefore received a pointer to a memory location whose contents had changed since the message was sent, causing the error. To sum up, make sure that you do not modify a sent message when working with nodelets.
EDIT
In my node, I published 4 messages:
void
SurfaceSegmentationNode::publishMsgs()
{
  this->marker_publisher_.publish(this->output_surfaces_markers_->getMarkerArray());
  this->surface_publisher_.publish(this->output_surfaces_msg_);
  this->full_cloud_publisher_.publish(this->input_cloud_full_);
  this->subsampled_cloud_publisher_.publish(this->input_cloud_subsampled_);
}

so before creating any new messages to send, I had to add

void
SurfaceSegmentationNode::resetMsgs()
{
  this->input_cloud_full_ = pointCloud::ConstPtr(new pointCloud());
  this->input_cloud_subsampled_ = pointCloud::ConstPtr(new pointCloud());
  this->output_surfaces_msg_ = scene_msgs::SurfaceArray::Ptr(new scene_msgs::SurfaceArray);
  this->output_surfaces_markers_->resetMarkers();
  this->surface_segmentation_.surface_vector.clear();
}
to allocate new space for the new messages and not mess with the memory. I hope this clarifies what I meant.
Asked by jgallostra on 2019-03-07 11:23:45 UTC
Comments
To sum up, make sure that you do not modify a sent message when working with nodelets.
ehm, that should be impossible, as all msgs are exchanged as ConstPtrs -- which is what is used to enforce the contract between publishers and subscribers.
I'm slightly confused by this statement:
In the first nodelet of the pipeline I was sending a pointer to a PointCloud [..]
nodelets typically just exchange messages, are you saying that you were actually sending around pointers to memory locations? I would not recommend doing that. Any subscriber that is not hosted by the same manager will be accessing "random memory". And that's just one of the potential problems.
Asked by gvdhoorn on 2019-03-07 11:29:26 UTC
Well I thought that in order to make use of the zero-copy transport, nodelet messages should be published as shared pointers (as explained here), and that is what I meant by "sending a pointer to a PointCloud".
As for your first remark, I meant that the nodelet sends a message and after publishing it modifies its contents, without allocating a new shared pointer. This is stated as a practice to avoid in the last paragraph of the link commented above.
Asked by jgallostra on 2019-03-07 11:45:31 UTC
Comments
It would probably be good to know where exactly that exception is thrown. Have you tried running the manager process in gdb and then looking at the backtrace to see what is going on exactly?
Asked by gvdhoorn on 2019-03-05 11:07:24 UTC
@gvdhoorn It seems to come from the suspicious surface segmentation nodelet. However, if I run three managers (one for each camera pipeline), the error does not happen. Any clue?
Asked by jgallostra on 2019-03-06 06:56:27 UTC
Sometimes the error is this one, also related to pointers and memory: [pcl::ExtractIndices::applyFilter] The indices size exceeds the size of the input. *** Error in `/root/ws/devel/lib/nodelet/nodelet': munmap_chunk(): invalid pointer: 0x00007ffe18013050 *** ======= Backtrace: =========
Asked by jgallostra on 2019-03-06 07:02:05 UTC
Looks like indexing into some array or vector is not done correctly.
As to the stacktrace: did you build things with Debug symbols enabled? I'm not seeing any line nrs.
Asked by gvdhoorn on 2019-03-06 07:24:05 UTC
I believe I did enable debugging symbols, as I added -g to the CMAKE_CXX_FLAGS. If it helps, after the memory map I get the following line, but I've read that this is just an issue of libraries compiled in other directories.
Asked by jgallostra on 2019-03-06 08:01:10 UTC
I'm not too worried about "other libraries". It's your own for which it would be convenient to have line nrs. Typically gdb shows those if it can.
Asked by gvdhoorn on 2019-03-06 08:42:04 UTC
Another (unrelated) question btw: why are you running this as root?
Asked by gvdhoorn on 2019-03-06 08:42:20 UTC
it's likely that this is actually the real issue here. Accessing memory out-of-bounds is a recipe for SEGFAULTs. Are you doing any input scaling, or manually setting up arrays/lists/vectors?
Asked by gvdhoorn on 2019-03-06 08:43:26 UTC
in CMake projects (which Catkin projects essentially are), this is more easily done by specifying a build type. You can set it like so: -DCMAKE_BUILD_TYPE=RelWithDebInfo. I would actually recommend ..
Asked by gvdhoorn on 2019-03-06 08:49:27 UTC
.. doing that and choosing RelWithDebInfo, especially with pointcloud processing, as otherwise things will most likely be way too slow.
Asked by gvdhoorn on 2019-03-06 08:49:48 UTC
I'm running it in a docker container and therefore as root. I'm just publishing/subscribing to pointclouds and pointindices from pcl, and that's why I use nodelets, due to the zero-copy transport. I think that the error comes from pcl::EuclideanClusterExtraction, should I build PCL with -g?
Asked by jgallostra on 2019-03-06 08:50:56 UTC
Seeing as PCL is being used by thousands of people, I would search for bug reports with similar problems as you have. If you can't find those, I'd suspect your own code first (personally I always suspect my own code first).
I see various frames in SurfaceSegmentationNodelet in your backtrace, ..
Asked by gvdhoorn on 2019-03-06 08:53:15 UTC
.. which would seem to be your class. Building everything with debugging symbols enabled will certainly make debugging easier.
But again, don't add -g, just set the CMake build type. CMake will take care of the rest.
Asked by gvdhoorn on 2019-03-06 08:54:15 UTC
Again off-topic, but: you could actually use the USER instruction to change the runtime context. I believe that's even a best practice / recommended.
Asked by gvdhoorn on 2019-03-06 08:58:40 UTC
Yep, thanks for the help! I'm currently debugging the way you said. I'll look for it in PCL forums for sure, though I suspected it had to do with nodelets, boost pointers and stuff (which of course is an issue of my code). I'll try to fix it and post if I find something relevant.
Asked by jgallostra on 2019-03-06 08:59:28 UTC
Are your messages all in sync (ie: have proper timestamps sufficiently close to be able to correlate msgs temporally)?
if so, instead of using the manual synchronisation that you do now, I'd look at message_filters. Another question: why all the copying, and what is the rationale behind the ..
Asked by gvdhoorn on 2019-03-06 09:08:38 UTC
.. flags system?
Asked by gvdhoorn on 2019-03-06 09:10:08 UTC
Well, it's my first approach at writing nodelets, which might explain a bit. Each nodelet subscribes to 3 topics and requires that all 3 messages have arrived in order to process the data, hence the flag system. The Nodelet class has an infinite-loop thread that checks the flags and processes the...
Asked by jgallostra on 2019-03-06 09:14:30 UTC
data. I also read somewhere that callbacks should not be computationally heavy as they can block a process, and therefore I decided to copy the input messages for their later processing. Anyway, I thought that with nodelets I'm just copying shared pointers so it shouldn't be a big deal, or is it?
Asked by jgallostra on 2019-03-06 09:15:51 UTC
I mean there is not that much info around on how to properly structure nodelets, so after a lot of browsing I dug down into some libraries that use nodelets to copy their structure, but yeah the "flag" system is mine
Asked by jgallostra on 2019-03-06 09:17:21 UTC
Conceptually you're doing something like message_filters/ApproximateTimeSynchronizer. If your messages all have timestamps, I would perhaps take a look at that. It'd be more elegant.
But first debug your code to make it work.
Asked by gvdhoorn on 2019-03-06 09:18:49 UTC
And to answer your question, each node publishes all its messages at once (sequentially inside a function), so I would guess that yes, they are in sync. Thanks for the message_filters suggestion, I'm taking a look at it!
Asked by jgallostra on 2019-03-06 09:19:12 UTC
Publishing "at the same time" != timestamps are in-sync. Please understand the difference. Publishing does not set/update a timestamp.
Asked by gvdhoorn on 2019-03-06 09:20:20 UTC
For an example of using message_filters in nodelets, you could take a look at stereo_throttle.cpp in rtabmap_ros. It clearly shows the synchronised callback.
Asked by gvdhoorn on 2019-03-06 09:21:04 UTC
Or any of the nodelets of image_pipeline of course, such as stereo_image_proc/nodelets/point_cloud2.cpp.
Asked by gvdhoorn on 2019-03-06 09:22:41 UTC
Alternative to the original message_filters: fkie/message_filters.
Asked by gvdhoorn on 2019-03-06 09:23:23 UTC
Thanks a lot! Yep, I see my mistake with the timestamps. Thanks again!
Asked by jgallostra on 2019-03-06 09:27:26 UTC