# Revision history

In case anyone comes back here looking for this: I got what I wanted with Boost memory pools. Below is a tiny code example that creates the message I would then publish. The allocation is handled automagically by a memory pool whose blocks are sized as a multiple of the message type plus the control block for the shared pointer:

```cpp
#include <boost/make_shared.hpp>      // boost::allocate_shared
#include <boost/pool/pool_alloc.hpp>  // boost::fast_pool_allocator
#include <boost/shared_ptr.hpp>
#include <geometry_msgs/TwistStamped.h>
#include <iostream>

// Allocate a message of type T out of a per-type singleton pool.
// allocate_shared rebinds the allocator internally, so the pool's
// blocks end up sized for the message plus the control block.
template <typename T>
auto make_shared_from_pool() -> boost::shared_ptr<T> {
  using allocator_t = boost::fast_pool_allocator<boost::shared_ptr<T>>;
  return boost::allocate_shared<T, allocator_t>(allocator_t());
}

int main(int argc, char **argv) {
  auto msg = make_shared_from_pool<geometry_msgs::TwistStamped>();

  std::cout << *msg << std::endl;

  return 0;
}
```


Note: this does not solve the problem of how the memory for variable-sized arrays inside a ROS message gets allocated (those are still normal vectors that allocate with plain new). It only solves allocation of the message itself, which is fine as long as the message contains nothing but PODs, like TwistStamped.

I discovered this cool tool that prints a message whenever malloc is called, which helped me troubleshoot this: https://github.com/samsk/log-malloc2

One other thing to note: this doesn't completely get rid of malloc calls in the critical path, it just greatly reduces them. As long as I free roughly one message for every one I create within a short period of time (which is what happens here), I shouldn't see any further calls to malloc while the system is running.