This is not an all-inclusive answer, because the subject is very broad and well researched. Excluding hardware demands, which are another broad subject, I will just give you some guidance to start your research.
For example, ORB_SLAM2 is well established and popular in the ROS community.
ORB_SLAM2: ROS implementation of the ORB-SLAM2 real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). It is able to detect loops and relocalize the camera in real time.
Reference: https://github.com/appliedAI-Initiati...
To compare its performance against others, you can read ORB-SLAM: a Versatile and Accurate Monocular SLAM System, which goes in depth on the particulars, benchmarks and results:
http://webdiis.unizar.es/~raulmur/Mur...
If you visit the KITTI Vision Benchmark Suite, you will find hundreds of implementations, with papers, and many algorithms optimized for particular requirements:
http://www.cvlibs.net/datasets/kitti/...
You can also use the KITTI dataset to test your own algorithm and compare its performance against others.
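As a starting point for such a comparison, the usual first number to compute is the absolute trajectory error (ATE) between your estimated camera positions and the KITTI ground truth. Here is a minimal sketch in plain Python; the function name `ate_rmse` and the toy trajectories are my own illustration, and it assumes the two trajectories are already time-aligned and expressed in the same metric frame (for a full evaluation, including alignment and the KITTI relative-error metrics, use the official development kit or a tool like evo):

```python
import math

def ate_rmse(gt, est):
    """Absolute trajectory error: RMSE over translational differences.

    gt, est: equal-length lists of (x, y, z) camera positions,
    already time-aligned and in the same metric frame.
    """
    assert len(gt) == len(est), "trajectories must be time-aligned"
    sq = [
        (gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
        for (gx, gy, gz), (ex, ey, ez) in zip(gt, est)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Toy example: the estimate drifts 0.1 m in x at every pose.
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)]
print(round(ate_rmse(gt, est), 3))  # 0.1
```

Note that for a monocular system the estimated trajectory is only defined up to scale, so you would need a similarity (Sim(3)) alignment before computing this number; with stereo or RGB-D the scale is metric and a rigid alignment suffices.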
All of these implementations can be integrated with ROS.
Another implementation to consider is OpenVSLAM: https://arxiv.org/pdf/1910.01122.pdf
And a more recent, extremely promising one is ORB_SLAM3: https://github.com/UZ-SLAMLab/ORB_SLAM3