It seems that you are running 2 SLAM algorithms in parallel, which is a bit confusing, but you could still try one of the following:
1) The output of SLAM is {map + global pose estimate}, so you actually have 2 different maps with 2 different global pose estimates. There is no trivial way to fuse all of these directly AFAIK. I guess if you are not interested in fusing the maps, you could take one of them as a reference and just fuse the global pose estimates through an EKF/UKF via the robot_localization package (see the sketch at the end of this answer).
2) If one of these 2 SLAM algorithms used odometry as an input and you already had a source of odometry (e.g. wheel odometry, visual odometry...), you could use the other SLAM algorithm to provide an alternative odometry estimate, fuse it through an EKF/UKF with your existing odometry, and then feed the "improved" (?) odometry estimate to your second SLAM algorithm.
For instance, hector_slam can provide such an odometry estimate via the undocumented pub_odometry parameter, see this question.
The problem here is that neither of the two SLAM algorithms you mentioned (I am less familiar with PTAM, though) uses any source of odometry...
You could try to modify hector_slam to use odometry, but that requires some work, see this question. As for PTAM, I don't know.
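In case it helps, here is a minimal sketch (ROS 1 / rospy) of the kind of glue node both options need: robot_localization's EKF consumes nav_msgs/Odometry (among others), so a SLAM pose published as geometry_msgs/PoseStamped has to be republished with a covariance attached. The slam_pose / slam_odom topic names, the base_link frame, and the covariance values are placeholders you would have to adapt to your setup:

```python
#!/usr/bin/env python
# Hypothetical bridge node: republishes a SLAM pose estimate as a
# nav_msgs/Odometry message so that robot_localization's EKF node can
# fuse it with another pose/odometry source. Topic names, frame id and
# covariance values are made up; adapt them to your system.
import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

def pose_callback(pose_msg):
    odom = Odometry()
    odom.header = pose_msg.header        # keep the SLAM timestamp and frame
    odom.child_frame_id = "base_link"    # assumed body frame
    odom.pose.pose = pose_msg.pose
    # Rough, hand-tuned uncertainty: the EKF needs a covariance to weight
    # this source against the other one (diagonal order: x y z roll pitch yaw).
    cov = [0.0] * 36
    for i, var in enumerate([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]):
        cov[i * 6 + i] = var
    odom.pose.covariance = cov
    odom_pub.publish(odom)

if __name__ == "__main__":
    rospy.init_node("slam_pose_to_odom")
    odom_pub = rospy.Publisher("slam_odom", Odometry, queue_size=10)
    # Remap slam_pose to whatever topic your SLAM node actually publishes.
    rospy.Subscriber("slam_pose", PoseStamped, pose_callback)
    rospy.spin()
```

You would then point robot_localization's EKF at both sources (e.g. odom0 for your wheel odometry and odom1 for slam_odom, with the odomN_config arrays selecting which pose fields to fuse) and tune the covariances until the fused estimate behaves sensibly.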
Are you using these 2 SLAM algorithms to estimate motion increments (local pose estimation), or to estimate a "fused map" + pose within the fused map (map + global pose estimation)?
hector_slam is building up a map as it goes along and giving out that map and a pose estimate. PTAM is working out increments, I think: it gets its key points when it starts up and then goes from there (it also uses the drone's control inputs for the pose estimate).