Storing 3D range data for localization
As I understand it, the typical way to use the navigation stack with a laser range finder is this: drive around collecting range data, filling up the voxel grid, then save the resulting 2D occupancy grid. The next day, you load this map, and amcl localizes the robot by comparing the new range data against the saved map.
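For reference, the workflow I'm describing looks roughly like this (a sketch using slam_gmapping for the mapping step; topic remappings are assumptions about a particular robot setup):

```shell
# Day 1: drive around while a SLAM node builds the 2D map
rosrun gmapping slam_gmapping scan:=base_scan

# Save the finished 2D occupancy grid to my_map.pgm / my_map.yaml
rosrun map_server map_saver -f my_map

# Day 2: serve the saved map and localize against it with amcl
rosrun map_server map_server my_map.yaml
rosrun amcl amcl scan:=base_scan
```

So the 3D structure of the scans is only used transiently; what gets persisted and matched against later is the flattened 2D grid.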
It seems like localization would work better if the full 3D data were saved and matched against instead. Why doesn't the navigation stack use a 3D map?