allg12 | 4 days ago
Hey karussell, I really appreciate all the hard work you've put into GraphHopper. I wouldn't have been able to create this project without GH. I have a question about memory usage during the import stage (specifically in the OSM reader's preprocessRelations function). I'm using a HashMap<Long, List<Long>> to map way IDs to OSM bike route relation IDs, which means allocating lots of arrays. Could this be causing me to run out of heap memory faster, or am I off base here? I thought I would be able to compute the graph with 64GB of RAM, but it kept crashing before the CH and LM stage. After switching to a 128GB instance, it finally worked, hitting around 90GB at peak memory usage. For context, I was using 3 profiles (one with CH and two with LM) plus elevation data, and I followed all of the tips from deploy.md.
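[Editor's note: a minimal sketch of one way to shrink such a way-ID-to-relation-IDs multimap. The class and method names here are hypothetical, not GraphHopper's API. Instead of a HashMap<Long, List<Long>> (boxed Long keys, a List header, and a boxed Long per element), store the (wayId, relationId) pairs in two parallel long[] arrays sorted by way ID and look up members with a binary search, for roughly 16 bytes per entry.]

```java
import java.util.Arrays;

// Hypothetical sketch: a read-only way->relations index backed by two
// parallel primitive long[] arrays instead of HashMap<Long, List<Long>>.
public class WayRelationIndex {
    private final long[] wayIds;
    private final long[] relationIds;

    public WayRelationIndex(long[][] pairs) {
        // Sort pairs by wayId so all relations of a way sit adjacently.
        // (Arrays.sort on Object[] is stable, so insertion order of the
        // relation IDs within one way is preserved.)
        Arrays.sort(pairs, (a, b) -> Long.compare(a[0], b[0]));
        wayIds = new long[pairs.length];
        relationIds = new long[pairs.length];
        for (int i = 0; i < pairs.length; i++) {
            wayIds[i] = pairs[i][0];
            relationIds[i] = pairs[i][1];
        }
    }

    /** Returns the relation IDs referencing the given way (possibly empty). */
    public long[] relationsFor(long wayId) {
        int i = Arrays.binarySearch(wayIds, wayId);
        if (i < 0) return new long[0];
        // binarySearch may land anywhere inside a run of equal keys,
        // so expand to the full run before copying the values out.
        int lo = i, hi = i;
        while (lo > 0 && wayIds[lo - 1] == wayId) lo--;
        while (hi < wayIds.length - 1 && wayIds[hi + 1] == wayId) hi++;
        return Arrays.copyOfRange(relationIds, lo, hi + 1);
    }

    public static void main(String[] args) {
        WayRelationIndex idx = new WayRelationIndex(new long[][]{
                {42L, 7L}, {42L, 9L}, {100L, 7L}
        });
        System.out.println(Arrays.toString(idx.relationsFor(42L))); // [7, 9]
        System.out.println(idx.relationsFor(5L).length);            // 0
    }
}
```

The trade-off is that the index is built once after collecting the pairs rather than updated incrementally, which fits a one-shot import pass like preprocessRelations.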
giamma | 3 days ago
Love your project! Maybe you've already considered this, but there are a number of collection libraries optimized for holding Java primitives and/or very large data sets, which could save you significant memory. Eclipse Collections [0] and fastutil [1] come to mind first, but there are many others [2].

[0] https://github.com/eclipse-collections/eclipse-collections
[1] https://fastutil.di.unimi.it/
[2] https://github.com/carrotsearch/hppc/blob/master/ALTERNATIVE...
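[Editor's note: the memory win these libraries offer comes from storing keys and values inline in primitive arrays instead of per-entry Node objects with boxed Longs. A rough sketch of that idea, assuming a fixed power-of-two capacity and no resizing (real libraries handle growth, removal, and hash mixing far more carefully); the class is hypothetical, not any library's API.]

```java
import java.util.Arrays;

// Toy open-addressed long->long hash map backed by plain long[] arrays,
// illustrating the layout fastutil/HPPC-style primitive maps use: no
// boxing, no Entry objects, just two flat arrays and linear probing.
public class LongLongMap {
    private static final long EMPTY = Long.MIN_VALUE; // sentinel; assumed never used as a key
    private final long[] keys;
    private final long[] values;
    private final int mask;

    public LongLongMap(int capacityPow2) { // capacity must be a power of two
        keys = new long[capacityPow2];
        values = new long[capacityPow2];
        Arrays.fill(keys, EMPTY);
        mask = capacityPow2 - 1;
    }

    private int slot(long key) {
        int h = (int) (key ^ (key >>> 32)); // fold the long into an int hash
        int i = h & mask;
        while (keys[i] != EMPTY && keys[i] != key)
            i = (i + 1) & mask; // linear probing on collision
        return i;
    }

    public void put(long key, long value) {
        int i = slot(key);
        keys[i] = key;
        values[i] = value;
    }

    /** Returns the mapped value, or defaultValue if the key is absent. */
    public long getOrDefault(long key, long defaultValue) {
        int i = slot(key);
        return keys[i] == EMPTY ? defaultValue : values[i];
    }
}
```

In practice you would not hand-roll this: fastutil's Long2ObjectOpenHashMap combined with its LongArrayList, for example, is a drop-in-style replacement for a HashMap<Long, List<Long>> that avoids boxing the keys and the list elements.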
karussell | 3 days ago
> Could this be causing me to run out of heap memory faster

Yes, definitely.

> I thought I would be able to compute the graph with 64GB of ram but it kept crashing before CH and LM stage.

For normal GraphHopper and just the EU, 64GB should be more than sufficient.