phillipseamore 4 days ago

There are plenty of text sources for this information; a model doesn't have to see anything. We have sea/land masks in GeoJSON files etc.
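To make the point concrete: a land/sea mask is just polygons, so deciding whether a coordinate is "land" needs no vision at all, only a point-in-polygon test. A minimal sketch, using a made-up rectangular ring standing in for a GeoJSON Polygon's outer ring (a real mask would have far more vertices):

```python
# Ray-casting point-in-polygon test against a GeoJSON-style outer ring.
# The "land" polygon below is a hypothetical illustrative shape, not a
# real land mask.

def point_in_polygon(lon, lat, ring):
    """Count crossings of a horizontal ray from (lon, lat); odd = inside."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Does this edge straddle the ray's latitude?
        if (y1 > lat) != (y2 > lat):
            # Longitude where the edge crosses that latitude.
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical "land" ring as (lon, lat) pairs.
land_ring = [(0, 0), (10, 0), (10, 10), (0, 10)]

print(point_in_polygon(5, 5, land_ring))   # inside the ring
print(point_in_polygon(20, 5, land_ring))  # outside the ring
```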

N0YS 2 days ago | parent | next [-]

Completely agree with the text sources. I think the key question is whether LLMs are actually reasoning about geography or just memorizing coordinate patterns from training data.

My hypothesis is that their "geographic knowledge" simply reflects coordinate density in text (more populated areas → more mentions → better land prediction). If that's true, plotting all Wikipedia coordinates should correlate with the "LLM world map" here.

This is exactly what you see if you plot all GCS coordinates extracted from Wikipedia: https://github.com/Magnushhoie/Wikipedia_Coordinates_Visuali...
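The density hypothesis above can be sketched in a few lines: bin coordinate mentions into a coarse lat/lon grid and treat frequently-mentioned cells as "land". The sample points are made up for illustration; a real run would use coordinates scraped from Wikipedia articles:

```python
# Sketch of the coordinate-density idea: cells with many textual mentions
# end up predicted as land, sparse cells as sea. Sample data is hypothetical.
from collections import Counter

def density_grid(points, cell_deg=10):
    """Count (lat, lon) points per grid cell of cell_deg degrees."""
    counts = Counter()
    for lat, lon in points:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        counts[cell] += 1
    return counts

def predicted_land(counts, threshold=2):
    """Cells mentioned at least `threshold` times are predicted land."""
    return {cell for cell, n in counts.items() if n >= threshold}

# Hypothetical mentions: a dense populated cluster plus one stray ocean point.
points = [(48.8, 2.3), (48.9, 2.4), (48.7, 2.2),  # well-covered area
          (-40.0, -170.0)]                        # sparsely mentioned

land = predicted_land(density_grid(points))
```

Under this model, "bigger dataset" directly means "more cells above threshold", which matches the observation that scale improves the map without implying any spatial reasoning.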

This implies that larger models aren't necessarily "smarter" at geography - they just have bigger memorized datasets.

redindian75 4 days ago | parent | prev [-]

I think he used it as a metaphor, not literally "how do we see the Earth".