hombre_fatal | 3 hours ago
In some ways that's true. But when it comes to git repos, an LLM agent like Claude Code can just clone them for local crawling, which is far better than crawling remotely, and it's the "Right Way" for various reasons. Frankly, I suspect AI agents will push search in the opposite direction from your comment and move us toward distributed cache workflows. These tools hit the origin today because that's the easy solution, not because the data needs to be up to date to the millisecond. Imagine a system where all those Fetch(url) invocations go through a local LRU cache instead (rough sketch below). That'd be really nice, and I think it's where we'd want to go, especially as more and more origin servers try to block automated traffic.
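To make the idea concrete, here's a minimal sketch of what such a fetch wrapper could look like. Everything in it is illustrative: the LruFetchCache name, the in-memory Map, and the size/TTL defaults are my assumptions, not anything an existing agent actually implements.

```typescript
// Sketch: route fetches through a local LRU cache so repeated agent
// lookups hit memory instead of the origin server.
type Entry = { body: string; fetchedAt: number };

class LruFetchCache {
  private cache = new Map<string, Entry>();

  constructor(
    private maxEntries = 1000,            // illustrative capacity
    private ttlMs = 60 * 60 * 1000        // 1h: stale-but-useful beats re-hitting origin
  ) {}

  async fetch(url: string): Promise<string> {
    const hit = this.cache.get(url);
    if (hit && Date.now() - hit.fetchedAt < this.ttlMs) {
      // Delete and re-insert to mark this entry as most recently used
      // (Map preserves insertion order).
      this.cache.delete(url);
      this.cache.set(url, hit);
      return hit.body;
    }
    // Cache miss or expired entry: go to the origin once, then cache.
    const res = await fetch(url);
    const body = await res.text();
    this.cache.set(url, { body, fetchedAt: Date.now() });
    if (this.cache.size > this.maxEntries) {
      // First key in insertion order is the least recently used.
      const oldest = this.cache.keys().next().value!;
      this.cache.delete(oldest);
    }
    return body;
  }
}
```

A real version would presumably persist to disk and respect HTTP cache headers, but even this in-memory form shows the shape: the agent's Fetch(url) calls become cache lookups, and the origin only sees traffic on misses.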