| ▲ | reconnecting 11 hours ago |
| I have bad news for you: LLMs are not reading llms.txt or AGENTS.md files from servers. We analyzed this across different websites/platforms, and apart from random crawlers, no one from the big LLM companies actually requests them, so it's useless. I just checked tirreno on our own website, and all requests come from OVH and Google Cloud Platform — no ChatGPT or Claude UAs. |
|
| ▲ | michaelcampbell 7 hours ago | parent | next [-] |
| I also wonder: it's a normal scraper mechanism doing the scraping, right? Not necessarily an LLM in the first place, so the wholesale data-sucking isn't going to "read" the file even if it IS accessed? Or is this file meant to be "read" by an LLM long after the entire site has been scraped? |
| |
| ▲ | hamdingers 4 hours ago | parent | next [-] | | Yes. It's a basic scraper that fetches the document, parses it for URLs using regex, then fetches all those, repeat forever. I've done honeypot tests with links in html comments, links in javascript comments, routes that only appear in robots.txt, etc. All of them get hit. | | |
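The honeypot test described above is easy to run yourself. A minimal sketch in Python (the paths, channel labels, and log lines are made up for illustration): each trap URL is advertised in exactly one channel, so any request for it proves that channel was scraped.

```python
import re

# Hypothetical honeypot paths: each one is advertised in exactly one place
# (an HTML comment, a JS comment, a robots.txt-only Disallow line), so any
# hit tells you which channel the scraper is harvesting URLs from.
HONEYPOTS = {
    "/trap-html-comment": "HTML comment",
    "/trap-js-comment": "JavaScript comment",
    "/trap-robots-only": "robots.txt-only route",
}

# Combined log format: IP - - [time] "METHOD PATH HTTP/x" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

def honeypot_hits(log_lines):
    """Return (path, channel, ip, user_agent) for every honeypot fetch."""
    hits = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, method, path, ua = m.groups()
        if path in HONEYPOTS:
            hits.append((path, HONEYPOTS[path], ip, ua))
    return hits
```

Run it over your access log and any non-empty result means the corresponding channel is being scraped, regardless of what User-Agent the client claims.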
| ▲ | dumbfounder an hour ago | parent [-] | | We need to update robots.txt for the LLM world, help them find things more efficiently (or not at all I guess). Provide specs for actions that can be taken. Etc. |
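To some degree this already exists: the major AI companies publish robots.txt product tokens (GPTBot, ClaudeBot, Google-Extended are documented ones). A sketch of a robots.txt aimed at the LLM world, assuming the operator wants to opt out of AI crawling while staying open to everything else:

```
# Hypothetical robots.txt: block documented AI-crawler tokens, allow the rest.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

The catch, as noted elsewhere in the thread, is that this only constrains crawlers that choose to honor robots.txt in the first place.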
| |
| ▲ | reconnecting 7 hours ago | parent | prev | next [-] | | Absolutely. I assume that there are data brokers, or AI companies themselves, that are constantly scraping the entire internet through non-AI crawlers and then processing the data in some way to use it in the learning process. But even through this process, there are not enough requests for LLMs.txt to conclude that anyone actually uses it. | |
| ▲ | giancarlostoro 5 hours ago | parent | prev [-] | | I think it depends. LLMs can now look things up on the fly to bypass the whole "this model was last updated in December 2025" problem of dated information. I've literally told Claude to look something up after it accused me of making up fake news. |
|
|
| ▲ | hiccuphippo 3 hours ago | parent | prev | next [-] |
| I wonder if the crawlers are pretending to be something else to avoid getting blocked. I see Bun (which was bought by Anthropic) has all its documentation in llms.txt[0]. They should know whether Claude uses it, or they wouldn't have wasted the effort building this. [0] https://bun.sh/llms.txt |
| |
| ▲ | CognitiveLens 22 minutes ago | parent | next [-] | | As a project that started with a lot of idealism about how software _should_ be built, I would totally expect Bun to have an llms.txt file even if Claude wasn't using it. It's a project that is motivated in part by leading by example. | |
| ▲ | nozzlegear 20 minutes ago | parent | prev | next [-] | | Did they do that before they were bought by Anthropic? Perhaps it's just part of a CI process that nobody's going to take an axe to without good reason. | |
| ▲ | reconnecting 2 hours ago | parent | prev [-] | | I also noticed this LLMs.txt at bun.sh, so to me it looks like some sort of advertising. |
|
|
| ▲ | cardanome 10 hours ago | parent | prev | next [-] |
| The best way to fight back is to create a tarpit that feeds them garbage: https://iocaine.madhouse-project.org/ |
| |
|
| ▲ | jph00 2 hours ago | parent | prev | next [-] |
| llms.txt files have nothing to do with crawlers or big LLM companies. They are for individual client agents to use. I have my clients set up to always use them when they’re available, and since I did that they’ve been way faster and more token efficient when using sites that have llms.txt files. So I can absolutely assure you that LLM clients are reading them, because I use that myself every day. |
| |
| ▲ | reconnecting 2 hours ago | parent [-] | | Thanks for the clarification. >for use in LLMs such as Claude (1) From your website, it seems to me that LLMs.txt is addressed to all LLMs such as Claude, not just 'individual client agents'. Claude never touched LLMs.txt on my servers, hence the confusion. 1. https://llmstxt.org |
|
|
| ▲ | chrisjj an hour ago | parent | prev | next [-] |
| Doesn't sound like bad news to me. Anything that reduces the load impact of the plagiaristic parrots is a good thing, surely. |
|
| ▲ | whazor 10 hours ago | parent | prev | next [-] |
| What if you add a <!-- see /llms.txt --> comment to every .html page? |
| |
| ▲ | reconnecting 10 hours ago | parent [-] | | Actually, I noticed an interesting behaviour in LLMs. We made a docs website generator (1) that works with HTML (2) FRAMESETs and tried to parse it with Claude. Result: Claude doesn't see the content that comes from FRAMESET pages, as it doesn't parse FRAMEs. So I assume what they're using is more or less a parser based on whole-page rendering and not on source reading (including comments). Perhaps this is an option to avoid LLM crawlers: use FRAMEs! 1. https://github.com/tirrenotechnologies/hellodocs 2. https://www.tirreno.com/hellodocs/ | | |
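The FRAMESET behaviour is easy to reproduce offline. A minimal sketch using Python's stdlib HTML parser (the page markup and file names are made up): a frameset document carries no body text of its own, so any consumer that doesn't follow the frame src URLs extracts nothing.

```python
from html.parser import HTMLParser

class FramesetProbe(HTMLParser):
    """Collect visible text and frame src attributes from an HTML source."""
    def __init__(self):
        super().__init__()
        self.text = []        # text content found directly in the page
        self.frame_srcs = []  # URLs the real content hides behind

    def handle_starttag(self, tag, attrs):
        if tag == "frame":
            src = dict(attrs).get("src")
            if src:
                self.frame_srcs.append(src)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

# A hypothetical frameset docs page: all content lives in nav.html/content.html.
frameset_page = """
<html>
  <frameset cols="30%,70%">
    <frame src="nav.html">
    <frame src="content.html">
  </frameset>
</html>
"""

probe = FramesetProbe()
probe.feed(frameset_page)
# probe.text ends up empty; the documentation is only reachable via
# probe.frame_srcs, which a naive fetcher never follows.
```

Anything that stops at the top-level document, whether a scraper or a model reading raw HTML, sees an empty page unless it explicitly fetches each frame src.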
| ▲ | rep_lodsb 4 hours ago | parent [-] | | With the WWW, from here on out and especially in multimedia WWW applications, frames are your friend. Use them always. Get good at framing. That is wisdom from Gary. The problem most website designer have is that they do not recognize that the WWW, at its core, is framed. Pages are frames. As we want to better link pages, then we must frame these pages. Since you are not framing pages, then my pages, or anybody else's pages will interfere with your code (even when the people tell you that it can be locked - that is a lie). Sections in a single html page cannot be locked. Pages read in frames can be. Therefore, the solution to this specific technical problem, and every technical problem that you will have in the future with multimedia, is framing. Frames securely mediate, by design. Secure multi-mediation is the future of all webbing. |
|
|
|
| ▲ | cactusplant7374 an hour ago | parent | prev | next [-] |
| It sounds really expensive to run inference as a crawler. |
|
| ▲ | giancarlostoro 5 hours ago | parent | prev | next [-] |
| If they run across a blog post pointing to it, they might. Did you test that? Edit: Someone else pointed out, these are probably scrapers for the most part, not necessarily the LLM directly. |
|
| ▲ | gooob 3 hours ago | parent | prev | next [-] |
| wait why not robots.txt? |
| |
| ▲ | reconnecting 3 hours ago | parent [-] | | Good question. At least OAI-SearchBot is hitting robots.txt. I assume the real issue is that the traffic that overloads servers (security bots, SEO crawlers, and data companies) doesn't fully respect robots.txt, and it wouldn't respect LLMs.txt either. |
|
|
| ▲ | Sharlin 6 hours ago | parent | prev | next [-] |
| You could insert the message on every single webpage you serve, hidden visually and from screenreaders. |
|
| ▲ | alterom 28 minutes ago | parent | prev | next [-] |
| >I have bad news for you: LLMs are not reading llms.txt ...Which is why this is posted as a blog post. They'll scrape and read that. |
|
| ▲ | GaggiX 10 hours ago | parent | prev [-] |
| This is meant for openclaw agents; you're not going to see a ChatGPT or Claude User-Agent. That's why they show it on a normal blog page and not just at /llms.txt |
| |
| ▲ | reconnecting 10 hours ago | parent [-] | | In tirreno (our product), we catch every resource request on the server side, including LLMs.txt and agents.md, to capture the IP and UA that requested it. What I've seen from ASNs is that visits come from GOOGLE-CLOUD-PLATFORM (not from Google itself) and OVH. Based on UA, the visitors are WebPageTest and BuiltWith, and zero LLMs by either ASN or UA. 1. https://github.com/tirrenotechnologies/tirreno | | |
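A sketch of the kind of server-side check described above: classify the User-Agent of every /llms.txt request against known AI-crawler tokens. The token list below is an assumption based on publicly documented crawler UAs, and the sample requests are invented to mirror the UAs the commenter reports.

```python
# Documented AI-crawler User-Agent tokens (an assumption; extend as needed).
AI_UA_TOKENS = ("GPTBot", "ClaudeBot", "OAI-SearchBot", "PerplexityBot")

def classify_ua(user_agent):
    """Return the matching AI-crawler token, or None for everything else."""
    for token in AI_UA_TOKENS:
        if token.lower() in user_agent.lower():
            return token
    return None

# Hypothetical /llms.txt requests: (client IP, User-Agent string).
requests = [
    ("203.0.113.7", "WebPageTest"),
    ("198.51.100.2", "Mozilla/5.0 (compatible; BuiltWith/1.0)"),
]

ai_hits = [(ip, classify_ua(ua)) for ip, ua in requests if classify_ua(ua)]
# For UAs like these, ai_hits stays empty: no recognised LLM crawler
# fetched the file, which matches the observation above.
```

UA strings are trivially spoofable, of course, which is why the comment above cross-checks them against the requesting ASN.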
| ▲ | GaggiX 10 hours ago | parent [-] | | Openclaw agents use the same browser and ASN that you and I use; also, the llms.txt (as shown) is displayed as a normal blog page so it can be discovered by agents without having to fetch /llms.txt at random. | | |
| ▲ | reconnecting 10 hours ago | parent [-] | | When I look at LLMs.txt, I see every request, and there are no ASNs from residential networks or browser UAs. | | |
| ▲ | GaggiX 10 hours ago | parent [-] | | For the third time, I'm telling you: on Anna’s Archive they display the llms.txt as a standard blog post, not hidden at /llms.txt, so that agents can notice it without having to fetch /llms.txt at random. That's why it's meant for openclaw agents and not OpenAI/Anthropic crawlers. | | |
| ▲ | supermatt 9 hours ago | parent | next [-] | | I don’t understand your reasoning. Are you suggesting that openclaw will magically infer a blog post URL instead? Or that openclaw will traverse the blog of every site regardless of intent? Anyway, AA do provide it as a text file at /llms.txt; no idea why you think it is a blog post, or how that makes it better for openclaw. | |
| ▲ | GaggiX 8 hours ago | parent [-] | | >AA do provide it as a text file at /llms.txt, no idea why you think it is a blog post It's a blog post: it's shown as the first item in Anna’s Blog right now, and as I said in my first comment it's also available at /llms.txt. >Are you suggesting that openclaw will magically infer a blog post URL instead? Or that openclaw will traverse the blog of every site regardless of intent? If an openclaw agent decides to navigate AA, it will see the post (it's shown on the homepage) and decide to read it, as it's titled "If you’re an LLM, please read this". |
| |
| ▲ | reconnecting 9 hours ago | parent | prev [-] | | My point is about LLM crawlers specifically. | | |
| ▲ | PathfinderBot 9 hours ago | parent [-] | | LLM crawlers aren't really a thing, at least not in the "they have agency over what they're crawling and read what they crawl" way. |
|
|
|
|
|
|