Olshansky 4 hours ago
Resurfacing a proposal I put out on llms-txt: https://github.com/AnswerDotAI/llms-txt/issues/88

We should add optional `tips` addresses to llms.txt files. We're also working on enabling and solving this at Grove.city.

Human <-> Agent <-> Human

Tips don't cover every edge case, but they're a necessary and happy neutral medium. Moving fast. Would love to share more with the community. Wrote about it here: https://x.com/olshansky/status/2008282844624216293
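Roughly, the idea is that an agent reading a site's llms.txt could pick up an optional tips address and route a payment back to the content creator. A minimal sketch of the reading side, assuming a hypothetical optional `Tips:` line; the field name, its placement, and the address format are illustrative only, not part of the llms.txt spec or any accepted version of the proposal:

```python
# Minimal sketch: extract a hypothetical optional "Tips:" line from an
# llms.txt body. Field name and address format are assumptions for
# illustration, not an agreed-upon spec.
import re
from typing import Optional

TIPS_RE = re.compile(r"^\s*[Tt]ips:\s*(?P<addr>\S+)\s*$", re.MULTILINE)

def extract_tips_address(llms_txt: str) -> Optional[str]:
    """Return the tip address declared in an llms.txt body, if any."""
    match = TIPS_RE.search(llms_txt)
    return match.group("addr") if match else None

if __name__ == "__main__":
    sample = """# Example Site

> Docs for example.com, structured for LLM consumption.

Tips: pokt1exampleaddress0000000000000000000000
"""
    print(extract_tips_address(sample))  # prints the example address
```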
ricardo81 4 hours ago
I like the idea; (original) content creators being credited is good for the entire ecosystem. Though if LLMs are willingly ignoring robots.txt, often hiding themselves or using third-party scraped data, are they going to pay?
bediger4000 4 hours ago
At this point, it's pretty clear that AI scrapers won't be limited by any voluntary restrictions. ByteDance never seemed to abide by robots.txt limitations, and I think at least some of the others didn't either. I can't see this working.