chasd00 2 hours ago
I replied as much to a sibling comment, but I think this is a way to wiggle out of robots.txt, user-agent string checks, and the other traditional ways sites filter for bots.
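For context, the "traditional" filtering described here usually amounts to a server-side check of the User-Agent header against known crawler strings. A minimal sketch of that idea in Python (the blocklist, function name, and substrings are illustrative, not any real site's rules):

    # Rough sketch of user-agent based bot filtering; the substrings and
    # function here are placeholders for illustration only.
    KNOWN_BOT_SUBSTRINGS = ("GPTBot", "CCBot", "Googlebot", "python-requests")

    def looks_like_bot(user_agent: str) -> bool:
        """Return True if the User-Agent header matches a known crawler string."""
        ua = (user_agent or "").lower()
        return any(s.lower() in ua for s in KNOWN_BOT_SUBSTRINGS)

An agent driving a real browser sends a stock Chrome user agent, so it sails straight past this kind of check, which is the "wiggle out" being described.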
lukev an hour ago
Right, but those things exist to prevent bots, which this is. So at this point we're talking about joining the (very old) arms race between scrapers and content providers. If enough people want agents, then services should (or will) provide agent-compatible APIs. The video round-trip remains stupid from a whole-system perspective.
mvdtnz 38 minutes ago
I mean, if they want to "wiggle out" of robots.txt they can just ignore it. It's entirely voluntary.
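That's the crux: robots.txt has no enforcement mechanism. The whole check happens on the client, as in this minimal Python sketch using the standard library's robotparser (the URL and agent name are placeholders):

    # Sketch showing why robots.txt is voluntary: the client fetches the file
    # and decides for itself whether to honor the answer.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    allowed = rp.can_fetch("MyAgent/1.0", "https://example.com/some/page")
    # A well-behaved client stops when allowed is False; a client that wants
    # to "wiggle out" simply never runs this check at all.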