PaulHoule 3 days ago |
It's a 10,000 word rant at least, but...

- I've looked at the abyss of bankruptcy from server bills. I actually think it's a hell of a lot worse than Rachel does; I thought that 15 years ago, and I've suffered worse.

- The whole discussion around RSS has been naive in every respect from the very beginning, for instance Dave Winer thinking we care about him publishing a list of the tunes he listens to back when you couldn't actually listen to them (I'll grant that in the age of Apple Music, Spotify, and YouTube, such things may have caught up with him). There are never any systems thinkers, just people who see it all through a keyhole and can't imagine at all that anyone else sees it another way.

- To use a Star Wars analogy, Google is the evil empire that blew up our planet with a Death Star 10 years ago, and now we're living on the wreckage in an asteroid belt. I see the AI upstarts as the Rebel Alliance that, at the very least, reset the enshittification cycle back to the beginning and created some badly needed competition. Opponents of A.I. crawlers are brainwashed/living in the Matrix, de facto defending Google's monopoly and slamming the door on the exits from enshittification that A.I. agents offer (e.g. they look at the content and drop out the ads!). They think they're the Rebel Alliance but are really the stormtroopers suppressing the resistance until the Death Star charges up for the next shot.

- Rachel's supposedly some high-powered systems programmer or sysadmin or something, but she's just as naive as Winer. We're supposed to "just" use a cache (a sketch of what that could look like is at the end of this comment). Funny, the reason the web won out over every other communication protocol is that you can "just" use curl. curl by its nature can't support caching because it's a simple program that runs in one process; if you wanted a cache you'd need a complex locked data structure that would force a bunch of trade-offs... say I'm downloading something over my slow ADSL connection that takes two hours, but I also need to clear my cache: do I force the download to abort, or make the script that needs the cache cleared hang for two hours? curl is "cheap and cheerful" because you just don't have to deal with all the wacky problems that "clear the cache" or "disable the cache" clears up like a can of Ubik. But in the age of "Vibe Coding," solutions that almost work are all the rage, except when you finally realize they didn't actually work, you clear your cache, rerun, and BAM, you're banned from Rachel's blog because you hit it twice in 24 hours. Web browsers are some of the most complicated software in existence, ahead of compilers (they contain compilers) and system runtimes (they are a system runtime), right up there with operating systems (minus having to know things that aren't in the datasheet to write device drivers, at least). For the first 15 years you could not trust the cache if you were writing web applications; somewhere around the 2010s I finally realized you could take it for granted that the cache works right. I guess the implementations and the standards improved over time, but all this complexity is part of the reason it's Google's world and we all live in it, with just two big techs that can make a browser engine and one unaccountable, out-of-touch foundation.

So I wish Rachel would just find an answer to the cost problems (Cloudflare R2?), or give up on publishing RSS, or advocate ActivityPub, rather than assume we care what she says enough to follow her rules for her blog without seriously confronting what a system-wide solution would look like for the problems RSS tries to solve and the problems it poses.
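For what it's worth, when people say "just use a cache" for feed polling they usually mean HTTP conditional requests, not a shared on-disk cache with locking. Here's a rough sketch of what that could look like, assuming Python with the requests library; the feed URL and the JSON cache file are made up for illustration, not anything Rachel's blog actually requires:

    # Minimal sketch of a conditional-request feed fetcher.
    # Assumptions: the `requests` library is installed, the feed URL is
    # illustrative, and the "cache" is a single JSON file next to the script.
    import json
    import os
    import requests

    FEED_URL = "https://example.com/feed.xml"   # hypothetical feed
    CACHE_FILE = "feed_cache.json"              # hypothetical local cache

    def load_cache():
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as f:
                return json.load(f)
        return {}

    def save_cache(cache):
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)

    def fetch_feed():
        cache = load_cache()
        headers = {}
        # Send back the validators the server gave us last time, if any.
        if "etag" in cache:
            headers["If-None-Match"] = cache["etag"]
        if "last_modified" in cache:
            headers["If-Modified-Since"] = cache["last_modified"]

        resp = requests.get(FEED_URL, headers=headers, timeout=30)

        if resp.status_code == 304:
            # Server says nothing changed; reuse the stored body.
            return cache.get("body", "")

        resp.raise_for_status()
        # Store the new body and validators for the next poll.
        cache["body"] = resp.text
        if "ETag" in resp.headers:
            cache["etag"] = resp.headers["ETag"]
        if "Last-Modified" in resp.headers:
            cache["last_modified"] = resp.headers["Last-Modified"]
        save_cache(cache)
        return resp.text

    if __name__ == "__main__":
        print(fetch_feed()[:200])

The point is that a repeated poll, as long as the stored validators survive, costs the origin a 304 and a few bytes instead of the full page; whether that counts as "just" using a cache is exactly where the disagreement starts.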