| ▲ | schoen 5 hours ago |
| I was a great admirer (and later friend) of Barlow, and I'm still very deeply influenced by the Declaration and many adjacent phenomena. I agree with some fraction of this post in terms of seeing many people shelve these principles when it gets inconvenient for them.

In the past few months, I've been troubled by one specific part of the Declaration, in the final paragraph:

> We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

Specifically, I think the cyberspace civilization, to the extent that it exists, has been a failure lately on "humane" in the broad sense. The author of the linked post might say that this has to do with the need for moderation (indeed, this is a big surprise from the 1996 point of view, when there were still unmoderated Usenet groups that people used regularly and enthusiastically, and spam was a recent invention). I think there are lots of other things going on there over and above the moderation issue, but one is that the early Internet culture was very self-selected for people who thought that the ability to talk to people and the ability to access information were morally virtuous.

I was going to say that it was self-selected for intellectualism, but I know that early Internet participants were often not particularly scholarly or intellectually sophisticated (some of our critics, like Langdon Winner, quoted here, or Phil Agre, were way ahead on that score). So I might say it was self-selected for people who admired some forms of communicative institutions, maybe like people whose self-identity includes being proud of spending time in a library or a bookstore, or who join a debate club. (Both of those applied to me.) This is of course not quite the same thing as intellectual sophistication.

People were mean to each other on the early Internet, but ... some kind of "but" belongs here. Maybe "but it was surprising, it wasn't what they expected"? "But it wasn't what they thought it was about"?

Nowadays "humane" feels especially surprising as a description of an aspiration for online communications. It's kind of out the window, and a lot of us find that our online interactions are much less humane than what we're used to offline. More demonization of outgroups, more fantasies of violence against them, more celebration of violence that actually occurs, more joy that one's opponents are suffering in some way. (I see this as almost fully general and not just a pathology of one community or ideology.)

I'm troubled by this both because it's unpleasant and even scary how inhumane a lot of Internet communities and conversation can be, and because it's jarring to see Barlow predict that specific thing and get it wrong that way. Many other things Barlow was optimistic about seem to me to have actually come to pass, although imperfectly or sometimes corruptly, but not this one. |
|
| ▲ | lampiaio 4 hours ago | parent | next [-] |
| The article was interesting to read not necessarily as a generative spark but as a datapoint, a symptom of how effective, in the long run, the response from those who saw the internet as a threat has been.

Only someone who's lost the plot (or arrived late) would summarily conflate Barlow's 1996 Declaration with "one of those sovereign citizen TikToks where someone in traffic court is claiming diplomatic immunity under maritime law". The article itself has fallen victim to the weaponized co-optation whose framework it describes.

The author says "I remember thinking it was genius when I first read it. I was young enough [...]", believing it was due to being impressionable, but it's more likely that it was due to having lost something along the way. Or rather, it was stolen from them and they didn't even realize.

The Declaration was right; it was just naively optimistic, severely underestimated its opponent, and incorrectly presumed digital natives would automatically be on the "right" side. Now we are where we are. And it's just the beginning of the pendulum's counterswing. |
| |
| ▲ | mindslight 3 hours ago | parent [-] | | Could you please keep going? Maybe I'm just old, tired, and have other responsibilities, but things are feeling pretty bleak these days.

Google is back to pushing remote attestation (i.e. WEI); Apple has already had it for quite some time. "AI" is a great Schelling-point excuse for capital structures to collude rather than compete, whether it's demanding identification / "system integrity" (aka computational disenfranchisement) for routine Web tasks or simply making computing hardware unaffordable (and thus even less practical for most people, whether it's GPUs, RAM, or RPis for IoT projects).

There are some silver linings, like AI codegen empowering individuals to solve their own problems, and/or really go to town hacking/polishing their libre projects for others to use. But at best I see a future 5-10 years down the road where I've got a few totally-pwnt corporate-government-approved devices for accomplishing basic tasks (with whatever I/O devices are cost-effective from the subset we're allowed to use), and then my own independent network that cannot do much of what's required to interface with (i.e. exist in) wider society. | | |
| ▲ | js8 2 hours ago | parent | next [-] | | In many countries, people have already won similar fights over the printing press, press censorship, and encryption. I think there is a reason for optimism (of the will). If AI can code, and empower individuals to do it on a local device, it is already smart enough to educate the masses on matters of their self-interest, such as freedom and solidarity. I don't think the powers that be will manage to gatekeep it. There might be some grief, but overall human freedom will prevail. | | | |
| ▲ | TheOtherHobbes 31 minutes ago | parent | prev | next [-] | | I suspect this is correct, and the push towards "age verification" (i.e. user ID collection hiding behind a pretext), the insane build-out of server farms, which is making commodity computing unaffordable, and the push towards AI in everything are all pointing in the same direction.

The 1990s vision of computing was a bicycle - or car - for the mind. It was libertarian in the sense that if you had a device, it would empower you to get where you wanted to go more quickly. And the rhetoric around it was very much about personal exploration on a new and exciting frontier.

The 2020s vision is more like a totalitarian transport network where you don't own the vehicle, you don't own the network, there's constant propaganda telling you how to structure your journey to the standard destinations, and deviation is becoming increasingly impossible. The device is just an access port to the network. It's dumbed down, so even if you understand how it works you can't do much with it. And as AI becomes more prevalent, your ability to understand it will diminish further.

So the end result is very plausibly a state where you're completely reliant on AI to do anything. And AI is owned by the pseudo-state oligopoly - the same oligopoly that runs the propaganda networks that sell you ads, hype selected content while suppressing other content, and generally try to influence your behaviour.

It's the complete opposite of the original vision. Will consumer AI fix this? Probably not. Even if the hardware keeps improving - debatable - a personal device is never going to be able to compete, in any sense, with an international network of data centres. | | |
| ▲ | vrganj 26 minutes ago | parent [-] | | > Even if the hardware keeps improving - debatable - a personal device is never going to be able to compete, in any sense, with an international network of data centres. There's one way to deal with this, but I doubt it'll be popular in these parts: Communal ownership of the means of production. Don't use the oligarchy's AI. Your personal hardware is going to be too weak. But together, we can own our own server farms. |
| |
| ▲ | schoen 2 hours ago | parent | prev | next [-] | | Alongside "1984 wasn't an instruction manual" we may need the slogan "'The Right to Read' wasn't an instruction manual". | |
| ▲ | throwawayqqq11 10 minutes ago | parent | prev [-] | | The corpolibertarians are betting massively on AI to liberate them from the working class, transforming societies and economies as needed in its wake. I think this long-term goal is delusional and the day of the pitchforks is coming. They can't endlessly fabricate distracting images of enemies, like migrants or whatever, while inflating budgets and claims about the future. |
|
|
|
| ▲ | zozbot234 2 hours ago | parent | prev | next [-] |
| > Specifically, I think the cyberspace civilization, to the extent that it exists, has been a failure lately on "humane" in the broad sense.

I disagree. By meaningful real-world standards, the average Internet space is in fact extremely humane and polite. People will bring up the exceptions where groups of people absolutely hate one another and those hatreds eventually spill over into online spaces, but that's what they are: limited exceptions. By and large, the average online interaction is potentially far more reflective of desirable human values than the ways complete strangers usually interact offline. Perhaps this is a matter of pure self-selection among a tiny niche of especially intellectually minded folks, but even if that were the case, it would still be creating an affordance that wasn't there before. |
| |
| ▲ | TheOtherHobbes 25 minutes ago | parent [-] | | By meaningful real-world standards, there are bot farms sowing dissent and literally driving people into mental illness, which has already destroyed many families. At the same time there's the Cambridge Analytica/SCL strand, where a corporation literally sells election-fixing services that rely on data gathered from social media accounts.

To be fair, these are all extensions of political and media trends that already existed, and which online tech could amplify by some orders of magnitude. Even so, the damage is very real.

One standard technique is to use attack bots to find a wedge issue and weaponise it by raising the temperature on both sides. This can easily be automated now, so we're well past the point where literal humanity is the most important element. |
|
|
| ▲ | Forgeties79 4 hours ago | parent | prev | next [-] |
| > I think there are lots of other things going on there over and above the moderation issue, but one is that the early Internet culture was very self-selected for people who thought that the ability to talk to people and the ability to access information were morally virtuous.

Honestly, I think it mostly self-selected based on who had the technical ability to participate, especially at that time. |
| |
| ▲ | blatherard 2 hours ago | parent [-] | | Also, early internet access was gated by institutions. Most people were using their work or school internet access to be online, and so behavior was naturally more controlled. When I was first online (circa 1990), I could have been "kicked off the internet" by my college's IT department. | | |
|
|
| ▲ | mrexcess 3 hours ago | parent | prev | next [-] |
| The revelations that Epstein had interest and involvement in the development of 4chan really make me wonder what we would find behind the curtain at next iterations like KiwiFarms, etc., if we looked hard enough. Not to strike an overly conspiratorial note, but sowing division within a foreign culture is one of those things that intelligence communities excel at; it might match some patterns we've seen, and it would help explain some of the divergence between expectation and reality here. |
| |
| ▲ | schoen 2 hours ago | parent | next [-] | | There is a theory that some skeptics of tech optimism have advanced for a while, that governments like Internet freedom and widespread availability of ICTs in rival nations because it either (1) makes people there hate and fear each other, or (2) makes them easier to propagandize. In this account the U.S. State Department's Internet Freedom Agenda (which many of my friends and colleagues have been directly funded by) is about destabilizing other countries, while Russian or Chinese spies in turn relish American Internet freedoms because they can stir up conflicts here. I have never endorsed this view but I've run into forms of it again and again and again. Adjacent to it is the idea that some of our prior social harmony was due to a more controlled or at least more homogeneous media landscape. | | |
| ▲ | mrexcess 2 hours ago | parent | next [-] | | I definitely buy into the “monoculture” argument a bit. When hundreds of millions of people are all voraciously consuming the same very limited cultural messaging - three TV stations, a handful of movie studios, a handful of major book publishers - there is bound to be a leveling of interpersonal expectations that will be absent in a more fragmented culture. That’s not some kind of crypto-denunciation of cosmopolitan diversity, but it is what it is, and I do think there’s a there, there. | |
| ▲ | vrganj 24 minutes ago | parent | prev [-] | | You can see this playing out right now, with X spreading Holocaust denial and all sorts of corrosive messages in Europe, with its owner being actively hostile to European institutions and the US government actively shielding it from consequences. |
| |
| ▲ | paganel 2 hours ago | parent | prev [-] | | > The revelations that Epstein had interest and involvement in the development of 4chan really make me wonder what we would find behind the curtain at next iterations like KiwiFarm

For starters, that Putin was right when he was calling the internet a CIA project back in 2010 or 2011, thereabouts.

Later edit: From 2013 [1]:

> Barlow: Let me give you an example: I have been advising the CIA and NSA for many years, trying to get them to use open sources of information. If the objective is really to find out what is going on, the best way to do this is by trading on the information market, where you give information to get information.

[1] https://www.huffpost.com/entry/i-want-to-tear-down-the-v_b_4... |
|
|
| ▲ | Barrin92 4 hours ago | parent | prev [-] |
>has been a failure lately on "humane" in the broad sense.

I never saw this as surprising, because cyber-libertarianism reads like Gnosticism to me. Even in the sentence you quoted there's already the subtext of being left out: "more humane and fair than the world your governments have made", etc. (an odd choice of possessive for a man who was campaign coordinator for Dick Cheney).

The people who were into this stuff tended to have an unhealthy relationship to their physical bodies and physical communities, felt excluded, and tended to have an Ender's Game psychology of feeling both inferior and superior at the same time (an extremely bad combination for people with power), equipped with the secret cyber knowledge that would give them access to some new space nobody else knew of. I was never surprised that you got Peter Thiel and Palantir out of this instead of a digital utopia. |