tucnak 7 hours ago

Not only is it an opinion piece disguised as a scientific "article" with a veneer of law, it has all the hallmarks of quackery: flowery language full of allegory and poetic comparison, and hundreds of superficial references from every area imaginable sprinkled throughout, including but not limited to Medium blog posts, news outlets, IBM one-page explainers, and random sociology literature from the '40s, '60s, and '80s.

It reads as if a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, and proceeded to be informed on the subject of AI almost exclusively by journalists from popular outlets like Wired, Engadget, and The Atlantic, then brought it all together by shoddily referencing his peers at Harvard and some curious-sounding '80s sociology. But who cares, as long as AI bad, am I right?

46493168 6 hours ago | parent

Are there any particular points you want to refute?

tucnak 5 hours ago | parent

I'm finding it hard to identify any particulars in this piece, given the largely self-defeating manner in which the arguments are presented, or should I say compiled, from popular media. Had it not been endorsed by Stanford in some capacity and sensationalised with a punchy headline, we wouldn't be having this conversation in the first place! Much has been said about the various purported externalities of LLM technology, and continues to be said daily, here in Hacker News comments if not elsewhere. Between wannabe ethicists and LessWrong types contemplating the meaning of the word "intelligence," there's no shortage of opinions on AI.

If you'd like to hear my opinion, I happen to think that LLM technology is the most important thing, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein offers the only viable framework in all of the humanities for comprehending AI. Partly because that is what an LLM "does": it computes arbitrary discourses. And partly because that is what all good humanities end up doing: examining arbitrary discourses, not unlike the current affairs cited in the opinion piece at hand, for the arguments they present and, ultimately, for the language used to construct those arguments. If we're going to be concerned with AI in that way, we should start by making an effort to avoid the kinds of language games that allow frivolously substituting "what AI does" for "what people do with AI."

This may sound simple, obvious even, but it also happens to be much easier said than done.

That is not to say AI makes no material difference to what people would otherwise do without it. But all of language is a tool, a hammer, if you will, that only gains meaning in use, and AI is no different in that respect. For the longest time, humans held a monopoly on computing arbitrary discourses. This is why lawyers exist, too: so that we may compute certain discourses reliably. What has changed is that computers now get to do it as well, currently with varying degrees of success. For "AI" to "destroy institutions," in other words, for it to do someone's bidding to some undesirable end, something in the structure of those institutions must allow it in the first place! If it turns out that AI can help illuminate such things, as all good tools in the philosophy of language do, then we're in luck, and there's hope for better institutions.