| ▲ | alwayseasy 4 hours ago |
| Note this is the abstract, so please let's not debate just the abstract... The link to download the paper is here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623 |
|
| ▲ | rpdillon 4 hours ago | parent | next [-] |
| I already debated this on HN when it was posted two days ago, but this paper is a draft and is not peer-reviewed. The examples it uses of DOGE and of the FDA using AI are not well researched or cited. Just as an example, the authors criticize the FDA for using an AI that can hallucinate whole studies, but they don't mention that it's used for product recalls, and the source they cite for this criticism is an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources who were disgruntled employees who had since left the agency. Basically, what I'm saying is that the more you dig into this paper, the more you realize it's an opinion piece. |
| |
| ▲ | tucnak 4 hours ago | parent | next [-] | | Not only is it an opinion piece disguised as a scientific "Article" with a veneer of law, it has all the hallmarks of quackery: flowery language full of allegory and poetic comparisons, and hundreds of superficial references from every area imaginable sprinkled throughout, including but not limited to Medium blog posts, news outlets, IBM one-page explainers, and random sociology literature from the '40s, '60s, and '80s. It reads like a trademark attorney turned academic got himself interested in "data" and "privacy," wrote a book about it in 2018, proceeded to be informed on the subject of AI almost exclusively by journalists from popular media outlets like Wired/Engadget/Atlantic, and brought it all together by shoddily referencing his peers at Harvard and curious-sounding '80s sociology. But who cares, as long as AI bad, am I right? | | |
| ▲ | 46493168 3 hours ago | parent [-] | | Are there any particular points you want to refute? | | |
| ▲ | tucnak 2 hours ago | parent [-] | | I'm finding it hard to identify any particulars in this piece, considering the largely self-defeating manner in which the arguments are presented, or should I say compiled, from popular media. Had it not been endorsed by Stanford in some capacity and sensationalised by means of a punchy headline, we wouldn't be having this conversation in the first place! Now, much has been said about the various purported externalities of LLM technology, and continues to be said daily, here in Hacker News comments if not elsewhere. Between wannabe ethicists and LessWrong types contemplating the meaning of the word "intelligence," we're in no short supply of opinions on AI. If you'd like to hear mine, I happen to think that LLM technology is the most important thing, arguably the only thing, to have happened in philosophy since Wittgenstein; indeed, Wittgenstein presents the only viable framework for comprehending AI in all of the humanities. Partly because that's what an LLM "does": compute arbitrary discourses. And partly because that is what all good humanities end up doing: examining arbitrary discourses, not unlike the current affairs cited in the opinion piece at hand, for the arguments they present and, ultimately, the language used to construct those arguments. If we're going to be concerned with AI like that, we should start by making an effort to avoid the kinds of language games that allow frivolously substituting "what AI does" for "what people do with AI." This may sound simple, obvious even, but it also happens to be much easier said than done. That is not to say that AI doesn't make a material difference to what people would otherwise do without it, but exactly like all of language it is a tool: a hammer, if you will, that only gains meaning in use. AI is no different in that respect. For the longest time, humans had a monopoly on computing arbitrary discourses. This is why lawyers exist, too: so that we may compute certain discourses reliably. What has changed is that now computers get to do it too, currently with varying degrees of success. For "AI" to "destroy institutions", in other words, for it to do someone's bidding to some undesirable end, something in the structure of said institutions must allow that in the first place! If it so happens that AI can help illuminate these things, as all good tools in the philosophy of language do, then we're in luck, and there's hope for better institutions. |
|
| |
| ▲ | chrisjj 2 hours ago | parent | prev | next [-] | | > an Engadget article covering a CNN article that got the facts wrong, since it relied on anonymous sources who were disgruntled employees who had since left the agency. Disgruntled doesn't mean inaccurate. | | | |
| ▲ | bayindirh 4 hours ago | parent | prev [-] | | This is what drafts are for. It's either a very rough draft with some errors and room for improvement, or a very bad draft sitting on the wrong foundation. Either way, it's an effort, and at the very least the authors will learn what not to do. | | |
| ▲ | yunohn 4 hours ago | parent [-] | | No, that's definitely not what drafts are for. Fundamental issues of the kind pointed out by the parent comment are far too serious to make it into a draft. Drafts are for minor fixes and changes, as per the usual meaning of the word. |
|
|
|
| ▲ | randusername 3 hours ago | parent | prev | next [-] |
> Institutions like higher education, medecine, and law inform the stable and predictable patterns of behavior within organizations such as schools, hospitals, and courts., respectively,, thereby reducing chaos and friction.
Hard to take seriously with so many misspellings and duplicate punctuation. I vibe with the general "AI is bad for society" tone, but this argument feels a lot to me like "piracy is bad for the film industry" in that there is no recognition of why it holds an understandable appeal for the masses, not just for cartoon villains. Institutions bear some responsibility for what makes AI so attractive. Institutional trust is low in the US right now; journalism, medicine, education, and government have not been living up to their ideals. I can't fault anyone for asking AI medical questions when finding good, personalized healthcare is so complex and expensive, or for learning new things from AI when access to an education taught by experts is so costly and selective. |
| |
| ▲ | omnicognate 3 hours ago | parent [-] | | > Hard to take seriously with so many misspellings and duplicate punctuation. Very bad writing, too, with unnecessarily complicated constructions and big words seemingly used without a proper understanding of what they mean (machinations, affordances). |
|
|
| ▲ | njhnjh 4 hours ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | bayindirh 4 hours ago | parent | next [-] | | It's funny how many of us know the shortcomings of AI, yet we can't be bothered to do the thing ourselves and read, or at least skim, an in-depth research paper to increase our depth, even if only to disagree with it or find its flaws. Paradox of the century. P.S.: Using ChatGPT to summarize something you don't bother to skim, while claiming AI is a scam, is the cherry on top. | |
| ▲ | rpdillon 4 hours ago | parent | prev | next [-] | | I read the entire paper a couple of days ago and have done a lot of work to critique it, because I think it is flawed in several ways. Ironically, this AI summary is actually quite accurate. You're getting downvoted because posting AI output is not condoned, but that doesn't mean it's incorrect in this case. | | |
| ▲ | Sharlin 4 hours ago | parent [-] | | They're getting downvoted because, without even taking a look at the paper, they felt that "please create a summary of the stupid, bad faith, idiot, fake science paper" was a reasonable way to ask for a summary. | | |
| ▲ | rpdillon 4 hours ago | parent [-] | | Okay, but does anybody care that the linked paper doesn't substantiate its central claim with empirical evidence? |
|
| |
| ▲ | cybersquare 4 hours ago | parent | prev [-] | | It's ok, I'll just read the AI summary... |
|