| ▲ | BrenBarn 21 hours ago |
| It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI." It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI. To me it's like someone listing off deaths from fentanyl, how it's destroyed families, ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the ways that people use and are drawn into AI have all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs, but "distribute them for free to everyone on the internet" is not among them. |
|
| ▲ | ketzo 21 hours ago | parent | next [-] |
| It’s already becoming politicized, in the lowercase-p sense of the word. One is assumed to be either pro- or anti-AI, and so you gotta do your best to signal to the reader where you lie. |
| |
| ▲ | ZYbCRq22HbJ2y7 21 hours ago | parent [-] | | > so you gotta do your best to signal to the reader where you lie Or what? | | |
▲ | brain5ide 20 hours ago | parent [-] | | Or the reader will put you into a category yourself and won't be willing to look at the essence of the argument. I'd say the better word for that is polarising rather than political, but they're synonyms these days. |
|
|
|
| ▲ | overgard 21 hours ago | parent | prev | next [-] |
| Well I mean, nitpick, but fentanyl is a useful medication in the right context. It's not inherently evil. I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. I'm deeply concerned that our technocrats are running full speed at AGI with zero plan for what happens if it "disrupts" 50% of jobs in a shockingly short period of time, or worse. (There's some evidence the new tariff policies were generated with LLMs, so it's probably already making policy. But it could be worse: what happens when bad actors start using these things to intentionally gaslight the population?) But I actually think AI (not AGI) as an assistant can be helpful. |
| |
▲ | Terr_ 20 hours ago | parent | next [-] | | > I think my biggest concern with AI is its biggest proponents have the least wisdom imaginable. [...] (not AGI) Speaking of Wisdom and a different "AGI", I think there's an old Dungeons and Dragons joke that can be reworked here: Intelligence is knowing that an LLM uses vector embeddings of tokens. Wisdom is knowing LLMs shouldn't be used for business rules. | |
▲ | brain5ide 20 hours ago | parent | prev | next [-] | | Are we talking about structural things or about individual perspective things? At the individual perspective, AI is useful as a helper for your generative tasks. I'd argue against analytic tasks, but YMMV. At the societal perspective, you as an individual cannot trust anything society has produced, because it's likely some AI-generated bullshit. Some time ago, if you did not trust a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible dimension, and your ability to build a conclusion has been ripped away. | | |
| ▲ | walterbell 20 hours ago | parent [-] | | > build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner A few thousand years of pre-LLM primary sources remain available for evaluation by humans and LLMs. | | |
| ▲ | coryrc 20 hours ago | parent | next [-] | | You and I remember pre-AI famous works. "Hey, I'm pretty sure Odysseus took a long time to get home". Somebody goes and prints 50 different AI-generated versions of the _Odyssey_, how are future generations supposed to know which is real and which is fake? | | |
| ▲ | walterbell 19 hours ago | parent | next [-] | | > how are future generations supposed to know which is real Reality/truth/history has always been an expensive pursuit in the face of evolving pollutants. | | |
| ▲ | coryrc 6 hours ago | parent [-] | | That's definitely true. History has been thoroughly manufactured by humans. Naively, I thought the storage of computers might preserve first-hand accounts forever; it might, but it might not be discernible. |
| |
| ▲ | noosphr 19 hours ago | parent | prev [-] | | This is literally how the Odyssey was passed down for the 2000 years before the printing press was invented. Every work had multiple versions. All versions were different. Some versions were diametrically opposed to others. Have a look at Bible scholarship to see just _how_ divergent texts can become by nothing more than scribe errors. | | |
| ▲ | coryrc 6 hours ago | parent | next [-] | | They were real because they were made by people all along. Now you can't tell. I think you're right my analogy is imperfect. I'm only human (or am I? :P) | |
| ▲ | samtheprogram 19 hours ago | parent | prev [-] | | 99.9999999% sure that was their point? Why else would they bring up that particular work? | | |
| ▲ | burnished 6 hours ago | parent [-] | | Because they thought it was an ancient and unchanging text. | | |
▲ | coryrc 5 hours ago | parent [-] | | No, but it was a bad example because I was thinking only of the authorship point of view. A better example would have been the complaint tablet to Ea-nāṣir. We're pretty sure it's real; there might still be people alive who remember it being discovered. But in a hundred years, after people with gen AI have created museums of plausible fake artifacts, can future people be sure? A good fraction of the US population today believes wildly untrue things about events happening in real time! |
|
|
|
| |
| ▲ | namaria 18 hours ago | parent | prev [-] | | I know how to swim yet a riptide can still pull me out to sea | | |
|
| |
▲ | spooky_action 17 hours ago | parent | prev | next [-] | | What evidence is there that tariff policy was LLM generated? | | |
| ▲ | calcifer 16 hours ago | parent | next [-] | | There are uninhabited islands on the list. | | |
▲ | KoolKat23 15 hours ago | parent [-] | | Despite people's ridicule this is normal practice; it prevents loopholes from being exploited. | | |
| ▲ | mr_toad 14 hours ago | parent [-] | | It seems more likely that bad data was involved. There are actually export statistics (obviously errors, possibly fraud) for these islands. Someone probably stuck the numbers in a formula without digging a little deeper. | | |
|
| |
| ▲ | af78 16 hours ago | parent | prev [-] | | There are people who asked several AI engines (ChatGPT, Grok etc.) “what should the tariff policy be to bring the trade balance to zero?” (quoting from memory) and the answer was the formula used by the Trump administration. If I find the references I will post them as a follow-up. Russia, North Korea and a handful of other countries were spared, likely because they sided with the US and Russia at the UN General Assembly on Feb 24 of this year, in voting against “Advancing a comprehensive, just and lasting peace in Ukraine.” https://digitallibrary.un.org/record/4076672 EDIT: Found it: https://nitter.net/krishnanrohit/status/1907587352157106292 Also discussed here: https://www.latintimes.com/trump-accused-using-chatgpt-creat... The theory was first floated by Destiny, a popular political commentator. He accused the administration of using ChatGPT to calculate the tariffs the U.S. is charged by other countries, "which is why the tariffs make absolutely no fucking sense." "They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater," Destiny, who goes by @TheOmniLiberal on X, shared in a post on Wednesday. > I think they asked ChatGPT to calculate the tariffs from other countries, which is why the tariffs make absolutely no fucking sense. > They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater. https://t.co/Rc45V7qxHl pic.twitter.com/SUu2syKbHS > — Destiny | Steven Bonnell II (@TheOmniLiberal) April 2, 2025 He attached a screenshot of his exchange with the AI bot. He started by asking ChatGPT, "What would be an easy way to calculate the tariffs that should be imposed on other countries so that the US is on even-playing fields when it comes to trade deficit? Set minimum at 10%."
"To calculate tariffs that help level the playing field in terms of trade deficits (with a minimum tariff of 10%), you can use a proportional tariff formula based on the trade deficit with each country. The idea is to impose higher tariffs on countries with which the U.S. has larger trade deficits, thus incentivizing more balanced trade," the bot responded, along with a formula to use. John Aravosis, an influencer with a background in law and journalism, shared a TikTok video that outlined how each tariff was calculated: by essentially taking the U.S. trade deficit with the country divided by the total imports from that country to the U.S. "Guys, they're setting U.S. trade policy based on a bad ChatGPT question that got it totally wrong. That's how we're doing trade war with the world," Aravosis proclaimed before adding the stock market is "totally crashing." |
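The arithmetic described in the quoted posts is simple enough to sketch. This is an illustrative reconstruction of the formula as Destiny describes it (bilateral trade deficit divided by imports, with a 10% floor), not any official methodology, and the dollar figures below are made up:

```python
def reciprocal_tariff(imports_from_country: float, exports_to_country: float) -> float:
    """Tariff rate per the formula described above: the bilateral trade
    deficit divided by imports from that country, floored at 10%.
    Inputs are dollar totals; the result is a fraction (0.6 = 60%)."""
    deficit = imports_from_country - exports_to_country
    return max(0.10, deficit / imports_from_country)

# Made-up numbers: $100B imported, $40B exported -> 60% rate.
print(reciprocal_tariff(100e9, 40e9))
# Near-balanced trade (or a surplus) hits the 10% floor.
print(reciprocal_tariff(100e9, 95e9))
```

Note the formula never looks at actual tariff or non-tariff barriers, which is the crux of the "got it totally wrong" criticism: any country with balanced trade, including an uninhabited island with no trade at all, lands on the 10% floor.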
| |
| ▲ | XorNot 18 hours ago | parent | prev [-] | | Honestly this post seems like misplaced wisdom to me: your concern is the development of AGI displacing jobs and not the numerous reliability problems with the analytic use of AI tools in particular the overestimate of LLM capabilities because they're good at writing pretty prose? If we were headed straight to the AGI era then hey, problem solved - intelligent general machines which can advance towards solutions in a coherent if not human like fashion is one thing but that's not what AI is today. AI today is enormously unreliable and very limited in a dangerous way - namely it looks more capable then it is. |
|
|
| ▲ | croes 20 hours ago | parent | prev | next [-] |
| It’s a rant against the wrong usage of a tool not the tool as such. |
| |
▲ | Turskarama 20 hours ago | parent | next [-] | | It's a tool that promotes incorrect usage though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out. | |
| ▲ | Terr_ 18 hours ago | parent | next [-] | | My personal pet-peeve is how a great majority of people--and too many developers--are being misled into believing a fictional character coincidentally named "Assistant" inside a story-document half-created by an LLM is the author-LLM. If a human generates a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the algorithm "thirsts for the blood of the innocent." The same holds when the story comes from an algorithm, and it continues to hold when story is about a differently-named character named "AI Assistant" who is "helpful". Getting people to fall for this illusion is great news for the companies though, because they can get investor-dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality." | |
▲ | croes 18 hours ago | parent | prev | next [-] | | That's the real danger of AI: the false promises of the AI companies and the false expectations of management and users. I saw it just recently in a data migration, where the users asked if they still needed to enter metadata for documents, since they could just use AI to query data that was usually based on that metadata. They trust AI before it's even there and don't even consider a transition period where they check if the results are correct. As with security, convenience prevails. | |
| ▲ | blackqueeriroh 18 hours ago | parent [-] | | But isn’t this just par for the course with every new technological revolution? “It’ll change everything!” they said, as they continued to put money in their pockets as people were distracted by the shiny object. | | |
| ▲ | croes 12 hours ago | parent [-] | | With every revolution and with every fake revolution. NFTs didn't change much, money changed its owner |
|
| |
| ▲ | xpe 16 hours ago | parent | prev [-] | | > All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ not matter what it spits out. If your LLM + pre-prompt setup sounds confident with every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude. Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles. This is not dismissing the tendency for overconfidence, sycophancy, and more. I'm just sharing some mitigations. | | |
| ▲ | GeoAtreides 16 hours ago | parent | next [-] | | > Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles. Ask on a Wednesday. During a full moon. While in a shipping container. Standing up. Keep a black box on your desk as the sacred GenAI avatar and pray to it. Ask while hopping on one leg. | | | |
| ▲ | Turskarama 12 hours ago | parent | prev [-] | | Here's the root of the problem though, how do you know that the AI is actually "thinking" more carefully, as opposed to just pretending to? The short answer is: you can know for a fact that it _isn't_ thinking more carefully because LLMs don't actually think at all, they just parrot language. LLMs are performing well when they are putting out what you want to hear, which is not necessarily a well thought out answer but rather an answer that LOOKS well thought out. | | |
▲ | xpe 7 hours ago | parent [-] | | 1. I don't think the comment above gets to the "root" of the problem, which is "the LLM appears overconfident". Thankfully, that problem is relatively easy to address by trying different LLMs and different pre-prompts. Like I said, your results might vary. 2. While the question of "is the AI thinking" is interesting, I think it is a malformed question. Think about it: how do you make progress on that question, as stated? My take: it is unanswerable without considerable reframing. It helps to reframe toward something measurable. Here, I would return to the original question: to what degree does an LLM output calibrated claims? How often does it make overconfident claims? Underconfident claims? 3. Pretending requires at least metacognition, if not consciousness. Agree? It is a fascinating question to explore how much metacognition a particular LLM demonstrates. In my view, this is still a research question, both in terms of understanding how LLM architectures work and in designing good evals to test for metacognition. In my experience, when using chain-of-thought, LLMs can be quite good at recognizing previous flaws, including overconfidence, meaning that if one is careful, the LLM behaves as if it has a decent level of metacognition. But to see this, the driver (the human) must demonstrate discipline. I'm skeptical that most people prompt LLMs rigorously and carefully. 4. It helps to discuss this carefully. Word choice matters a lot in AI discussions, much more than even a relatively capable software developer / hacker is comfortable with. Casual phrasings are likely to lead us astray. I'll make a stronger claim: a large fraction of successful tech people haven't yet developed clear language and thinking about discussing classic machine learning, much less AI as a field or LLMs in particular.
But many of these people lack the awareness or mindset to remedy this; they fall into the usual overconfidence or lack-of-curiosity traps. 5. You wrote: "LLMs are performing well when they are putting out what you want to hear." I disagree; instead, I claim people, upon reflection, would prefer an LLM be helpful, useful, and true. This often means correcting mistakes or challenging assumptions. Of course people have short-term failure modes; such is human nature. But when you look at most LLM eval frameworks, you'll see that truth and safety are primary factors. Yes-manning or sycophancy is still a problem. 6. Many of us have seen the "LLMs just parrot language" claim repeated many times. After having read many papers on LLMs, I wouldn't use the words "LLMs just parrot language". Why? That phrase is more likely to confuse discussion than advance it. I recommend this to everyone: instead of using that phrase, challenge yourself to articulate at least two POVs relating to the "LLMs are stochastic parrots" argument.
Discuss with a curious friend or someone you respect. If it is just someone online you don't know, you might simply dismiss them out of hand. The "stochastic parrot" phrase is fun and is a catchy title for an AI researcher who wants to get their paper noticed. But it isn't a great phrase for driving mutual understanding, particularly not on a forum like HN where our LLM foundations vary widely. Having said all this, if you want to engage on the topic at the object level, there are better fora than HN for it. I suggest starting with a literature review and finding an ML or AI-specific forum. 7. There is a lot of confusion and polarization around AI. We are capable of discussing better, but (a) we have to want to; (b) we have to learn how; and (c) we have to make time to do it. Like I wrote in #6, above, be mindful of where you are discussing and the level of understanding of the people around you. I've found HN to be middling on this, but I like to pop in from time to time to see how we're doing. The overconfidence and egos are strong here, arguably stronger than the culture and norms that should help us strive for true understanding. 8. These are my views only. I'm not "on one side", because I reject the false dichotomy that AI-related polarization might suggest. |
|
|
| |
▲ | mike_hearn 18 hours ago | parent | prev [-] | | Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly it reads like the author is attempting to show off or brag by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too. And the fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously if people come to rely on AI more for this kind of thing he will be out of work. It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan too. |
|
|
| ▲ | dragonwriter 21 hours ago | parent | prev | next [-] |
| Because "rant" is irrational, and the author wants to be seen as staking out a rational opposition. Of course, every ranter wants to be seen that way, and so a protest that something isn't a rant against X is generally a sign that it absolutely is a rant against X that the author is pre-emptively defending. |
| |
| ▲ | voxl 21 hours ago | parent | next [-] | | I've rarely read a rant that didn't consist of some good logical points | | |
| ▲ | 20 hours ago | parent | next [-] | | [deleted] | |
| ▲ | croes 20 hours ago | parent | prev [-] | | Doesn‘t mean listing logical points makes it a rant | | |
▲ | throwaway290 18 hours ago | parent [-] | | If the logical points are all against something that is debatable, then it's a rant. They can be good points, though. | |
▲ | croes 18 hours ago | parent [-] | | • Instead of forming hypotheses, users asked the AI for ideas. • Instead of validating sources, they assumed the AI had already done so. • Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on. These are points against certain actions with a tool, not against the tool. AI is for the starting point, not the final result. AI must never be the last step, but it often is, because people trust computers, especially if they answer in a confident language. It's the ELIZA effect all over again. |
|
|
| |
▲ | YetAnotherNick 18 hours ago | parent | prev [-] | | The classic hallmark of a rant is picking some study, not reading the methodology etc., and drawing wild conclusions from it. For example, of one study it says: > The study revealed a clear pattern: the more confidence users had in the AI, the less they thought critically And the study didn't even check that. They just plotted the correlation between how much users think they rely on AI vs how much effort they think they saved. Isn't that expected to be positive even if they think just as critically? [1]: https://www.microsoft.com/en-us/research/wp-content/uploads/... |
|
|
| ▲ | SoftTalker 21 hours ago | parent | prev | next [-] |
| The difference is that between a considered critique and unhinged venting. |
|
| ▲ | aprilthird2021 21 hours ago | parent | prev | next [-] |
The other thing is that the second anyone even perceives an opinion to be "anti-AI", they bombard you with "people thought the printing press lowered intellect too!" Or radio, or TV, or video games, etc. No one ever considers that maybe they all did lower our attention spans and prevent us from learning as well as we used to, and that now we are at a point where we can't afford to keep losing intelligence and attention span. |
| |
| ▲ | mike_hearn 18 hours ago | parent | next [-] | | I think people don't consider that because the usual criticism of television and video games is that people spend too long paying attention to them. One of the famous Greek philosophers complained that books were hurting people's minds because they no longer memorized information, so this kind of complaint is as old as civilization itself. There is no evidence that we would be on Mars by now already if we had never invented books or television. | | |
| ▲ | pasabagi 17 hours ago | parent [-] | | Pluto? Plotto? Platti? Seriously though, that's a horrible bowdlerization of the argument in the Phaedrus. It's actually very subtle and interesting, not just reactionary griping. | | |
| |
| ▲ | nostrebored 19 hours ago | parent | prev [-] | | That’s a much harder claim to prove. The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate? If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through? I suspect that as the problem spaces diverge enough you’ll have two skill sets. Who can solve n problems the fastest and who can determine which k problems require deep thought and narrow direction. Right now we have the same group of people solving both. | | |
▲ | friendzis 17 hours ago | parent [-] | | > The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate? Gell-Mann Amnesia. Attention span limits the amount of information we can process, and with attention spans decreasing, increases in information flow stop having a positive effect. People simply forget what they started with, even if that contradicts previous information. > If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through? You don't end up solving the problem in near constant time; you end up applying the last suggested solution. There's a difference. |
|
|
|
| ▲ | yapyap 18 hours ago | parent | prev | next [-] |
| It’s not a rant against fentanyl, it’s a rant against irresponsible use of fentanyl. Just like this is a rant against irresponsible use of AI. Hope this helps |
| |
|
| ▲ | throwaway894345 21 hours ago | parent | prev | next [-] |
| TFA makes the point pretty clear IMHO: they aren’t opposed to AI, they’re opposed to over-reliance on AI. |
|
| ▲ | EGreg 19 hours ago | parent | prev | next [-] |
| Reminds me of people who say “there is nothing wrong with capitalism but…” You shall not criticize the profit! |
|
| ▲ | TacticalCoder 21 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | woah 21 hours ago | parent | prev | next [-] |
| They have to preface their articles with "This isn’t a rant against AI." because there are a lot of rants against AI out there, such as your comment. |
|
| ▲ | johnisgood 18 hours ago | parent | prev [-] |
| Both substances and AI can be used responsibly. It is not the fault of substances nor AI. People is why we can't have anything nice. It sucks. I have medical reasons to take opioids, but in the eyes of people, I am a junkie. I would not be considered a junkie if I kept popping ibuprofen. It is silly. Opioids do not even make me high to begin with (it is complicated). |
| |
▲ | johnisgood 14 hours ago | parent [-] | | I bet the downvotes are done by people who have absolutely no need to take any medications, or have no clue what it is like to be called a junkie for the rest of your life for taking medications that were prescribed to begin with. Or if not, then what, is it not true that both substances and AI can be used responsibly, and irresponsibly? "People is why we can't have anything nice. It sucks." is also true and applies to many things; just consider vending machines alone, or bags in public (for dog poop) and anything of the sort. We no longer have bags, because people stole them. A great instance of "this is why we can't have nice things". Pretty sure you can think of more. Make the down-votes make sense, please. (I do not care about the down-votes per se, I care about why I am being disagreed with without any responses.) |
|