empath75 | 4 hours ago

It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
|
  dbspin | 4 hours ago

  I'd consider hallucinations a fundamental flaw, one that sets hard limits on the current utility of LLMs in any context.

    SoftTalker | 4 hours ago

    I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where, even if it's not perfect, it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.
|
|
  bigstrat2003 | 4 hours ago

  > Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.

  Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.

    solumunus | 3 hours ago

    That's not my experience at all; I'm getting it done much faster and the quality is on par. It's hard to measure, but as a small business owner it's clear to me that I now require fewer new developers.
|
|
  stingraycharles | 4 hours ago

  You're correct: you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it's about fundamental research. At this point, I consider these AI tools an invaluable asset to my work in the same way that search engines are. They're integrated into how I work. But it takes practice to use them correctly.
    bigstrat2003 | 4 hours ago

    > for some reason HN has an extremely strong anti-AI sentiment

    It's because I've used it and it doesn't come even close to delivering the value that its advocates claim it does. Nothing mysterious about it.

      ToValueFunfetti | 18 minutes ago

      I think what it comes down to is that the advocates making false claims are relatively uncommon on HN. So, for example, I don't know which advocates you're talking about here. I know people exist who say they can vibe-code quality applications with 100k LoC, or that guy at Anthropic who claims that software engineering will be a dead profession in the first half of '26, and I know that these people tend to be the loudest on other platforms. I also know sober-minded people exist who say that LLMs save them a few hours here and there per week: trawling documentation, writing a 200-line SQL script to seed data into a dev db, or finding some off-by-one error in a haystack.

      If my main or only exposure to AI discourse were HN, I would only really be familiar with the latter group, and I would interpret your comment as very biased against AI. Alternatively, you are referring to the latter group and, uh, sorry.
    mrob | 3 hours ago

    There is no scenario where AI is a net benefit. There are three possibilities:

    1. AI does things we can already do, but cheaper and worse. This is the current state of affairs. Things are mostly the same except for the flood of slop driving out quality. My life is moderately worse.

    2. Total victory of capital over labor. This is what the proponents are aiming for. It's disastrous for the >99% of the population who will become economically useless. I can't imagine any kind of universal basic income when the masses can instead be conveniently disposed of with automated killer drones, or whatever else the victors come up with.

    3. Extinction of all biological life. This is what happens if the proponents succeed better than they anticipated. If recursively self-improving ASI pans out, then nobody stands a chance. There are very few goals an ASI can have that aren't better accomplished with everybody dead.

      ToValueFunfetti | 6 minutes ago

      What is the motivation for killing off the population in scenario 2? That's a post-scarcity world where the elites can have everything they want, so what more are they getting out of mass murder? A guilty conscience, potentially for some multiple of human lifespans? Considerably less status and fame? Even if they want to do it for no reason, they'll still be happier if their friends and family are alive and happy, which recurses about 6 times before everybody on the planet is alive and happy.
    rtp4me | 4 hours ago

    My suspicion is that they (HN) are very concerned this technology is pushing hard into their domain expertise, and feel threatened (and rightfully so).

      seanw444 | 3 hours ago

      While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.

      LtWorf | 3 hours ago

      Or they might know better than you. A painful idea.
|
|