| |
| ▲ | immibis 9 hours ago | parent | next [-] | | > productively used This chart is extremely damning: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... The industry consistently predicts people will do the task quicker with AI. The people who are doing the task predict they'll do it quicker if they can use AI. After doing the task with AI, they predict they did it quicker because they used AI. People who did it without AI predict they could have done it quicker with AI. But they actually measured how long it takes. It turns out, they do it slower if they use AI. This is damning. It's a dopamine machine. It makes you feel good, but with no reality behind it and no work to achieve it. It's no different in this regard from (some) hard drugs. A rat with a lever wired to the pleasure center in its brain keeps pressing that lever until it dies of starvation. (Yes, it's very surprising that you can create this effect without putting chemicals or electrodes in your brain. Social media achieved it first, though.) | |
| ▲ | OtomotO 16 hours ago | parent | prev [-] | | No, it's overinvestment. And I don't see why most people are divided into two groups, or appear to be. Either it's total shit, or it's the holy cup of truth, here to solve all our problems. It's neither. It's a tool. Like a shovel, it's good at some things. And like a shovel, it's bad at other things. E.g. I wouldn't use a shovel to hammer in a nail. LLMs will NEVER become true AGI. But do they need to? No, of course not! My biggest problem with LLMs isn't the shit code they produce from time to time, as I am paid to resolve messes; it's the environmental impact of MINDLESSLY using one. But whatever. People like cults, and anti-cults are cults too. | | |
| ▲ | dr_dshiv 15 hours ago | parent | next [-] | | Your concern is the environmental impact? Why pick on LLMs vs Amazon or your local drug store? Or a local restaurant, for that matter? Do the calculations for how much LLM use is required to equal one hamburger's worth of CO2 — or the CO2 of commuting to work in a car. If my daily LLM environmental impact is comparable to my lunch or going to work, it's really hard to fault, IMO. They aren't building data centers in the rainforest. | | |
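The back-of-envelope calculation this comment suggests can be sketched as below. The figures are rough, commonly cited public estimates, not measurements: per-query LLM emissions in particular vary widely by model and study, so both constants are assumptions for illustration only.

```python
# Back-of-envelope: how many LLM chat queries equal one hamburger of CO2e?
# Both constants are rough assumed estimates, not authoritative figures:
#   - one beef hamburger: ~3.0 kg CO2e (cited ranges span roughly 2-6 kg)
#   - one LLM chat query: ~3 g CO2e (estimates vary widely, ~1-10 g)

HAMBURGER_KG_CO2E = 3.0   # assumed kg CO2e per beef hamburger
QUERY_KG_CO2E = 0.003     # assumed kg CO2e per LLM query

queries_per_hamburger = HAMBURGER_KG_CO2E / QUERY_KG_CO2E
print(f"~{queries_per_hamburger:.0f} queries \u2248 one hamburger of CO2e")
```

Under these assumed numbers, a single burger comes out to roughly a thousand queries; changing either estimate shifts the result proportionally, which is the point of doing the arithmetic rather than arguing in the abstract.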
| ▲ | OtomotO 13 hours ago | parent [-] | | Why do you assume I am not concerned about the other sources of environmental impact? Of course I don't go around posting everything I am concerned about when we are talking about a specific topic. You're aware, though, that because of the AI hype, sustainability programs were cut at all major tech firms? | | |
| ▲ | dr_dshiv 9 hours ago | parent [-] | | It also correlated with the discovery that voluntary carbon credits weren’t sufficient for their environmental marketing. If carbon credits were viewed as valid, I’m pretty sure they would have kept the programs. |
|
| |
| ▲ | ben_w 15 hours ago | parent | prev | next [-] | | I broadly agree with your point, but would also draw attention to something I've observed: > LLMs will NEVER become true AGI. But do they need to? No, of course not! Everyone disagrees about the meaning of each of the three letters of the initialism "AGI", and also disagrees about the compound whole, often arguing it means something different than the simple meaning of those words separately. Even on this website, "AGI" means anything from "InstructGPT" (the precursor to ChatGPT) to "Biblical God" — or, even worse than "God" given this is a tech forum, "can solve provably impossible tasks such as the halting problem". | | |
| ▲ | OtomotO 13 hours ago | parent [-] | | Well, I go by the definition I was brought up with and am not interested in redefining words all the time. A true AGI is basically Skynet or the Basilisk ;-) | | |
| ▲ | ben_w 11 hours ago | parent [-] | | Most of us do the same; but if we're all using different definitions, then no communication is possible. |
|
| |
| ▲ | TeMPOraL 16 hours ago | parent | prev | next [-] | | There are two different groups with different perspectives and relationships to the "AI hype"; I think we're talking in circles in this subthread because we're talking about different people. See https://news.ycombinator.com/item?id=44208831. Quoting myself (sorry): > For me, one of the Beneficiaries, the hype seems totally warranted. The capability is there, the possibilities are enormous, pace of advancement is staggering, and achieving them is realistic. If it takes a few years longer than the Investor group thinks - that's fine with us; it's only a problem for them. | |
| ▲ | modo_mario 14 hours ago | parent | prev | next [-] | | > it's the environmental impact of MINDLESSLY using one. Isn't much of that environmental impact currently from the training of the model rather than from usage? Something you could arguably one day just stop doing, if you're satisfied with the progress on that front (people won't be any time soon, admittedly). I'm no expert on this front. It's a genuine question based on what I've heard and read. | |
| ▲ | blackoil 12 hours ago | parent | prev [-] | | Overinvestment isn't a bug. It is a feature of capitalism. When the dust settles there'll be a few trillion-dollar pots, and hundreds of billions are being spent to get one of them. Environmental impacts of the GenAI/LLM ecosystem are highly overrated. |
|
|