▲ | OtomotO 16 hours ago
No, it's overinvestment. And I don't see why most people are (or appear to be) divided into two camps: either it's total shit, or it's the holy grail here to solve all our problems. It's neither. It's a tool. Like a shovel, it's good at some things and, like a shovel, bad at others. E.g. I wouldn't use a shovel to hammer in a nail.

LLMs will NEVER become true AGI. But do they need to? No, of course not!

My biggest problem with LLMs isn't the shit code they produce from time to time (I'm paid to clean up messes anyway); it's the environmental impact of MINDLESSLY using one.

But whatever. People like cults, and anti-cults are cults too.
▲ | dr_dshiv 15 hours ago
Your concern is the environmental impact? Why pick on LLMs rather than Amazon, your local drug store, or a local restaurant, for that matter? Do the calculation: how much LLM use does it take to equal one hamburger's worth of CO2, or the CO2 of commuting to work by car? If my daily LLM environmental impact is comparable to my lunch or my commute, it's really hard to fault, IMO. They aren't building data centers in the rainforest.
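A rough back-of-envelope sketch of that comparison, using assumed ballpark figures that are not from this thread: roughly 3 g CO2e per LLM query and roughly 3 kg CO2e per beef burger. Both numbers are assumptions and vary widely by model, hardware, grid mix, and farming method; the sketch only shows the arithmetic.

    # Back-of-envelope only; both constants below are assumed ballpark
    # figures, not measurements, and real values vary widely.
    GRAMS_CO2E_PER_LLM_QUERY = 3.0        # assumption: ~3 g CO2e per query
    GRAMS_CO2E_PER_BEEF_BURGER = 3000.0   # assumption: ~3 kg CO2e per burger

    queries_per_burger = GRAMS_CO2E_PER_BEEF_BURGER / GRAMS_CO2E_PER_LLM_QUERY
    print(f"~{queries_per_burger:.0f} queries is about one burger's worth of CO2e")
    # Under these assumptions: on the order of a thousand queries per burger.

Swap in your own figures to see how sensitive the comparison is; the conclusion shifts by an order of magnitude or so depending on which estimates you trust.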
▲ | ben_w 15 hours ago
I broadly agree with your point, but would also draw attention to something I've observed:

> LLMs will NEVER become true AGI. But do they need to? No, of course not!

Everyone disagrees about the meaning of each of the three letters in the initialism "AGI", and also about the compound whole, often arguing it means something different from the simple meaning of those words taken separately. Even on this website, "AGI" means anything from "InstructGPT" (the precursor to ChatGPT) to "Biblical God", or, even worse than "God" given this is a tech forum, "can solve provably impossible tasks such as the halting problem".
▲ | TeMPOraL 16 hours ago
There are two different groups with different perspectives on, and relationships to, the "AI hype"; I think we're talking in circles in this subthread because we're talking about different people. See https://news.ycombinator.com/item?id=44208831. Quoting myself (sorry):

> For me, one of the Beneficiaries, the hype seems totally warranted. The capability is there, the possibilities are enormous, the pace of advancement is staggering, and achieving them is realistic. If it takes a few years longer than the Investor group thinks - that's fine with us; it's only a problem for them.
▲ | modo_mario 14 hours ago
> it's the environmental impact of MINDLESSLY using one.

Isn't much of that environmental impact currently from training the models rather than from usage? That's something you could arguably just stop doing one day, once you're satisfied with the progress on that front (which people won't be any time soon, admittedly).

I'm no expert here; it's a genuine question based on what I've heard and read.
▲ | blackoil 13 hours ago
Overinvestment isn't a bug; it's a feature of capitalism. When the dust settles there will be a few trillion-dollar pots, and hundreds of billions are being spent to win one of them. The environmental impact of the GenAI/LLM ecosystem is highly overrated.