roenxi · 2 hours ago
Just because something didn't work out doesn't mean it was a waste, and it isn't particularly clear that the LLM boom was wasted, or that it is over, or that it isn't working. I can't figure out what people mean when they say "AGI" any more; we appear to be past that. We've got something that seems to be general and seems to be more intelligent than an average human. Apparently AGI means a sort of Einstein-Tolstoy-Jesus hybrid that can ride a unicycle, and that is far beyond the reach of most people I know.

Also, if anyone wants to know what a real effort to waste a trillion dollars can buy ... https://costsofwar.watson.brown.edu/
austin-cheney · 2 hours ago
> Just because something didn't work out doesn't mean it was a waste

It's all about scale. If you spend $100 on something that didn't work out, that money wasn't wasted if you learned something amazing. If you spend $1,000,000,000,000 on something that didn't work out, the expectation is that you learn something close to 10,000,000,000x more than from the $100 spend. If the value of the learning is several orders of magnitude less than the level of investment, there is tremendous waste. For example: nobody counts spending a billion dollars on a failed project as value if the only thing learned was how to avoid future paper cuts.
Deegy · 2 hours ago
We currently have human-in-the-loop AGI. While it doesn't seem we can agree on a meaning for AGI, I think a lot of people think of it as an intelligent entity that has 100% agency. Currently we need to direct LLMs from task to task; they don't yet possess full real-world context.

This is why I get confused when people talk about AI replacing jobs. It can replace work, but you still need skilled workers to guide it. To me, this could result in humans being even more valuable to businesses, and in even greater demand for labor. If this is true, individuals need to race to learn how to use AI and use it well.
getnormality · an hour ago
AI capabilities today are jagged, and people look at what they want to see.

Boosters: It can answer PhD-level questions and it helps me a lot with my software projects.

Detractors: It can't learn to do a task it doesn't already know how to do.

Boosters: But it can sometimes do things it otherwise couldn't, if you give it lots of context and instructions.

Detractors: I want it to figure out and retain the context itself, without being given detailed instructions every time, and to do so reliably.

Boosters: But look, in this specific case it sort of does that.

Detractors: But not in my case.

Boosters: You're just using it wrong. There must be something wrong with your prompting strategy or how you manage context.

etc etc etc...
JKCalhoun · 25 minutes ago
I think when people say "AGI" they might mean synthesis [1]. I'm not sure I have seen that yet in LLMs. Someone correct me if I'm wrong.
bryanlarsen · an hour ago
AFAICT "AGI" is a placeholder for peoples fears and hopes for massive change caused by AI. The singularity, massive job displacement, et cetera. None of this is a binary, though. We already have AGI that is superhuman in some ways and subhuman in others. We are already using LLM's to help improve themselves. We already have job displacement. That continuum is going to continue. AI will become more superhuman in some ways, but likely stay subhuman in others. LLM's will help improve themselves. Job displacement will increase. Thus the question is whether this rate of change will be fast or slow. Seems mundane, but it's a big deal. Humans can adapt to slow changes, but not so well to fast ones. Thus AGI is a big deal, even if it's a crap stand in for the things people care about. | |||||||||||||||||
orwin · 2 hours ago
> Just because something didn't work out doesn't mean it was a waste

Here I think it's more about opportunity cost.

> I can't figure out what people mean when they say "AGI" any more, we appear to be past that

What I ask of an AGI is to not hallucinate idiotic stuff. I don't care too much about being bullshitted if the bullshit is logical, but when I ask "fix mypy errors using pydantic", instead of declaring a type for a variable it invents weird algorithms that make no sense and don't work, when the fix would have taken 5 minutes for any average dev (see the sketch at the end of this comment). I mean, Claude 4.5 and Codex have replaced my sed/search-and-replaces, write my sanity tests, write my commit messages, write my migration scripts (and most of my scripts), and make refactoring so easy I now do a refactor every month or so, but if that is AGI, I _really_ wonder what people mean by intelligence.

> Also, if anyone wants to know what a real effort to waste a trillion dollars can buy

100% agree. Please, Altman, Ilya and others, I will happily let you use whatever money you want if that money is taken from war profiteers and warmongers.
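For illustration, a minimal made-up sketch of the kind of fix I mean (the pydantic model, field names, and function are invented, not from a real project): mypy flags an un-annotated empty dict, and the whole fix is one type annotation, not a new algorithm.

    # Hypothetical example of the trivial kind of mypy fix described above.
    # mypy reports: Need type annotation for "totals"
    # The fix is the annotation on `totals`; the surrounding logic stays as-is.
    from pydantic import BaseModel

    class Order(BaseModel):
        item: str
        quantity: int

    def total_quantity(orders: list[Order]) -> int:
        totals: dict[str, int] = {}  # before the fix: totals = {}
        for order in orders:
            totals[order.item] = totals.get(order.item, 0) + order.quantity
        return sum(totals.values())

    # quick sanity check
    print(total_quantity([Order(item="a", quantity=2), Order(item="a", quantity=3)]))  # 5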
embedding-shape · 2 hours ago
> Just because something didn't work out doesn't mean it was a waste

One thing to keep in mind is that most of the people who go around spreading unfounded criticism of LLMs, "Gen-AI" and AI in general aren't usually very deep into computer science, and even less into science itself. In their mind, if someone does an experiment and it doesn't pan out, they'll assume that means "science itself failed", because they literally don't know how research and science work in practice.
pdimitar · an hour ago
Eh, tearing down a straw man is not an impressive argument from you either.

As a counter-point, LLMs still hallucinate an embarrassing amount, and some of the hallucinations are quite hilarious. When that is gone, and they start doing web searches (or have any mechanism that mimics actual research when they don't know something), then the agents will be much closer to whatever most people imagine AGI to be.

Have LLMs learned to say "I don't know" yet?