| ▲ | influx 3 hours ago |
| At what point do these companies stop releasing models and just use them to bootstrap AGI for themselves? |
|
| ▲ | conradkay 3 hours ago | parent | next [-] |
| Plausibly now. "As we wrote in the Project Glasswing announcement, we do not plan to make Mythos Preview generally available" |
|
| ▲ | margorczynski 27 minutes ago | parent | prev | next [-] |
| I think it's naive to think a government (most probably the US or China) will just let some random company control something so powerful and dangerous. |
|
| ▲ | vatsachak 3 hours ago | parent | prev | next [-] |
| When the benchmarks actually mean something |
|
| ▲ | orphea 2 hours ago | parent | prev | next [-] |
| Can LLMs be AGI at all? |
| |
| ▲ | small_model 8 minutes ago | parent | next [-] | | What can a SOTA LLM not answer that the average person can? It's already more intelligent than any polymath that ever existed; it just lacks motivation and agency. | |
| ▲ | dgellow 26 minutes ago | parent | prev | next [-] | | My understanding is no. But the definition of AGI isn’t that well defined and has been evolving, making the assessment pretty much impossible | |
| ▲ | bornfreddy 2 hours ago | parent | prev | next [-] | | Good question. I would guess no - but it could help you build one. Am I mistaken? | | |
| ▲ | bogzz an hour ago | parent | next [-] | | They could help you build an AGI if someone else has already built AGI and published it on GitHub. | | |
| ▲ | unshavedyak 8 minutes ago | parent [-] | | I see this statement all the time and it's just strange to me. Yes, LLMs struggle to form unique ideas - but so do we. Most advancements in human history are incremental, built on the shoulders of millions of other incremental advancements. What I don't understand is how we quantify our own ability to create something truly and uniquely novel. We're discussing LLMs' inability to do that, yet I don't feel I have a firm grasp on what we even possess there. When pressed, I imagine many folks would immediately claim they can create something never done before - some weird random behavior or noise or drawing or whatever. However, many times it's just adjacent to existing norms, or constrained by the inversion of not matching existing norms. In a lot of cases our incremental novelties feel, to some degree, inevitable. As the foundations of advancement get closer to the new thing being developed, it becomes obvious at times. I suspect this form of novelty is a thing LLMs are capable of. So for me the real question is: at what point is an innovation so far ahead that it doesn't feel like the natural next step? And of course, are LLMs capable of that? I suspect for humans this level of true innovation is effectively random - a genius being more likely to make these "random" connections because they have more data to connect with. But nonetheless random, as ideas of this nature often come without explanation if not built on the backs of prior art. So yeah... thoughts? |
| |
| ▲ | nothinkjustai 2 hours ago | parent | prev [-] | | No, I think that's accurate. They seem more like an oracle to me. Or as someone put it here, it's a vectorization of (most? all?) human knowledge, which we can replay in various permutations. |
| |
| ▲ | wslh 39 minutes ago | parent | prev | next [-] | | LLMs and human intelligence overlap, but they are not the same. What LLMs show is that we don't need AGI to be impressed. For example, LLMs are not good at playing games such as Go [1]. [1] https://arxiv.org/abs/2601.16447 | |
| ▲ | MattRix 42 minutes ago | parent | prev [-] | | I don't see why not, especially with computer use and vision capabilities. Are you talking about their lack of physical embodiment? AGI is about cognitive ability, not physical. Think of someone like Stephen Hawking, an example of having extraordinary general intelligence despite severe physical limitations. |
|
|
| ▲ | mofeien 3 hours ago | parent | prev | next [-] |
| Fictional timeline that holds up pretty well so far: https://ai-2027.com/ |
|
| ▲ | MadnessASAP 2 hours ago | parent | prev | next [-] |
| I would assume somewhere in both the companies there's a Ralph loop running with the prompt "Make AGI". Kinda makes me think of the Infinite Improbability Drive. |
|
| ▲ | sleigh-bells 3 hours ago | parent | prev | next [-] |
| Weird how Claude Code itself is still so buggy (though I get they don't necessarily care) |
| |
| ▲ | tempest_ 36 minutes ago | parent [-] | | It isn't that weird. Just look at the gemini-cli repo. It's a gong show. The issue isn't just that LLMs can sometimes be wrong; it's more that all the existing SDLs were never meant to iterate this quickly. If the system (the code base in this case) is changing rapidly, it increases the probability that any given change will interact poorly with any other given change. No single person working in those code bases can maintain a working understanding of them, because they change so quickly. Thus when someone LGTMs an LLM-generated PR, they likely do not have a great understanding of the impact it is going to have. |
|
|
| ▲ | jcims 3 hours ago | parent | prev | next [-] |
| why_not_both.gif |
|
| ▲ | gaigalas 2 hours ago | parent | prev | next [-] |
| It will arrive in the same DLC as flying cars. |
|
| ▲ | ALittleLight 3 hours ago | parent | prev | next [-] |
| Now, I guess. They aren't releasing this one generally. I assume they are using it internally. |
|
| ▲ | dweekly 3 hours ago | parent | prev [-] |
| I mean, guess why Anthropic is pulling ahead...? One can have one's cake and eat it too. |