| ▲ | parasubvert 5 hours ago |
Someone actually mathed out infinite monkeys at infinite typewriters, and it turns out to be a great example of how misleading probabilities are when dealing with infinity: "Even if every proton in the observable universe (which is estimated at roughly 10^80) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys." Often, events that have probability 1 in the infinite limit are, in practice, safe to treat as probability 0. So no. LLMs are not brute force dummies. We are seeing increasingly emergent behavior in frontier models. |
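For scale, the arithmetic behind exponents like those can be sketched in a few lines. The alphabet size and text length below are my own rough assumptions for illustration, not the figures from the quoted analysis:

```python
# Back-of-envelope sketch: odds of random typing reproducing a fixed text.
# ALPHABET and TEXT_LEN are illustrative assumptions, not the paper's inputs.
import math

ALPHABET = 30          # assumed keys per typewriter (letters plus some punctuation)
TEXT_LEN = 130_000     # rough character count of a long play (assumption)

# P(one attempt succeeds) = ALPHABET ** -TEXT_LEN.
# Work in log10, since the raw float would underflow to 0.
log10_p = -TEXT_LEN * math.log10(ALPHABET)
print(f"one-shot success probability ~ 10^{log10_p:.0f}")

# Even 10^80 monkeys each typing 10^20 keystrokes barely dents that exponent:
monkeys_exp, keystrokes_exp = 80, 20   # base-10 exponents
log10_expected_successes = monkeys_exp + keystrokes_exp + log10_p
print(f"expected successes across all monkeys ~ 10^{log10_expected_successes:.0f}")
```

The point the sketch makes: the deficit is in the exponent, so multiplying the number of monkeys or the available time by any physically plausible factor only shifts the exponent by double digits, against a shortfall in the hundreds of thousands.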
| ▲ | staticassertion 4 hours ago | parent | next |
> So no. LLMs are not brute force dummies. We are seeing increasingly emergent behavior in frontier models. Whoa! That was a leap. "We are seeing ... emergent behavior" does not follow from "it's not brute force." It is unsurprising that an LLM performs better than random; that's the whole point, and it does not imply emergence. |
| ▲ | qsera 4 hours ago | parent | prev |
> We are seeing increasingly emergent behavior in frontier models. What? Did you see one crying? |