p-e-w 5 hours ago

> The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.

But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn the exact underlying rule of its training data (modular arithmetic, in that case) rather than a mere probabilistic interpolation or extrapolation from previous inputs.
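The setup in [1] is small enough to sketch. Here is a minimal, illustrative version (assuming PyTorch; hyperparameters are placeholders, not the paper's exact values): train a tiny transformer on modular addition with strong weight decay, and validation accuracy jumps from chance to near-perfect long after the model has memorized the training split, which is the grokking signature.

```python
import torch
import torch.nn as nn

P = 97  # task: predict (a + b) mod P from the pair (a, b)

# Enumerate every pair once, then split half/half into train and validation.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, val_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

class TinyTransformer(nn.Module):
    def __init__(self, p: int, d: int = 128):
        super().__init__()
        self.embed = nn.Embedding(p, d)
        self.pos = nn.Parameter(torch.zeros(2, d))  # learned positions for the 2 tokens
        layer = nn.TransformerEncoderLayer(d, nhead=4, dim_feedforward=512, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(d, p)

    def forward(self, x):                # x: (batch, 2) integer tokens
        h = self.encoder(self.embed(x) + self.pos)
        return self.head(h.mean(dim=1))  # pool, then classify the sum mod p

model = TinyTransformer(P)
# Strong weight decay is the ingredient commonly credited with inducing grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(50_000):  # the val-accuracy jump arrives long after train accuracy saturates
    batch = train_idx[torch.randint(len(train_idx), (512,))]
    loss = loss_fn(model(pairs[batch]), labels[batch])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1_000 == 0:
        model.eval()
        with torch.no_grad():
            val_acc = (model(pairs[val_idx]).argmax(-1) == labels[val_idx]).float().mean()
        model.train()
        print(f"step {step}: train loss {loss.item():.3f}, val acc {val_acc.item():.3f}")
```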

Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this was achieved with a standard transformer architecture; it didn’t even require any theoretical innovation.

[1] https://arxiv.org/abs/2301.02679

barishnamazov 4 hours ago

I'm a believer that LLMs will keep getting better. But even today (which might or might not be "sufficient" training) they can easily run `rm -rf ~`.

Not that humans can't make these mistakes (in fact, I've nuked my own home directory before), but I don't think it's a problem that guardrails alone can currently solve. I'm looking for innovations (model-side or engineering-side) that do better than letting an agent run code until a goal is seemingly achieved.
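To make the point concrete, the naive guardrail is a denylist check in front of the executor. Everything below is an illustrative sketch, not any real tool's API, and its weakness is the point: `find ~ -delete`, a generated script, or an encoded command all sail straight past the patterns.

```python
import re
import shlex

# Illustrative deny-patterns; any real list would be longer and still incomplete.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-\w*[rf]\w*[rf]\w*\b",  # rm -rf / rm -fr and variants
    r"\bmkfs(\.\w+)?\b",             # reformatting a filesystem
    r"\bdd\b.*\bof=/dev/",           # raw writes to block devices
    r">\s*/dev/sd",                  # shell redirection onto a disk
]

def is_allowed(command: str) -> bool:
    """Reject commands matching known-destructive patterns."""
    return not any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

def run_agent_command(command: str) -> None:
    """Gate an agent-proposed command before it ever reaches a shell."""
    if not is_allowed(command):
        raise PermissionError(f"blocked destructive command: {command!r}")
    # Hand off to a sandboxed executor here; printing stands in for execution.
    print("would execute:", shlex.split(command))

run_agent_command("ls -la")  # passes the check
try:
    run_agent_command("rm -rf ~")
except PermissionError as e:
    print(e)                 # blocked destructive command: 'rm -rf ~'
```

Pattern matching like this only blocks the exact failures you anticipated, which is why sandboxed execution or model-level advances look necessary rather than optional.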

encyclopedism 4 hours ago

LLMs have surpassed being Turing machines? Turing machines now think?

LLMs are a known quantity in that they are an algorithm! Humans are not. PLEASE at the very least grant that the jury is STILL out on what human intelligence actually is; that is, after all, what neuroscience is still figuring out.