beeflet 19 hours ago

And many of them so unexpected, given the unusual nature of their intelligence emerging from language prediction. They excel wherever you need to digest or produce massive amounts of text. They can synthesize some pretty impressive solutions from pre-existing stuff. Hell, I use them like a thesaurus to suss out words or phrases that are new or on the tip of my tongue. They have a great hold on the general corpus of information, much better than any search engine (even before the internet was cluttered with their output). It's much easier to find concrete words for what you're looking for through an indirect search via an LLM. The fact that, say, a 32GB model seemingly holds approximate knowledge of everything implies some unexplored relationship between intelligence and compression.

What can't they do? Pretty much anything reliably or unsupervised. But then again, who can?

They also tend to fail creatively, given that they synthesize existing ideas. And with things involving physical intuition. And with tasks involving meta-knowledge of their own tokens (like asking them how long a given word is). And they tend to yap too much for my liking (perhaps this could be fixed with an additional thinking stage to increase terseness before reporting to the user).
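The token meta-knowledge failure follows from how the input is encoded: the model receives subword IDs, not characters, so a word's letter count is never directly visible to it. A toy sketch of the mismatch, using a made-up three-entry vocabulary (the vocabulary and greedy matcher here are illustrative assumptions, not any real model's tokenizer):

```python
# Hypothetical vocabulary: an LLM sees opaque subword units, not letters.
vocab = {"straw": 0, "berry": 1, "mat": 2}

def tokenize(word, vocab):
    """Greedy longest-match tokenizer over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

pieces = tokenize("strawberry", vocab)
print(pieces)             # ['straw', 'berry'] -- two opaque units
print(len(pieces))        # 2: what the model actually "sees"...
print(len("strawberry"))  # 10: the character count it is asked about
```

From the model's side, "strawberry" is two symbols, so answering "how many letters?" requires memorized spelling knowledge rather than simple inspection.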

saalweachter 18 hours ago | parent | next [-]

My current way of thinking about LLMs is "an echo of human intelligence embedded in language".

It's kind of like in those sci-fi or fantasy stories where someone dies and what's left behind as a ghost in the ether or the machine isn't actually them; it's just an echo, a shallow, incomplete copy.

Lwerewolf 18 hours ago | parent | next [-]

Just dust and echoes.

(:

cgg23 6 hours ago | parent | prev [-]

Residue ;)

mikestorrent 3 hours ago | parent | prev [-]

> some unexplored relationship between intelligence and compression.

I don't think it's unexplored at all; this is basically what information theory is all about. At some level, it becomes incompressible...
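The information-theory point is easy to see directly: structured text compresses dramatically, while random bytes (already at maximum entropy) do not, and a good predictor and a good compressor are two views of the same thing. A minimal sketch with Python's standard zlib:

```python
import os
import zlib

# Highly structured input: a repeated sentence a predictor could model.
predictable = b"the cat sat on the mat. " * 100
# Same length of random bytes: no structure left to exploit.
random_data = os.urandom(len(predictable))

print(len(predictable))                  # 2400 bytes of input
print(len(zlib.compress(predictable)))   # tiny: the structure was modeled away
print(len(zlib.compress(random_data)))   # roughly input-sized: incompressible
```

The better your model of the data, the fewer bits you need to encode it; in that sense a model that "holds approximate knowledge of everything" in 32GB is doing exactly this kind of lossy compression.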