threethirtytwo 3 hours ago

I don’t think you’re rational. Part of being able to be unbiased is to see it in yourself.

First of all: nobody knows how LLMs work. Whether the singularity comes or not cannot be rationalized from what we know about LLMs, because we simply don't understand LLMs. This is unequivocal. I am not saying I don't understand LLMs; I'm saying humanity doesn't understand LLMs, in much the same way we don't understand the human brain.

So declaring the singularity imminent or not imminent based on that reasoning alone is irrational.

The only thing we have is the black-box input and output of AI. That input and output is steadily improving every month. It forms a trendline, and the trendline slopes toward singularity. Whether the line actually gets there is an open question, but you have to be borderline delusional to think the whole thing can be explained away because you understand LLMs and transformer architecture. You don't understand LLMs, period. No one does.

project2501a 3 hours ago | parent [-]

> Nobody knows how LLMs work.

I'm sorry, come again?

threethirtytwo 2 hours ago | parent | next [-]

Nobody knows how LLMs work.

Anybody who claims otherwise is making a false claim.

NateEag 2 hours ago | parent | prev | next [-]

I think they meant "Nobody knows why LLMs work."

threethirtytwo 2 hours ago | parent | next [-]

Same thing? The how is not explainable. This is just pedantic. Nobody understands LLMs.

measurablefunc 2 hours ago | parent | prev [-]

Because they encode statistical properties of the training corpus. You might not know why they work, but plenty of people do: they understand the mechanics of approximating probability distributions with parametrized functions, and some of them sell it as a panacea for stupidity and the path to an automated, luxurious communist utopia.
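For what "encoding statistical properties of the training corpus" means at its simplest, here is a toy sketch: a bigram model that estimates next-token probabilities from raw counts. This is obviously not a transformer (an LLM uses a parametrized neural network rather than a count table), and the corpus string is invented for illustration, but the underlying idea of modeling a next-token distribution is the same.

```python
# Toy next-token model: estimate P(next | current) from bigram counts.
# Not an LLM -- just the simplest illustration of fitting a
# probability distribution to a training corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()  # invented corpus

# Count bigram transitions: token -> Counter of tokens that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(token):
    """Return the empirical distribution over tokens following `token`."""
    counts = follows[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```

An LLM replaces the count table with a function of billions of parameters, which is exactly where the interpretability debate in this thread lives: the objective is well understood, the learned internals much less so.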

bdangubic 2 hours ago | parent | prev [-]

nobody can know how something non-deterministic works - by definition