Aperocky 2 hours ago

> The AI is coming for that too.

That's where we fundamentally disagree.

Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.

Until the day LLMs are no longer statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently not by adding another 10T to the LLM parameter count.

bel8 2 hours ago | parent | next [-]

> because it is actually a statistical machine with context limit.

So are humans.

Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)

And I argue that current LLMs have surpassed many of my capabilities already.

For example, GPT/Opus can understand and document some ancient legacy project it has never seen before in minutes. It would take me a week or more to do the same, and my report would probably have more mistakes and oversights than the one generated by the LLM.

KalMann 24 minutes ago | parent | next [-]

> So are humans.

AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.

Aperocky an hour ago | parent | prev [-]

We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.

We are much more limited, but we fundamentally work differently. Hence adding more parameters, as certain companies are doing, isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something completely different.

I think it's doable, I just don't believe it's LLMs, and I don't think anyone now knows what it is.

bel8 an hour ago | parent [-]

> We are not pre-trained using the summary of all human knowledge over all of history.

But we are? That's our education system.

The only reason school doesn't try to shove more information into our brains is that we hit bandwidth limits.

KalMann 21 minutes ago | parent [-]

> But we are? That's our education system.

That is not what the education system does; that's an obvious distortion of reality. LLMs are trained over billions of documents to statistically predict the next word in order to gain an understanding of language. They do this statistical processing to mimic humans' natural-language learning ability. And there has been continued evidence of the limitations of this approach in accurately mimicking the totality of human cognition.
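For illustration only, here is a toy version of that statistical next-word prediction: a bigram counter, nothing like a real transformer, with a made-up three-sentence corpus standing in for "billions of documents":

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then predict the statistically most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word` in training data.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Real models replace the counting table with a neural network and the single preceding word with a long context window, but the objective — pick the statistically likely continuation — is the same in spirit.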

itsafarqueue 2 hours ago | parent | prev | next [-]

Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.

Aperocky an hour ago | parent [-]

I fall into the "pessimistic heavy user" camp: I burn thousands of dollars' worth of SOTA tokens monthly, but that just makes me more acutely aware of the limitations, the amount of work I need to do to work around them, and which decisions I should reserve for myself instead of trusting to the LLMs.

coldtea 2 hours ago | parent | prev [-]

>but not all of it, because it is actually a statistical machine with context limit.

And the human mind is not?

KalMann 16 minutes ago | parent | next [-]

I can give you the exact mathematical formula used to statistically optimize the output of a neural network from input examples. Can you do the same for the brain?
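The formula in question is the gradient-descent update, w ← w − lr · dL/dw. A minimal sketch on a deliberately tiny problem (a one-parameter least-squares fit rather than a full network; data and learning rate are illustrative assumptions):

```python
# Gradient descent: repeatedly step the parameter w against the
# gradient of the loss. Toy task: fit y = w * x to data with w_true = 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w, lr = 0.0, 0.05

for _ in range(200):
    # For L = mean((w*x - y)^2), the gradient is mean(2 * (w*x - y) * x).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to 3.0
```

The same update rule, applied to billions of parameters with backpropagation computing the gradients, is the whole of LLM training — which is the point: the optimization procedure is fully specified, while no comparable closed-form account exists for the brain.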

nothinkjustai 2 hours ago | parent | prev [-]

It’s not.