throwaway27448 5 hours ago

Even at orders of magnitude greater speed, we've still hit diminishing returns in output quality. We simply haven't found anything like superhuman reasoning ability, just (potentially) superhuman reasoning speed.

LarsDu88 3 hours ago | parent | next [-]

I disagree with this. Reinforcement learning with verifiable rewards (RLVR) is the secret sauce that is letting Claude and GPT automate software engineering tasks.

All the easily verifiable domains, such as mathematics, coding, and anything that can be run inside a reasonable simulation, are falling very fast.

By next year if not sooner, mathematicians will be wildly outpaced by LLMs for reasoning.

Alex_L_Wood an hour ago | parent | next [-]

Coding is anything but “easily” verifiable.

LarsDu88 24 minutes ago | parent [-]

It's extremely verifiable. The reinforcement finetuning strategy I'm referring to involves an LLM creating coding tasks with an expected output, implementing the code, and then having a compiler (or an interpreter, for languages like Python) succeed or fail to run it; the output is then compared against the expected output. The verification step (run the interpreter, run the test) takes seconds, so one can generate millions of training examples like this essentially for free, and there is extensive research showing that, with the right policy, an agent can learn to reason, first as well as a human and in many cases better.
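The verification step described above can be sketched in a few lines. This is a minimal illustration, not any lab's actual pipeline: a candidate program is run in a fresh interpreter subprocess, and the reward is binary, 1.0 if it runs cleanly and prints the expected output, 0.0 on a crash, timeout, or wrong answer. The `verify` function and the example tasks are my own invention for illustration.

```python
import subprocess
import sys

def verify(candidate_code: str, expected_output: str, timeout: float = 5.0) -> float:
    """Binary verifiable reward: run the candidate in a subprocess and
    compare its stdout to the expected output."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", candidate_code],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return 0.0  # hung or too slow
    if result.returncode != 0:
        return 0.0  # syntax error or runtime crash
    return 1.0 if result.stdout.strip() == expected_output.strip() else 0.0

# Toy task: "print the sum of [1, 2, 3]", expected output "6"
good = "print(sum([1, 2, 3]))"
bad = "print(sum([1, 2, 3]) + 1)"
print(verify(good, "6"))  # rewarded
print(verify(bad, "6"))   # not rewarded
```

The point of the comment holds in this sketch: the check is fully automatic and takes seconds, so task/solution pairs can be generated and scored at scale without human labeling.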

energy123 4 hours ago | parent | prev | next [-]

It's not that easy to assess diminishing returns using saturated benchmarks, where asymptoting to 100% is mathematically baked in. I could point to the number of Erdős problems solved by AI going from zero to many very recently as evidence of acceleration.
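The "baked in" asymptote can be made concrete with a toy calculation (the numbers are assumed for illustration, not real benchmark data): even if each model generation steadily halves its error rate, the benchmark score's per-generation gain must shrink, because the score is capped at 100%.

```python
# Assume each generation halves the error rate: steady exponential progress.
errors = [0.5 / 2 ** g for g in range(6)]      # 50%, 25%, 12.5%, ...
scores = [1.0 - e for e in errors]             # benchmark score per generation
gains = [b - a for a, b in zip(scores, scores[1:])]

print([round(s, 4) for s in scores])  # scores climb toward 1.0
print([round(g, 4) for g in gains])   # but measured gains keep shrinking
```

So a plateauing benchmark curve is consistent with constant (or even accelerating) underlying improvement, which is why saturated benchmarks are a poor instrument for the diminishing-returns question.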

throwaway27448 2 hours ago | parent [-]

That is not evidence of acceleration, just of measurable improvement over a previous model. After all, humans have been making breakthroughs like these since before recorded history, and that by itself never implied accelerating intelligence.

horsawlarway 4 hours ago | parent | prev [-]

Possibly - but we've also seen that spending more tokens on a task can improve the quality of the output (reasoning, CoT, etc).

So it's not impossible for things that seem orthogonal, like generation speed or context length, to have an impact on the quality of the result.