oinfoalgo 4 days ago

I don't think it is that surprising.

It will become harder and harder for the average person to gain from newer models.

My 75 year old father loves using Sonnet. Nothing he asks, though, is the kind of question where he could tell that Opus is "better". The answers he gets from the current model are good enough. He is not exactly using it to probe the depths of statistical mechanics.

My father is never going to vibe code anything no matter how good the models get.

I don't think AGI would give meaningfully different answers to the questions he asks.

You have to ask the model something that lets the latest version display its improvements, and I think we can see that is just not on the mind of the average user.

starchild3001 4 days ago

Correct. People claim these models "saturate", yet what saturates faster is our ability to grasp what these models are capable of.

I, for one, cannot evaluate the difference in strength between an IMO-gold and an IMO-bronze model.

Soon coding capabilities might also saturate. It might all become a matter of more compute (~ number of iterations) rather than more precision (~ % chance of getting it right the first time), as the models become lightning fast and gain access to a playground.
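
For a rough sense of that trade-off, here is a minimal sketch in Python with made-up numbers: assuming independent attempts with a per-attempt success rate p, the chance of at least one correct result in k iterations is 1 - (1 - p)^k.

    # Toy illustration of compute vs. precision.
    # p = per-attempt accuracy, k = number of iterations; both are assumed, not measured.

    def pass_at_k(p: float, k: int) -> float:
        """Probability of at least one correct attempt out of k independent tries."""
        return 1.0 - (1.0 - p) ** k

    for p in (0.3, 0.6, 0.9):
        for k in (1, 5, 20):
            print(f"per-attempt accuracy {p:.0%}, {k:2d} iterations -> {pass_at_k(p, k):.1%}")

Under those toy assumptions, even a modest ~30% first-try accuracy crosses 99% after about 20 cheap attempts, which is the sense in which raw iteration speed plus a playground to verify results can substitute for first-try precision.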