derefr 3 days ago

> Anecdotally moving from model to model I'm not seeing huge changes in many use cases.

Probably because you're mostly exercising the "well-established" behaviors of these models: the ones that have been stable for at least a full model generation now, and that the AI bigcorps are currently happy to keep stable (since they've hit 100% on some previous benchmark for those behaviors, and changing them now would count as a regression on those benchmarks).

Meanwhile, the AI bigcorps are focusing on extending these models' capabilities at the edge/frontier, to get them to do things they can't currently do. (Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model": ever-better domain-specific analysis capabilities, to "logic out" whether a given piece of training data belongs in the corpus for some fine-tune; and domain-specific synthesis capabilities, to procedurally generate unbounded amounts of useful fine-tuning data for specific tasks, à la AlphaZero playing unbounded numbers of Go games against itself to learn from.)
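To make that "model as a tool for enhancing the model" loop concrete, here's a minimal sketch of a curation-plus-synthesis pipeline under my reading of the above; score_relevance() and generate_examples() are hypothetical stand-ins for whatever domain-specific analysis and synthesis models a lab might actually run, not anything any vendor has documented.

    # Hypothetical sketch of a fine-tune corpus pipeline: an "analysis" model
    # filters existing data, a "synthesis" model generates more of it.
    import random

    def score_relevance(example: str, domain: str) -> float:
        """Hypothetical analysis model: how well does this example fit the domain?"""
        return random.random()  # placeholder for a real classifier/judge model

    def generate_examples(domain: str, n: int) -> list[str]:
        """Hypothetical synthesis model: procedurally produce candidate fine-tune data."""
        return [f"[synthetic {domain} problem #{i}]" for i in range(n)]

    def build_finetune_corpus(raw_pool: list[str], domain: str,
                              keep_threshold: float = 0.8,
                              synthetic_count: int = 100) -> list[str]:
        # Analysis pass: "logic out" whether existing data belongs in the corpus.
        kept = [ex for ex in raw_pool if score_relevance(ex, domain) >= keep_threshold]
        # Synthesis pass: AlphaZero-style self-generated data for the same domain.
        kept.extend(generate_examples(domain, synthetic_count))
        return kept

    corpus = build_finetune_corpus(["proof sketch A", "forum post B"], "math-proofs")
    print(len(corpus), "examples in the fine-tune corpus")
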

This means that the models are getting constantly bigger, and that is unsustainable. So, obviously, the plan is to treat this as a transitional bootstrap phase, and come out the other side at a point where the size of the models can be reduced.

IMHO these models will mostly look stable for their established consumer-facing use cases, while slowly expanding TAM "in the background" into new domain-specific use cases (e.g. constructing novel math proofs in iterative cooperation with a prover). Eventually, the sum of those added domain-specific capabilities will turn out to have doubled, all along, as a toolkit these companies were slowly building to "use models to analyze models": one that lets the AI bigcorps apply models to the task of optimizing models down to something that can run with positive-margin OpEx on whatever hardware is available 5+ years down the line.

And then we'll see them turn back to genuinely improving model behavior for consumer use cases, because only at that point will they actually make money by scaling consumer usage, rather than treating consumer usage purely as a marketing loss-leader paid for by the professional usage and ongoing capital investment that that consumer usage inspires.

Workaccount2 3 days ago | parent | next [-]

>Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model"

Last week I put GPT-5 and Gemini 2.5 in a conversation with each other about a topic of GPT-5's choosing. What did it pick?

Improving LLMs.

The conversation was far over my head, but the two seemed to be readily able to get deep into the weeds on it.

I took it as a pretty strong signal that their training sets cover transformer/LLM tech extensively.

temp0826 2 days ago | parent [-]

Like trying to have a lunch conversation with coworkers about anything other than work

StephenHerlihyy 2 days ago | parent | prev | next [-]

My understanding is that models are already merely a confederation of many smaller sub-models being used as "tools" to derive answers. I am surprised that it took us this long to solve the "AI + Microservices = GOLD!" equation.
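A toy illustration of what that "confederation of sub-models as tools" claim could look like mechanically; the sub-models and the crude keyword router here are hypothetical stand-ins for learned components, not how any shipping system actually works.

    # Hypothetical router dispatching queries to specialist sub-models.
    def math_submodel(q: str) -> str:
        return f"[math answer to: {q}]"

    def code_submodel(q: str) -> str:
        return f"[code answer to: {q}]"

    def general_submodel(q: str) -> str:
        return f"[general answer to: {q}]"

    SUBMODELS = {"math": math_submodel, "code": code_submodel, "general": general_submodel}

    def route(query: str) -> str:
        """Crude keyword router standing in for a learned routing model."""
        q = query.lower()
        if any(w in q for w in ("integral", "prove", "equation")):
            return "math"
        if any(w in q for w in ("function", "bug", "compile")):
            return "code"
        return "general"

    def answer(query: str) -> str:
        return SUBMODELS[route(query)](query)

    print(answer("Prove the equation has no integer solutions"))
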

kdmtctl 3 days ago | parent | prev [-]

You have just described a singularity point for this line of business. Which could happen. Or not.

derefr 3 days ago | parent [-]

I wouldn't describe it as a singularity point. I don't mean that they'll get models to design better model architectures, or come up with feature improvements for the inference/training host frameworks, etc.

Instead, I mean that these later-generation models will be able to be fine-tuned to do things like the following:

1. Recognize and discretize "feature circuits" out of the larger model's NN, turning them into explicit algorithms (each representing the fuzzy / incomplete understanding the model learned of some regular digital-logic algorithm) that humans can then simplify into regular code.

2. Expose that code as primitives/intrinsics the inference kernel has access to (e.g. by having output vectors where every odd position represents a primitive operation to be applied before the next attention pass, and every even position represents a parameter for the preceding operation to take).

3. Cut out the original circuits recognized by the discretization model, substituting the simple layer passthrough with calls to these operations.

4. Continue training from there, to collect new, higher-level circuits that use these operations; then extract, burn in, and reference those; and so on.

5. After some amount of this, go back and re-train the model from the beginning with all these gained operations already available from the start, "for effect."
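A minimal toy sketch of step 2 as I read it (my own illustration, not anything any lab has published): a registry of primitives recovered from discretized circuits, plus an inference step that decodes (op, param) pairs from an output vector, where odd positions (1-indexed) name an operation and even positions carry its parameter, and applies them to the hidden state before the next attention pass.

    # Hypothetical "burned-in intrinsics" inference step.
    import numpy as np

    # Primitives recovered by simplifying discretized circuits into ordinary code.
    PRIMITIVES = {
        0: lambda h, p: h,                 # no-op (the passthrough case)
        1: lambda h, p: np.maximum(h, p),  # learned thresholding circuit
        2: lambda h, p: h * p,             # learned gain/gating circuit
        3: lambda h, p: np.roll(h, int(p)) # learned positional-shift circuit
    }

    def apply_intrinsics(hidden, op_vector):
        """Decode (op, param) pairs: odd positions (1-indexed) are op ids,
        even positions are the parameter for the preceding op."""
        for i in range(0, len(op_vector) - 1, 2):
            op_id = int(op_vector[i]) % len(PRIMITIVES)
            param = op_vector[i + 1]
            hidden = PRIMITIVES[op_id](hidden, param)
        return hidden

    def attention_pass(hidden):
        """Stand-in for a real attention layer; here just a fixed mixing matrix."""
        d = hidden.shape[-1]
        return hidden @ np.full((d, d), 1.0 / d)

    # One "layer" of inference: the previous pass produced op_vector (in a real
    # system, via the model's output head); the extracted circuits it replaces
    # are simply gone from the weights.
    hidden = np.random.default_rng(0).standard_normal(8)
    op_vector = np.array([1.0, 0.0, 2.0, 1.5])  # threshold at 0, then scale by 1.5
    hidden = attention_pass(apply_intrinsics(hidden, op_vector))
    print(hidden)
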

Note that human ingenuity is still required at several places in this loop; you can't make a model do this kind of recursive accelerator derivation to itself without any cross-checking, and still expect to get a good result out the other end. (You could, if you could take the accumulated intuition and experience of an ISA designer that guides them to pick the set of CISC instructions to actually increase FLOPS-per-watt rather than just "pushing food around on the plate" — but long explanations or arguments about ISA design, aren't the type of thing that makes it onto the public Internet; and even if they did, there just aren't enough ISAs that have ever been designed for a brute-force learner like an LLM to actually learn any lessons from such discussions. You'd need a type of agent that can make good inferences from far less training data — which is, for now, a human.)