bhl 6 days ago

The moat is people, data, and compute in that order.

It’s not just compute; that has mostly plateaued. What matters now is the quality of data, which experiments to run, and which environments to build.

sigmoid10 6 days ago | parent | next [-]

This "moat" is actually constantly shifting (which is why it isn't really a moat to begin with). Originally, it was all about quality data sources. But that saturated quite some time ago (at least for text). Before RLHF/RLAIF it was primarily a race over who could throw more compute at a model and train longer on the same data. Then it was about who could come up with the best RL approach. Now we're back to who can throw more compute at it, since everyone is once again doing pretty much the same thing. With reasoning we have also opened a second avenue where it's all about who can throw more compute at the model at runtime, and not just during training.

So in the end, it's mostly about compute. The last few years have taught us that any significant algorithmic improvement soon permeates the entire field, no matter who originally invented it. So people are important for finding this stuff, but not for making the most of it. On top of that, I think we are very close to the point where LLMs can compete with humans at algorithmic development itself. Then it will be even more about who can spend more compute, because there will be tons of ideas to evaluate.
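
The "more compute at runtime" idea can be sketched as best-of-N sampling: draw more candidate answers and keep the best one under a verifier. This is a minimal toy sketch, not any lab's actual method; `generate` and `score` are hypothetical stand-ins for a model's sampler and a reward model/verifier.

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for a model sampler: each call yields a
    # candidate answer of varying quality (here, a random suffix).
    return f"answer-{rng.randint(0, 9)}"

def score(candidate: str) -> int:
    # Hypothetical stand-in for a verifier/reward model: here we just
    # read the numeric suffix back as the "quality" of the candidate.
    return int(candidate.split("-")[1])

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    # More runtime compute (larger n) means more candidates to pick
    # from, so the best score can only improve as n grows.
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("2+2=?", n=8))
```

The point of the sketch is only the scaling knob: with a fixed model, spending 8x the sampling compute buys a strictly-no-worse best candidate.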

DrScientist 6 days ago | parent | next [-]

To put that into a scientific context: compute is the capacity to run experiments and generate data (about how best to build models).

However, I do think you are missing an important aspect, and that's people who properly understand important, solvable problems.

i.e. I see quite a bit of "we will solve x with AI" from startups that don't fundamentally understand x.

sigmoid10 6 days ago | parent [-]

> we will solve this x, with AI

You usually see this from startup techbro CEOs who understand neither x nor AI. Those people are already replaceable by AI today. They're the kind of people who think they can query ChatGPT once with "how to create a cutting-edge model" and make millions. But once you go in at the deep end, there are very few people who still have enough technical knowledge to compete with your average modern LLM. Even the Math Olympiad gold-medalist high-flyers at DeepSeek are about to get a run for their money from the next generation. Current AI engineers will shift more and more toward senior architecture and PM roles, because those will be the only ones that matter. But PM and architecture work is already something you could replace today.

bhl 5 days ago | parent | prev [-]

> Originally, it was all about quality data sources.

It still is! Lots of vertical productivity data that would be expensive to acquire manually from humans will be captured by building vertical AI products. Think lawyers, doctors, engineers.

sigmoid10 3 days ago | parent [-]

That's literally what RLAIF has been doing for a while now.
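
The core RLAIF loop can be sketched in a few lines: an AI judge, rather than a human, labels which of two responses is preferred, producing (chosen, rejected) training pairs cheaply. This is a toy illustration only; `ai_judge` is a hypothetical stand-in for a call to a real judge model.

```python
def ai_judge(prompt: str, a: str, b: str) -> str:
    # Hypothetical stand-in for a judge model's preference call.
    # Toy heuristic: prefer the longer, more detailed response.
    return a if len(a) >= len(b) else b

def label_preferences(data):
    # Turn (prompt, response_a, response_b) triples into the
    # (prompt, chosen, rejected) pairs used for preference training.
    pairs = []
    for prompt, a, b in data:
        chosen = ai_judge(prompt, a, b)
        rejected = b if chosen == a else a
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

pairs = label_preferences([("What is RLAIF?", "RL from AI feedback.", "RL.")])
print(pairs[0]["chosen"])
```

Swapping the human labeler for a model is exactly what makes expensive vertical preference data (legal, medical, etc.) cheaper to collect at scale.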

ActionHank 6 days ago | parent | prev [-]

People matter less and less as well.

As more opens up in the OSS and academic space, their knowledge and experience will either be shared, rediscovered, or rendered obsolete.

Also, many of these people are coasting on one or two key discoveries made by a handful of people years ago. When Zuck figures this out, he's gonna be so mad.

bhl 5 days ago | parent [-]

Not all of it will become OSS. Some will become products, and that requires the best people.