ml_basics 4 hours ago

this will change as inference demand increases (which is happening right now faster than many people expected)

ainch 33 minutes ago | parent | next [-]

At the same time, the training paradigm being scaled, Reinforcement Learning, is significantly less data-efficient than next-token prediction. You basically need to run an agent for minutes (or longer if you want good long-horizon performance), only to give it a binary pass/fail at the end - one bit of information.
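Back-of-the-envelope version of that data-efficiency gap, assuming a hypothetical 1000-token rollout and a 50k vocabulary (both numbers made up for illustration): next-token prediction supervises every token, while a pass/fail reward supervises the whole episode with at most one bit.

```python
import math

def next_token_bits(num_tokens: int, vocab_size: int) -> float:
    # Each token label carries up to log2(vocab_size) bits of supervision.
    return num_tokens * math.log2(vocab_size)

def rl_binary_bits(num_episodes: int) -> float:
    # A binary pass/fail reward carries at most 1 bit per episode.
    return float(num_episodes)

# Hypothetical numbers: a 1000-token rollout scored pass/fail vs.
# the same 1000 tokens used as next-token targets (50k vocab).
print(next_token_bits(1000, 50_000))  # ~15600 bits of supervision
print(rl_binary_bits(1))              # 1.0 bit for the whole rollout
```

Real RL setups often get more than one bit (shaped rewards, multiple rollouts per prompt), but the rough orders of magnitude are the point.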

Inference compute is definitely scaling fast, but to scale RL, training and R&D compute also need to scale hard. I don't think it's obvious that inference will overtake R&D/training compute, unless there's a reputable source that says so.

vb-8448 3 hours ago | parent | prev [-]

do you have a ref?