brazukadev a day ago

Today. But what about in 5 years? Would you bet we will be paying hundreds of billions to OpenAI yearly or buying consumer GPUs? I know what I will be doing.

Dilettante_ a day ago | parent | next

But the progress goes both ways: in five years, you would still want to use whatever is running in the cloud datacenters. Just like today you could run GPT-2 locally as a coding agent, but we want the 100x-as-powerful shiny thing.

mcny a day ago | parent | next

That would be great if it were the case, but my understanding is that progress is plateauing. I don't know how much of that is Anthropic / Google / OpenAI holding themselves back to save money and how much is the state of the art genuinely slowing down, though. I can imagine there could be a 64 GB GPU in five years, as absurd as it feels to type that today.

simonw a day ago | parent | next

What gives you the impression the progress is plateauing?

I'm finding the difference just between Sonnet 4 and Sonnet 4.5 to be meaningful in terms of the complexity of tasks I'm willing to use them for.

sebastiennight a day ago | parent | prev

> a 64 GB GPU in five years

Is there a digit missing? I don't understand why this existing in 5 years would be absurd.

mcny 5 hours ago | parent

I meant that it feels absurd to me today, but it will likely happen in five years.

brazukadev a day ago | parent | prev

Not really. For many use cases I'm happy using Qwen3-8B on my computer, and I would be very happy if I could run Qwen3-Coder-30B-A3B.
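
For a sense of what fits in consumer VRAM, here is a rough back-of-the-envelope sketch (plain Python; the bytes-per-weight figures and the 1.2x overhead for KV cache and activations are loose assumptions, not measured numbers):

  # Rough sketch: VRAM needed just to hold model weights at common
  # quantization levels. Parameter counts come from the model names above;
  # the overhead factor is an assumption, not a benchmark.
  MODELS = {
      "Qwen3-8B": 8e9,              # ~8B parameters
      "Qwen3-Coder-30B-A3B": 30e9,  # ~30B total parameters (MoE, ~3B active)
  }
  BYTES_PER_PARAM = {
      "fp16": 2.0,
      "q8":   1.0,   # ~8 bits per weight
      "q4":   0.5,   # ~4 bits per weight
  }
  OVERHEAD = 1.2  # assumed headroom for KV cache and activations

  for name, params in MODELS.items():
      for quant, bpp in BYTES_PER_PARAM.items():
          gib = params * bpp * OVERHEAD / 2**30
          print(f"{name:22s} {quant:5s} ~{gib:5.1f} GiB")

By that arithmetic the 30B model at 4-bit quantization lands around 17 GiB, which is why a hypothetical 64 GB consumer GPU changes what you can comfortably run at home.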

infecto a day ago | parent | prev | next

Paying for compute in the cloud. That's what I am betting on. Multiple providers, different data center players. There may be healthy margins for them, but I would bet it will always be relatively cheaper for me to pay for the compute than to manage it myself.
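
To make that trade-off concrete, a toy break-even calculation; every number here is a placeholder assumption, not a real quote, so swap in actual prices before drawing conclusions:

  # Toy break-even sketch with purely illustrative placeholder prices.
  gpu_purchase = 2500.0        # hypothetical GPU + workstation, USD
  local_power_per_hour = 0.10  # hypothetical electricity cost, USD/h
  cloud_rate_per_hour = 1.50   # hypothetical on-demand GPU rate, USD/h
  hours_per_month = 80         # hours of inference you actually run

  monthly_cloud = cloud_rate_per_hour * hours_per_month
  monthly_local = local_power_per_hour * hours_per_month
  breakeven_months = gpu_purchase / (monthly_cloud - monthly_local)

  print(f"cloud: ${monthly_cloud:.0f}/month")
  print(f"local: ${monthly_local:.0f}/month plus ${gpu_purchase:.0f} upfront")
  print(f"break-even after ~{breakeven_months:.0f} months at {hours_per_month} h/month")

With these made-up numbers the local box pays for itself after roughly two years of light use; heavier usage or cheaper hardware shifts the math toward local, spiky or occasional usage shifts it toward the cloud.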

alfiedotwtf 20 hours ago | parent | prev

Woah, woah, woah. I thought in 5 years' time we would all be out of a job lol