AYBABTME 2 hours ago

This, right now, is making the case for OSS AI and local inference. $200/mo to get rate limited makes an RTX 6000 Pro look cheap.
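A back-of-envelope break-even calculation makes the comparison concrete. The ~$8,000 card price, 600 W draw, 8 h/day usage, and $0.15/kWh power rate are all assumptions for illustration, not figures from the thread:

```python
# Hedged sketch: months until a local GPU's up-front cost beats a
# $200/mo subscription. All hardware/power figures are assumptions.
def breakeven_months(card_price, monthly_sub, monthly_power_cost):
    """Months to recoup the card's up-front cost from subscription savings."""
    net_monthly_saving = monthly_sub - monthly_power_cost
    return card_price / net_monthly_saving

# 600 W card at ~8 h/day, 30 days, $0.15/kWh -> electricity cost per month
power = 0.6 * 8 * 30 * 0.15          # ~$21.60/mo
months = breakeven_months(8000, 200, power)
print(round(power, 2), round(months, 1))  # ~21.6 ~44.8
```

Under these assumptions the card pays for itself in under four years, ignoring depreciation and resale value, which is exactly the caveat raised in the reply below.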

re-thc 7 minutes ago | parent | next [-]

What’s the depreciation on that RTX 6000 though?

New hardware keeps on coming with large gains in performance.

tmountain 2 hours ago | parent | prev [-]

How well do local OSS models stack up to Claude?

Balinares 25 minutes ago | parent | next [-]

Very well for narrowly scoped purposes.

They decohere much faster as the context grows. Which is fine, or not, depending on whether you consider yourself a software engineer amplifying your output by automating the boilerplate, or an LLM cornac.

sunaookami an hour ago | parent | prev [-]

They don't, except on meaningless benchmarks.