hkt 5 days ago

It seems plausible enough that they're trying to squeeze as much out of their hardware as possible and getting the balance wrong. As hardware capable of running local LLMs gets cheaper and local models improve, this will become less prevalent: running your own model will become more widespread, probably killing this kind of service outside of enterprise. Even if it doesn't kill the service, it'll be _considerably_ better to operate your own, since you have control over what is actually running.

On that note, I strongly recommend qwen3:4b. It is _bonkers_ how good it is, especially considering how relatively tiny it is.
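For anyone wanting to try it, here's a minimal setup sketch assuming Ollama as the local runner (the `qwen3:4b` tag follows Ollama's model naming; adjust for your runner of choice):

```shell
# Assumes Ollama is already installed and its daemon is running.
# Download the ~4B-parameter Qwen3 model locally:
ollama pull qwen3:4b

# Start an interactive chat session with it:
ollama run qwen3:4b

# Or pass a one-off prompt directly:
ollama run qwen3:4b "Summarize the tradeoffs of running LLMs locally."
```

At 4B parameters the model fits comfortably in RAM on most recent laptops, which is part of why it's such a striking result.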

j45 5 days ago

Thanks. Mind sharing which kinds of Claude tasks you are able to run on qwen3:4b?