| ▲ | dragonwriter 5 days ago |
> but if inference is this cheap then why aren't there multiple API providers offering models at dirt cheap prices

There are multiple API providers offering models at dirt cheap prices, enough so that there is at least one well-known API provider, an aggregator of other API providers, that offers lots of models at $0.

> The only cheap-ass providers I've seen only run tiny models. Where's my cheap deepseek-R1?

https://openrouter.ai/deepseek/deepseek-r1-0528:free
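That free slug is callable through OpenRouter's OpenAI-compatible API. A minimal sketch, assuming you have a key in the OPENROUTER_API_KEY environment variable (model slug taken from the link above):

    import os
    import requests

    # Call OpenRouter's OpenAI-compatible chat endpoint against the
    # free R1 slug. Assumes OPENROUTER_API_KEY is set.
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "deepseek/deepseek-r1-0528:free",
            "messages": [{"role": "user", "content": "Hello"}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])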
|
| ▲ | idiotsecant 5 days ago | parent | next [-] |
How is this possible? I imagine someone is finding some value in the prompts themselves, but this can't possibly be paying for itself.
| ▲ | tick_tock_tick 5 days ago | parent [-] |
Inference is just that cheap, plus they hope that you'll start using the models they charge for as you become more accustomed to using AI in your workflow.
|
|
| ▲ | booi 5 days ago | parent | prev [-] |
You can also run DeepSeek for free on a modestly sized laptop.
| ▲ | dragonwriter 5 days ago | parent | next [-] |
At 4-bit quant, R1 takes 300+ gigs just for the weights. You can certainly run smaller models into which R1 has been distilled on a modest laptop, but I don't see how you could run R1 itself on anything that wouldn't be considered extreme for a laptop in at least one dimension.
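The 300+ figure is simple arithmetic: R1 is a 671B-parameter model (DeepSeek's published count), and 4 bits per weight comes out to roughly 335 GB before KV cache or activations. A quick back-of-the-envelope check:

    # Weight memory for DeepSeek-R1 at various quantization levels.
    # Ignores KV cache, activations, and quantization metadata,
    # so real usage is higher.
    params = 671e9
    for bits in (16, 8, 4):
        gb = params * bits / 8 / 1e9
        print(f"{bits}-bit weights: ~{gb:,.0f} GB")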
| ▲ | svachalek 5 days ago | parent | prev [-] |
You're probably thinking of what Ollama labels "deepseek", which is not in fact DeepSeek, but other models with some DeepSeek distilled into them.