emporas | 5 hours ago
Code is so low-entropy that smaller, more economical models will be up to the task, just as the gigantic models from the big providers are today. No worries there: the huge improvements we see today from GPT and Claude are, at their heart, just reinforcement learning (chain-of-thought and thinking tokens are one example among many). RL is the cheapest kind of training one can perform, as far as I understand; please correct me if that's not the case. In the economy, the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open-source invisible hand makes everything completely free.
Towaway69 | 5 hours ago | parent
> the open source invisible hand makes everything completely free.

In this case the limitation is compute. Very few people have the compute required to run AI/LLMs locally, or for free, at a quality comparable to Claude. So yes, there are plenty of open-source models that can be used locally, but you need to invest in hardware to make that happen, especially if you want the quality available from the commercial offerings. Not to speak of the training of those models. Everything is there to make it possible to do this locally, but where's the hardware? AWS? Google? There are hidden costs to the open-source model in this case.