Workaccount2 3 hours ago
Never; local models are for hobbyists and (extreme) privacy use cases. A less paranoid and much more economically efficient approach is to just lease a server and run the models on that.
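For concreteness, a minimal sketch of what "lease a server and run the models on that" might look like, assuming a rented GPU box serving an open-weights model through vLLM's OpenAI-compatible endpoint (the hostname and model name here are placeholders, not anything from the thread):

    # On the leased box (assumes vLLM is installed and a GPU is available):
    #   vllm serve meta-llama/Llama-3.1-8B-Instruct
    # vLLM then exposes an OpenAI-compatible API on port 8000 by default.

    from openai import OpenAI

    client = OpenAI(
        base_url="http://my-rented-box:8000/v1",  # placeholder hostname for the leased server
        api_key="unused",  # vLLM ignores the key unless started with --api-key
    )

    reply = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # must match the model being served
        messages=[{"role": "user", "content": "Summarize the tradeoffs of self-hosting LLMs."}],
    )
    print(reply.choices[0].message.content)

Because the endpoint speaks the OpenAI wire format, existing GPT/Claude-style client code can be pointed at the rented box by changing only the base URL.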
g947o 7 minutes ago
This. I've spent quite some time on r/LocalLLaMA and have yet to see a convincing "success story" of local models productively replacing GPT/Claude etc.