dingnuts 5 days ago
> Amazing for old code you've never seen? Not if you have too much of it! A few hundred thousand lines of code and you can't ask shit! Plus, you just handed over your company's entire IP to whoever hosts your model.
giancarlostoro 5 days ago
If Apple keeps improving things, you can run the model locally. I'm able to run models on my MacBook with an M4 that I can't even run on my 3080 GPU (mostly due to VRAM constraints), and they run reasonably fast. Would the 3080 be faster? Sure, but the MacBook is still fast enough that I'm not sitting there waiting longer than I would for a cloud model to "reason" and look things up. I think the biggest thing for offline LLMs will be consistent web search through an API like Google's or some other search engine's. Maybe Kagi could provide an API for people who self-host LLMs (not necessarily for free, but it would still be useful).
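For what that glue could look like, here's a minimal sketch. It assumes a local runner such as Ollama exposing an OpenAI-compatible chat endpoint at localhost:11434 (which Ollama does provide), while SEARCH_URL, the bearer token, and the search response shape are hypothetical placeholders, not a real Kagi or Google API contract:

  import requests

  # Hypothetical search endpoint: a stand-in for a Kagi/Google-style API.
  SEARCH_URL = "https://search.example.com/v1/search"
  SEARCH_TOKEN = "YOUR_TOKEN"

  # OpenAI-compatible endpoint exposed by a local runner such as Ollama.
  LLM_URL = "http://localhost:11434/v1/chat/completions"

  def web_search(query: str, limit: int = 5) -> str:
      """Fetch search results and flatten them into prompt context."""
      resp = requests.get(
          SEARCH_URL,
          params={"q": query, "limit": limit},
          headers={"Authorization": f"Bearer {SEARCH_TOKEN}"},
          timeout=10,
      )
      resp.raise_for_status()
      # Assumed response shape:
      # {"results": [{"title": ..., "snippet": ..., "url": ...}]}
      return "\n".join(
          f"- {r['title']}: {r['snippet']} ({r['url']})"
          for r in resp.json()["results"]
      )

  def ask_local_llm(question: str) -> str:
      """Ask the local model, grounding it with fresh search results."""
      context = web_search(question)
      resp = requests.post(
          LLM_URL,
          json={
              "model": "llama3",  # whatever model the local server has loaded
              "messages": [
                  {"role": "system",
                   "content": "Answer using the search results below.\n" + context},
                  {"role": "user", "content": question},
              ],
          },
          timeout=120,
      )
      resp.raise_for_status()
      return resp.json()["choices"][0]["message"]["content"]

  if __name__ == "__main__":
      print(ask_local_llm("What changed in the latest macOS release?"))

The point is that the model never needs to leave your machine; only the search query goes over the network, which sidesteps the IP-leak concern upthread.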
miohtama 5 days ago
It's a fair trade-off for smaller companies where the IP, or the software itself, is a necessary evil rather than the main unique value-add. It's hard to see what evil anyone could do with crappy legacy code. The IP risk may well be worth the productivity boost.