bryancoxwell | 2 hours ago
I’m not up to date on local models, but is that clear?
literalAardvark | 2 hours ago | parent | next
Gemma4:e4b is crazy good and quite usable on 10-year-old midrange hardware. I'm not sure about its security capabilities and haven't tested it all that thoroughly, since I usually use hosted models, but I do find myself using it, and it's been quite successful at parsing unstructured data, writing small focused scripts, and doing translations. Retaining control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal material into Codex. But since it runs locally on a toaster, serious testing is out of scope for me; it takes a fairly long time to do anything.
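The "parsing unstructured data locally" workflow above can be sketched with a few lines of Python. This is a hypothetical example, not the commenter's actual setup: it assumes an Ollama server on its default port, and the model tag, prompt, and field names are illustrative placeholders.

```python
import json
import urllib.request

# Assumption: a local Ollama server at its default address. Nothing in the
# request leaves the machine, which is the point of the local workflow.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, text: str) -> dict:
    """Build the JSON body for a non-streaming /api/generate call that
    asks the model to extract fields from unstructured text."""
    prompt = (
        'Extract the name and date from the text below as JSON '
        'with keys "name" and "date".\n\n' + text
    )
    return {"model": model, "prompt": prompt, "stream": False}

def parse_locally(model: str, text: str) -> str:
    """POST the payload to the local server and return the raw response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

On old midrange hardware the call itself is the slow part; the wrapper is trivial, which is why even a "toaster" can be useful when data control matters more than latency.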
le-mark | 2 hours ago | parent | prev | next
Local models are 6-12 months behind the "frontier" models. This means Anthropic, OpenAI, and Google don't have a moat; they're on a treadmill, running to stay ahead. Treadmills don't justify their valuations.