hackermanai 3 hours ago
> “But Local Models Aren’t As Smart”

This is what makes me continuously doubt and rewrite the local-first approach to inline chat in my editor. Next-edit and code completion make more sense locally because of the latency advantage, but chat is hard: it's fast and feels good to run locally, yet the output quality just isn't on par with ChatGPT et al.