▲ | crazygringo 4 days ago | |
I see local LLMs being used mainly for automation as opposed to factual knowledge -- for classification, summarization, search, and things like grammar checking. So they need to be smart about your desired language(s) and all the everyday concepts we use in them (so they can understand the content of documents and messages), but they don't need any of the detailed factual knowledge about human history, programming languages and libraries, health, and everything else. The idea is that you don't prompt the LLM directly; instead, your OS tools make use of it, and applications prompt it as frequently as they fetch URLs (a sketch of that pattern follows below).
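A minimal sketch of that "applications prompt it" pattern: an app calling a local model for a classification task rather than factual Q&A. This assumes a local server exposing an OpenAI-compatible chat endpoint (e.g. Ollama at http://localhost:11434/v1) and a hypothetical small model tag like "llama3.2" -- adjust both for your own setup.

```python
import requests

LOCAL_LLM_URL = "http://localhost:11434/v1/chat/completions"  # assumed local endpoint
MODEL = "llama3.2"  # hypothetical local model tag

def classify(text: str, labels: list[str]) -> str:
    """Ask the local model to pick exactly one label for a piece of text."""
    prompt = (
        f"Classify the following text as one of: {', '.join(labels)}.\n"
        f"Reply with the label only.\n\nText:\n{text}"
    )
    resp = requests.post(
        LOCAL_LLM_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # keep output as deterministic as possible for automation
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

# An application might call this on every incoming message, the way it fetches URLs.
print(classify(
    "Your package has shipped and arrives Tuesday.",
    ["receipt", "shipping notice", "newsletter", "personal"],
))
```

Note the model only needs to understand everyday language and concepts here; no detailed world knowledge is required.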
▲ | theshrike79 3 days ago | parent | |
And local models are static, predictable, and don't just go away when a new one comes out. That makes them perfect for automation tasks.