RandomGerm4n | 6 hours ago
I like the idea of a user-friendly app that lets you run LLMs locally. Tools like Ollama and LM Studio tend to put most people off because you have to decide for yourself which models to use, and there are so many settings to configure. If your hardware is compatible, Ensu could be a drop-in replacement for casual ChatGPT users. It's a bit confusing at the moment, though: for example, a larger model was downloaded to my smartphone than to my computer. It would probably make the most sense if the app simply sorted devices into five performance tiers, downloaded the appropriate model for whichever tier a device falls into, and told the user which tier that is. Over time, the model for each tier could be periodically replaced with a better one, or the tiers themselves redefined as hardware advances.
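The tiering idea could be sketched roughly like this. Everything here is made up for illustration: the RAM thresholds, the tier count, and the model names are placeholders, not anything Ensu actually ships.

```python
# Hypothetical sketch: bucket devices into five performance tiers by
# available RAM and pick one model per tier. All thresholds and model
# names below are illustrative placeholders.

def device_tier(ram_gb: float) -> int:
    """Return a performance tier from 1 (lowest) to 5 (highest)."""
    thresholds = [4, 8, 16, 32]  # GB of RAM separating the tiers
    return 1 + sum(ram_gb >= t for t in thresholds)

# Illustrative tier -> model table; the app could update this mapping
# over time as better models ship for each hardware class.
TIER_MODELS = {
    1: "tiny-1b-q4",
    2: "small-3b-q4",
    3: "medium-7b-q4",
    4: "large-13b-q4",
    5: "xl-30b-q4",
}

def pick_model(ram_gb: float) -> str:
    """Select the model for this device's tier."""
    return TIER_MODELS[device_tier(ram_gb)]

print(device_tier(6.0))   # a 6 GB phone lands in tier 2
print(pick_model(64.0))   # a 64 GB desktop gets the tier-5 model
```

The nice property of a table like this is that swapping in a better model for one tier, or moving a tier boundary, is a single-line change that every device picks up on its next update.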
drak0n1c | 6 hours ago
Osaurus is what I use for local inference. It's a native Swift macOS app with sandboxing, agent tooling, and server capabilities. Mac only, though.
jimmyjazz14 | 5 hours ago
LM Studio is pretty darn easy, and if I recall correctly, it recommends a model to install when you first start it.