tiderpenger 2 hours ago

To justify investing a trillion dollars, like everything else LLM-related. The local models are pretty good. I ran a test of R1 (the smallest version) vs Perplexity Pro and, shockingly, got better answers running it on a base-spec Mac Mini M4. It's simply not true that there is a huge difference. Mostly it's hardcoded over-optimization. In general these models aren't really getting better.

mk89 2 hours ago | parent [-]

I agree with this comment here.

For me the main BIG deal is that cloud models have online search embedded etc, while this one doesn't.

However, if you don't need that (e.g., translating, summarizing text, writing code), it's probably good enough.

dragonwriter 21 minutes ago | parent | next [-]

> For me the main BIG deal is that cloud models have online search embedded etc, while this one doesn't.

Models do not have online search embedded; they have tool-use capabilities (possibly with specialized training for a web search tool). That's true of many open and weights-available models, and they are run with harnesses that support tools and provide a web search tool. (LM Studio is such a harness, and can easily be supplied with a web search tool.)
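To make the distinction concrete: the model only emits a structured tool call; it's the harness that actually runs the search and feeds the result back. A minimal sketch of that dispatch step, assuming the OpenAI-style tool-call shape most harnesses use (`web_search` here is a stand-in stub, not a real search API):

```python
import json

def web_search(query: str) -> str:
    """Stub tool; a real harness would call a search API here."""
    return f"[results for: {query}]"

# The harness's tool registry: names the model may call -> local functions.
TOOLS = {"web_search": web_search}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Example of the shape a model's tool-call reply typically takes.
call = {"function": {"name": "web_search",
                     "arguments": json.dumps({"query": "local LLM news"})}}
print(dispatch(call))  # -> [results for: local LLM news]
```

The harness would append this result to the conversation as a tool message and ask the model to continue, which is all "online search" amounts to from the model's side.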

prophesi 2 hours ago | parent | prev | next [-]

So long as the local model supports tool-use, I haven't had issues with them using web search etc in open-webui. Frontier models will just be smarter in knowing when to use tools.

mk89 2 hours ago | parent [-]

OK, I need to explore this; I haven't done it yet. Thanks.

nunodonato 33 minutes ago | parent | prev [-]

You can do web searches in LM Studio: just connect an MCP server that does it. SerpApi has an MCP server, for example.
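For anyone trying this, MCP servers are declared in an `mcp.json` file using the standard `mcpServers` format. A hedged sketch only; the server name, command, and env-var key below are placeholders, so check SerpApi's MCP docs for the real package name and configuration:

```json
{
  "mcpServers": {
    "serpapi": {
      "command": "npx",
      "args": ["-y", "serpapi-mcp-server"],
      "env": { "SERPAPI_API_KEY": "<your key>" }
    }
  }
}
```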