everlier 9 hours ago

There has never been a better time to run LLMs locally. It's just a few commands from zero to a fully working LLM homelab.

```
harbor pull unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_XL

# Open WebUI -> llama.cpp + SearXNG for Web RAG + OpenTerminal as sandbox
harbor up searxng webui llamacpp openterminal
```

That's it, and it's already better than Claude's or ChatGPT's app.