kingkongjaffa a day ago

Can it really run local LLMs at any decent size? I find it hard to believe that, for the price, it can run anything bigger than 7B or 8B models, and even those only slowly.

evil-olive a day ago

according to [0], it looks like the "Umbrel Home" device they sell (with 16GB of RAM and an N150 CPU) can run a 7B model at 2.7 tokens/sec, or a 13B model at 1.5 tokens/sec.
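
if you want to sanity-check numbers like that on your own hardware, here's a rough throughput sketch using llama-cpp-python (the model path is a placeholder for whatever GGUF file you have locally):

    # rough tokens/sec benchmark (pip install llama-cpp-python)
    import time
    from llama_cpp import Llama

    # placeholder path: point this at any local GGUF model file
    llm = Llama(model_path="./llama-2-7b.Q4_K_M.gguf", n_ctx=2048, verbose=False)

    prompt = "Explain what a token is in one paragraph."
    start = time.time()
    out = llm(prompt, max_tokens=256)
    elapsed = time.time() - start

    n_tokens = out["usage"]["completion_tokens"]
    print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.2f} t/s")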

especially since they seem to be aiming at a not-terribly-technical market segment, there's a pretty big mismatch between that performance and their website's claims:

> The most transformative technology of our generation shouldn't be confined to corporate data centers. Umbrel Home democratizes access to AI, allowing you to run powerful models on a device you own and control.

0: https://github.com/getumbrel/llama-gpt?tab=readme-ov-file#be...

jazzyjackson a day ago

Wow, it's wild that they advertise "Run Deepseek-R1 locally" when the screenshot in the app store shows "DeepSeek-R1-0528-Qwen3-8B", which is the 8B Qwen3 distill, not the actual 671B model.
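
the gap between those two names is easy to put in numbers. a back-of-envelope estimate at ~4.5 bits/param (typical Q4 quantization, counting weights only and ignoring KV cache and runtime overhead) shows why only the distill has any chance of fitting in 16GB:

    # back-of-envelope: weight memory at ~4.5 bits/param (typical Q4 GGUF)
    def q4_gib(params_billion):
        return params_billion * 1e9 * 4.5 / 8 / 2**30

    for name, b in [("DeepSeek-R1 (full, 671B)", 671),
                    ("DeepSeek-R1-0528-Qwen3-8B distill", 8)]:
        print(f"{name}: ~{q4_gib(b):.0f} GiB just for weights")

    # full model: ~350 GiB of weights alone; the 8B distill: ~4 GiB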

sosodev a day ago

It’s all subjective. Personally, I think it would border on useless for local inference, but maybe some people are happy with low-quality models at slow speeds.
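
to put "slow speeds" in concrete terms: at the throughput quoted upthread, a typical chat-length answer takes minutes, not seconds:

    # how long a 500-token answer takes at the quoted throughputs
    answer_tokens = 500
    for label, tps in [("7B @ 2.7 t/s", 2.7), ("13B @ 1.5 t/s", 1.5)]:
        print(f"{label}: {answer_tokens / tps / 60:.1f} min for {answer_tokens} tokens")

    # ~3.1 minutes and ~5.6 minutes respectively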