pavel_lishin a day ago
> …all things that on-device LLMs can already do, for example my MacBook can run Llama 4 (albeit slowly) and it can generate recipes for me.

I've run a local LLM, and while I probably didn't do a great job optimizing things, it was crawling. I would absolutely not stand there for 20 minutes while my fridge stutters out a recipe for kotleti, while probably getting some of it wrong and requiring a re-prompt. Not everything needs to be a genie.
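(For context on the optimization point above: a common way to get tolerable local speed on a MacBook is a quantized GGUF model with layers offloaded to the GPU. A minimal sketch, assuming llama-cpp-python is installed and a quantized model file has already been downloaded; the model filename below is hypothetical:)

    # Minimal sketch: local inference with llama-cpp-python.
    # Assumes a quantized GGUF model on disk; path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical file
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal on Apple Silicon)
        n_ctx=2048,       # modest context window to keep memory use down
    )

    out = llm("Write a short recipe for kotleti.", max_tokens=256)
    print(out["choices"][0]["text"])

(Quantization plus full GPU offload is usually the difference between "crawling" and usable token rates on laptop hardware.)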
holtkam2 a day ago
I guess I was thinking about a smart fridge of the type you’d find in the year, say, 2031.
maplethorpe a day ago
How many GPUs were you running?