zer00eyz 4 days ago
> Pretty much every thing I do starts with an interaction with a neural network.

Pretty much nothing I do starts this way. Look, LLMs are interesting. I sure spend a lot less time writing basic one-off scripts because of them. The "extra step" of tossing emails to an LLM is just proofreading with less tedium. LLMs gave everyone an intern that does middling work quickly, never complains, and doesn't get coffee. We need them to be cheap (to run) and to run locally on owned hardware (for security and copyright reasons).
4 days ago | parent | next
[deleted]
ericd 4 days ago | parent | prev
If you go spend $5k on a MacBook Pro M4 Max with 128 GB of RAM and toss on Ollama with Qwen2.5-72B, you have your local LLM, free to run as much as you like. At first glance that might seem expensive, but then consider how insane it is that you can ask your laptop arbitrary questions and have it respond with really cogent answers, on almost any topic you can think of, without relying on a massive rack of GPU machines behind an API. It uses barely more power than an old incandescent bulb while doing it!
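For the curious, the setup described above is roughly two commands once Ollama is installed; a minimal sketch, assuming the standard `qwen2.5:72b` model tag from the Ollama library (the prompt here is just an illustration):

```shell
# Pull the model weights once (a large multi-GB download)
ollama pull qwen2.5:72b

# Ask an arbitrary question entirely on-device; no API key, no remote GPUs
ollama run qwen2.5:72b "Explain why the sky is blue in two sentences."
```

The 72B model at typical quantizations wants well over the 16 GB a base laptop ships with, which is why the comment specifies the 128 GB configuration; smaller tags of the same family run comfortably on less RAM at some cost in answer quality.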