varenc | 9 days ago
> You already need very high end hardware to run useful local LLMs

A basic MacBook can run gpt-oss-20b, and it's quite useful for many tasks. And fast. Of course, Macs have a huge advantage for local LLM inference thanks to their unified memory architecture.