rTX5CMRXIfFG 3 hours ago
Affordability of hardware that can run local LLMs is a real factor, too. It's unclear when RAM prices will come down, but given everything that's happening (and could happen) in the world right now, it doesn't look like they'll drop in the near or medium term.
wahnfrieden 3 hours ago | parent
No one is going to run models comparable to frontier models locally without spending enormous sums, whether for use at scale or across a large org. Even with cheap RAM, frontier-level capability still demands a very large budget. Open models that are competitive with the frontier will instead be run on shared hosts.
| ||||||||||||||||||||||||||||||||