hparadiz 2 hours ago
There are way too many good local uses of these models; I fully expect a standard workstation 10 years from now to start at 128 GB of RAM and include at least one dedicated inference device.
bdangubic an hour ago | parent
Or, if you believe a lot of the HN crowd, we're in an AI bubble, and in 10 years, when all of this crashes, inference will be dirt cheap with all that hardware sitting in data centers, so it won't make any sense to run monster workstations at home. (I work on a 128 GB M4 but don't run inference on it, there are just too many Electron apps running at the same time...) :)