crystaln · 3 hours ago
It seems much more likely that the cost will go down 99%. With open-source models and architectural innovations, something like Claude will run on a local machine for free.
walterbell · an hour ago
How much RAM and SSD will future local inference need to be competitive with present cloud inference?
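As a rough illustration of the arithmetic behind that question, here is a back-of-envelope sketch: resident memory is roughly parameter count × bytes per weight (set by quantization) plus some runtime overhead, and the on-disk (SSD) footprint is roughly the raw weight bytes. The function name, parameter counts, quantization levels, and overhead factor below are all illustrative assumptions, not figures from the thread.

```python
# Back-of-envelope estimate of RAM needed to hold a quantized model's weights.
# All numbers are illustrative assumptions, not claims about any specific model.

def weight_memory_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate resident memory for model weights, in decimal GB.

    params_billions: parameter count in billions (assumed).
    bits_per_weight: quantization level, e.g. 16 (fp16), 8, or 4 bits (assumed).
    overhead: rough multiplier for KV cache, activations, and runtime buffers (guess).
    """
    weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9


if __name__ == "__main__":
    # Hypothetical local models at different sizes and quantization levels.
    for params, bits in [(8, 4), (70, 4), (70, 8), (400, 4)]:
        print(f"{params}B params @ {bits}-bit: ~{weight_memory_gb(params, bits):.0f} GB RAM")
```

Under these assumptions, a 4-bit 70B-parameter model lands around 40 GB of RAM, while frontier-scale models would still need several hundred GB, which is one way to frame how far local hardware has to go to match cloud inference.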