politelemon 8 hours ago
Feasibility on commodity hardware would be the true watermark. Right now, high-end hardware is the only way to get decent results, but if we can run inference on the CPUs, NPUs, and GPUs in everyday machines, the moat should disappear.
zozbot234 7 hours ago
You can already run inference on ordinary hardware, but if you want workable throughput you're limited to small models, and those have very poor world knowledge.