Workaccount2 4 hours ago
IIRC people have gotten LLMs to run on '80s hardware. Inference isn't overly compute-heavy. The real killer is training, which is insanely compute-intensive; only recently has hardware been practical at the scale needed.
adgjlsfhk1 25 minutes ago | parent
You could probably train a GPT-2-sized model with a SOTA architecture on a 2008 supercomputer. It would take a while, though.
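A rough sanity check on that claim, using the common 6ND approximation for training FLOPs. All the specific numbers below (parameter count, token count, machine speed, utilization) are my own assumptions, not from the thread:

```python
# Back-of-envelope: training FLOPs ~ 6 * params * tokens (the 6ND rule of thumb).
params = 1.5e9        # GPT-2 XL parameter count
tokens = 10e9         # assumed training token budget
train_flops = 6 * params * tokens        # ~9e19 FLOPs

# IBM Roadrunner (2008) sustained about 1 petaflop on Linpack;
# assume real training utilization is much lower, say 10%.
peak_flops_per_s = 1.0e15
utilization = 0.10
seconds = train_flops / (peak_flops_per_s * utilization)
days = seconds / 86400
print(f"~{days:.0f} days")
```

Under these assumptions it works out to on the order of ten days of machine time, so "it would take a while" but is plausible.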