drusepth | 4 days ago
Are 235B models classified as local LLMs? I guess they probably are, but others in this thread are probably looking more toward 20B-30B models and sizes that generally fit on the RAM you'd expect in average or slightly-higher-end hardware. My beefy 3D gamedev workstation with a 4090 and 128GB RAM can't even run a 235B model unless it's extremely quantized (and even then, only at like single-digit tokens/minute). |