segmondy 3 hours ago
Some of you folks on here love to argue. gpt-oss-120b was trained in 4 bits, so the weights take up roughly 60 GB.
Aurornis 2 hours ago | parent
Good point, but you still need the KV cache and more. Fitting the model weights alone into RAM doesn't get the job done.
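The KV cache point can be made concrete with a back-of-the-envelope estimate. A minimal sketch, where the layer count, KV head count, and head dimension are illustrative assumptions for a GQA transformer, not official gpt-oss-120b specs:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2, batch: int = 1) -> int:
    """Rough KV cache size: K and V tensors (hence the factor of 2),
    one per layer, stored for every token in the context."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem * batch

# Hypothetical config: 36 layers, 8 KV heads, head_dim 64,
# fp16 cache (2 bytes/elem), 128k-token context.
gb = kv_cache_bytes(36, 8, 64, 128_000) / 1024**3
print(f"{gb:.1f} GiB")  # prints "8.8 GiB"
```

So on top of the ~60 GB of weights, a long context can add several more GiB per concurrent request, before counting activations and runtime overhead.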