lambda 2 hours ago
Yeah, this is an AMD laptop integrated GPU, not a discrete NVIDIA GPU on a desktop. Also, I haven't really done much to try tweaking performance; this is just the first setup I've gotten that works.
nyrikki 2 hours ago
The memory bandwidth of the laptop CPU is better for fine-tuning, but MoE models really work well for inference. I won't use a public model on my secret sauce; there's no reason to help the foundation models with it. Even an old 1080 Ti works well for FIM in IDEs (rough sketch of what that looks like below). IMHO the above setup works well for boilerplate, while even the SOTA models fail on the domain-specific portions.

I lucked out and bought before the huge price increases, but you can still find some good deals. Old gaming computers work pretty well, especially if you have Claude Code churn locally on the boring parts while you work on the hard parts.
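For anyone curious what the FIM piece looks like in practice, here's a minimal sketch, assuming a local llama.cpp llama-server on localhost:8080 with a FIM-capable code model loaded; the /infill endpoint and its input_prefix/input_suffix fields are how recent builds expose fill-in-the-middle, but names and defaults may vary by version:

    # Minimal FIM (fill-in-the-middle) sketch against a local llama.cpp
    # llama-server. Assumes localhost:8080 and a FIM-capable code model;
    # endpoint and field names may differ across server versions.
    import json
    import urllib.request

    def fim_complete(prefix: str, suffix: str, n_predict: int = 64) -> str:
        payload = {
            "input_prefix": prefix,   # code before the cursor
            "input_suffix": suffix,   # code after the cursor
            "n_predict": n_predict,   # cap on generated tokens
            "temperature": 0.2,       # keep completions conservative
        }
        req = urllib.request.Request(
            "http://localhost:8080/infill",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["content"]

    # Ask the model to fill in the body between a signature and a return.
    print(fim_complete("def mean(xs):\n    ", "\n    return total / len(xs)\n"))

An IDE plugin does essentially the same thing on every keystroke pause: send the text before and after the cursor, insert whatever comes back. That request pattern is light enough that an old card like a 1080 Ti keeps latency tolerable.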