| ▲ | sarthaksoni 9 days ago |
| Reading this made me realize how easy it is to set up GPT-OSS 20B in comparison. I had it running on my Mac in five minutes, thanks to Llama. |
|
| ▲ | DrPhish 9 days ago | parent | next [-] |
| Its also easy to do 120b on CPU if you have the resources. I had 120b running on my home LLM CPU inference box in just as long as it took to download the GGUFs, git pull and rebuild llama-server.
I had it running at 40t/s with zero effort and 50t/s with a brief tweaking.
Its just too bad that even the 120b isn't really worth running compared to the other models that are out there. It really is amazing what ggerganov and the llama.cpp team have done to democratize LLMs for individuals that can't afford a massive GPU farm worth more than the average annual salary. |
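For readers trying to reproduce this: once llama-server is built and loaded with a GGUF, it exposes an OpenAI-compatible HTTP API, so checking tokens/sec figures like the ones quoted above takes only a few lines. A minimal sketch, assuming a server started with something like `llama-server -m <model>.gguf --port 8080` (model path and port are placeholders, not from the comment):

```python
# Rough throughput check against a local llama-server (OpenAI-compatible API).
# Assumes the server is listening on localhost:8080; adjust to your setup.
import json
import time
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Write two sentences about CPUs."}],
    "max_tokens": 128,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

start = time.time()
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
elapsed = time.time() - start

completion_tokens = body["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s "
      f"~ {completion_tokens / elapsed:.1f} t/s (includes prompt processing)")
```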
| |
| ▲ | wkat4242 9 days ago | parent | next [-] | | What hardware do you have? 50 t/s is really impressive for CPU. | | |
| ▲ | DrPhish 9 days ago | parent | next [-] | | 2xEPYC Genoa w/768GB of DDR5-4800 and an A5000 24GB card.
I built it in January 2024 for about $6k and have thoroughly enjoyed running every new model as it gets released. Some of the best money I’ve ever spent. | | |
| ▲ | testaburger 9 days ago | parent | next [-] | | Which specific EPYC models? And if it's not too much to ask, which motherboard and power supply? I'm really interested in building something similar. | | | |
| ▲ | fouc 8 days ago | parent | prev | next [-] | | I've seen some mentions of pure-CPU setups being successful for large models using old EPYC/Xeon workstations off eBay with 40+ cores. Interesting approach! | |
| ▲ | wkat4242 9 days ago | parent | prev | next [-] | | Wow nice!! That's a really good deal for that much hardware. How many tokens/s do you get for DeepSeek-R1? | | |
| ▲ | DrPhish 8 days ago | parent [-] | | Thanks, it was a bit of a gamble at the time (lots of dodgy ebay parts), but it paid off. R1 starts at about 10t/s on an empty context but quickly falls off. I'd say the majority of my tokens are generating around 6t/s. Some of the other big MoE models can be quite a bit faster. I'm mostly using QwenCoder 480b at Q8 these days for 9t/s average. I've found I get better real-world results out of it than K2, R1 or GLM4.5. |
| |
| ▲ | ekianjo 9 days ago | parent | prev [-] | | that's an r/localllama user right there |
| |
| ▲ | SirMaster 8 days ago | parent | prev [-] | | I'm getting 20 tokens/sec on the 120B model with a 5060Ti 16GB and a regular desktop Ryzen 7800x3d with 64GB of DDR5-6000. | | |
| ▲ | wkat4242 8 days ago | parent [-] | | Wow, that's not bad. It's strange: for me it is much, much slower on a Radeon Pro VII (also 16GB, with a memory bandwidth of 1TB/s!) and a Ryzen 5 5600, also with 64GB. It's basically unworkably slow. Also, I only see 100% CPU when I check ollama ps; the GPU is not being used at all :( It's also counterproductive because the model is just too large for 64GB. I wonder what makes it work so well on yours! My CPU isn't much slower and my GPU is probably faster. | | |
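As an aside (not from the thread): the quickest way to confirm the "100% CPU, GPU idle" symptom is to ask the Ollama daemon how much of the loaded model actually sits in VRAM. A small sketch, assuming the default endpoint on localhost:11434 and Ollama's documented /api/ps route; exact field names may vary between versions:

```python
# Check how much of each loaded model Ollama has offloaded to the GPU.
# Mirrors what `ollama ps` prints, but programmatically.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    data = json.load(resp)

for m in data.get("models", []):
    size = m.get("size", 0)
    vram = m.get("size_vram", 0)
    pct = 100 * vram / size if size else 0
    print(f"{m.get('name')}: {pct:.0f}% of the model in VRAM")
```

If that reports 0% in VRAM, the model never made it onto the GPU at all, which matches the explanation in the reply below.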
| ▲ | magicalhippo 8 days ago | parent [-] | | AMD basically decided they wanted to focus on HPC and data center customers rather than consumers, and so GPGPU driver support for consumer cards has been non-existent or terrible [1]. [1]: https://github.com/ROCm/ROCm/discussions/3893 | | |
| ▲ | wkat4242 6 days ago | parent [-] | | The Radeon Pro VII is not a consumer card, though, and it works well with ROCm. It even has datacenter-"grade" HBM2 memory that most Nvidia cards don't have. Ongoing support has been dropped, but ROCm of course still works fine. It's nearly as fast in Ollama as my 4090 (which I don't use for AI regularly, I just play with it sometimes). |
|
|
|
| |
| ▲ | exe34 9 days ago | parent | prev [-] | | I imagine the gguf is quantised stuff? | | |
|
|
| ▲ | amelius 9 days ago | parent | prev | next [-] |
Why is it hard to set up LLMs? You can just ask an LLM to do it for you, no? If this relatively simple task is already too much for LLMs, then what good are they? |
| |
| ▲ | diggan 9 days ago | parent [-] | | In the case of the GPT-OSS models, the worst (most time-consuming) part of supporting them is the new format they've been trained with, "OpenAI Harmony". In my own clients I couldn't just swap in the model and call it a day; I'm still working on getting them to work correctly with tool calling... |
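For context on why a client can't simply swap models (not from the comment itself): gpt-oss responses come back in the Harmony format, where each assistant message is tagged with a channel such as analysis, commentary, or final, instead of being plain text. The sketch below is a simplified illustration of the extra parsing a client suddenly needs; the delimiter tokens are assumed from OpenAI's published Harmony description, and a real client would more likely use the openai-harmony library than a regex:

```python
# Simplified illustration: extract the user-visible "final" channel from a
# raw Harmony-style completion. The token layout shown here is assumed from
# OpenAI's Harmony docs and omits details such as tool-call messages.
import re

raw = (
    "<|start|>assistant<|channel|>analysis<|message|>"
    "The user wants a greeting.<|end|>"
    "<|start|>assistant<|channel|>final<|message|>"
    "Hello! How can I help?<|end|>"
)

pattern = re.compile(
    r"<\|channel\|>(?P<channel>\w+)<\|message\|>(?P<content>.*?)<\|end\|>",
    re.DOTALL,
)

messages = {m.group("channel"): m.group("content") for m in pattern.finditer(raw)}
print(messages.get("final", ""))  # -> Hello! How can I help?
```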
|
|
| ▲ | CraigRood 9 days ago | parent | prev | next [-] |
| I was playing with it yesterday and every single session gave me factually incorrect information. Speed and ease of use is one thing, but it shouldn't be at the cost of accuracy. |
| |
| ▲ | OliverGuy 9 days ago | parent [-] | | If you are trying to get facts out of an LLM itself, you are using it wrong. If you want a fact, it should use a tool (e.g. web search, RAG, etc.) to fetch the information that contains the fact (a Wikipedia page, documentation, etc.) and then parse that document for the fact and return it to you. | | |
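A minimal sketch of the pattern described above: retrieve a document first, then ask the model to answer only from it. The retrieval function, the model name, and the localhost endpoint are placeholders for whatever backend you actually run (any OpenAI-compatible local server such as llama-server would do):

```python
# Ground the model in a retrieved document instead of trusting its weights.
import json
import urllib.request

def fetch_wikipedia_summary(title: str) -> str:
    # Wikipedia's public REST summary endpoint; needs network access.
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    req = urllib.request.Request(url, headers={"User-Agent": "hn-example/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["extract"]

def ask_with_context(question: str, context: str) -> str:
    payload = {
        "model": "gpt-oss-20b",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "If the answer is not there, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # assumed local endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

context = fetch_wikipedia_summary("Llama.cpp")
print(ask_with_context("Who started the llama.cpp project?", context))
```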
| ▲ | CraigRood 5 days ago | parent [-] | | These tools are literally being marketed as AI, yet they present false information as fact. 'Using it wrong' can't be an argument here. I would rather the tool were honest about its confidence levels and offered mechanisms to research further, and then fed that back into the 'AI' for the next step. |
|
|
|
| ▲ | LoganDark 8 days ago | parent | prev [-] |
| 120B is pretty easy to run too, if you have enough memory. |