| ▲ | Aurornis 10 days ago |
| I think the local LLM scene is very fun and I enjoy following what people do. However, every time I run local models on my MacBook Pro with a ton of RAM, I’m reminded of the gap between locally hosted models and the frontier models that I can get for $20/month or a nominal price per token from different providers. The difference in speed and quality is massive. The current local models are very impressive, but they’re still a big step behind the SaaS frontier models. I feel like the benchmark charts don’t capture this gap well, presumably because the models are trained to perform well on those benchmarks. I already find the frontier models from OpenAI and Anthropic to be slow and frequently error-prone, so dropping speed and quality even further isn’t attractive. I agree that it’s fun as a hobby, or for people who can’t or won’t take any privacy risks. For me, I’d rather wait and see what an M5 or M6 MacBook Pro with 128GB of RAM can do before putting together another dedicated purchase for LLMs. |
|
| ▲ | jauntywundrkind 10 days ago | parent | next [-] |
| I agree and disagree. Many of the best models are open source, just too big to run for most people. And there are plenty of ways to fit these models! A Mac Studio M3 Ultra with 512 GB unified memory though has huge capacity, and a decent chunk of bandwidth (800GB/s. Compare vs a 5090's ~1800GB/s). $10k is a lot of money, but that ability to fit these very large models & get quality results is very impressive. Performance is even less, but a single AMD Turin chip with it's 12-channels DDR5-6000 can get you to almost 600GB/s: a 12x 64GB (768GB) build is gonna be $4000+ in ram costs, plus $4800 for for example a 48 core Turin to go with it. (But if you go to older generations, affordability goes way up! Special part, but the 48-core 7R13 is <$1000). Still, those costs come to $5000 at the low end. And come with much less token/s. The "grid compute" "utility compute" "cloud compute" model of getting work done on a hot gpu with a model already on it by someone else is very very direct & clear. And are very big investments. It's just not likely any of us will have anything but burst demands for GPUs, so structurally it makes sense. But it really feels like there's only small things getting in the way of running big models at home! Strix Halo is kind of close. 96GB usable memory isn't quite enough to really do the thing though (and only 256GB/s). Even if/when they put the new 64GB DDR5 onto the platform (for 256GB, lets say 224 usable), one still has to sacrifice quality some to fit 400B+ models. Next gen Medusa Halo is not coming for a while, but goes from 4->6 channels, so 384GB total: not bad. (It sucks that PCIe is so slow. PCIe 5.0 is only 64GB/s one-direction. Compared to the need here, it's no-where near enough to have a big memory host and smaller memory gpu) |
| |
| ▲ | Aurornis 9 days ago | parent | next [-] | | > Many of the best models are open source, just too big for most people to run. You can find all of the open models hosted across different providers, and you can pay per token to try them out. I just don't see the open models as being at the same quality level as the best from Anthropic and OpenAI. They're good, but in my experience they're not as good as the benchmarks would suggest. > $10k is a lot of money, but the ability to fit these very large models and get quality results is very impressive. This is why I only appreciate the local LLM scene from a distance. It’s really cool that this can be done, but $10K to run lower-quality models at slower speeds is a hard sell. I can rent a lot of hours on an on-demand cloud server for a lot less than that price, or I can pay $20-$200/month and get great performance and good quality from Anthropic. I think the local LLM scene is fun where it intersects with hardware I would buy anyway (a MacBook Pro with a lot of RAM), but spending $10K to run open models locally is a very expensive hobby. | |
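A rough break-even sketch for the cost comparison; the subscription and per-token prices below are assumptions, and it ignores electricity, depreciation, and the speed/quality gap:

```python
# Hypothetical break-even between a ~$10k local rig and hosted options.
hardware_cost = 10_000        # e.g. a 512GB Mac Studio class machine
plan_per_month = 200          # assumed top-tier hosted subscription
usd_per_million_tokens = 3.0  # assumed blended provider price for a large open model

print(hardware_cost / plan_per_month)          # ~50 months of the $200/mo plan
print(hardware_cost / usd_per_million_tokens)  # ~3,333 million tokens pay-as-you-go
```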
| ▲ | jstummbillig 10 days ago | parent | prev | next [-] | | > Many of the best models are open source, just too big for most people to run I don't think that's a likely future when you consider all the big players doing enormous infrastructure projects and the money this increasingly demands. Powerful LLMs are simply not a great open-source candidate. The models are not a by-product of the bigger thing you do; they are the bigger thing. Open-sourcing an LLM means you are essentially investing money just to give it away. That simply does not make a lot of sense from a business perspective. You can do it in a limited fashion for a limited time, for example when you are scaling, or when it's not really your core business and you just write it off as an expense while you try to figure out yet another thing (looking at you, Meta). But with the current paradigm, one thing seems very clear: building and running ever-bigger LLMs is a money-burning machine the likes of which we have rarely if ever seen, and operating that machine at a loss will make you run out of any amount of money really, really fast. | |
| ▲ | esseph 10 days ago | parent | prev | next [-] | | https://pcisig.com/pci-sig-announces-pcie-80-specification-t... From 2003-2016, 13 years, we had PCIe 1, 2, and 3. 2017 - PCIe 4.0; 2019 - PCIe 5.0; 2022 - PCIe 6.0; 2025 - PCIe 7.0; 2028 - PCIe 8.0. Manufacturing and vendors are having a hard time keeping up. And the PCIe 5.0 memory is... not always the most stable. | | |
| ▲ | dcrazy 9 days ago | parent | next [-] | | Are you conflating GDDR5x with PCIe 5.0? | | |
| ▲ | esseph 9 days ago | parent [-] | | No. I'm saying we're due for faster memory, but we seem to be having trouble scaling bus speeds (in production) and keeping memory reliable as well. And the network is changing a lot, too. It's a neverending cycle, I guess. | | |
| ▲ | dcrazy 9 days ago | parent [-] | | One advantage of Apple Silicon is the unified memory architecture. You put memory on the fabric instead of on PCIe. |
|
| |
| ▲ | jauntywundrkind 9 days ago | parent | prev [-] | | Thanks for the numbers. Valuable contribution for sure!! There's been a huge lag in PCIe adoption, and imo so much has boiled down to "do people need it?" In the past 10 years I feel like my eyes have been opened to how every high-tech company's greatest, most compelling desire is to slow-walk releases: to move as slowly as the market will bear, to do as little as possible, to roll on and on with minor incremental changes. There are cannonball moments where the market is disrupted. Thank the fucking stars Intel got sick of all this shit and worked hard (with many others) to standardize NVMe, to make a post-SATA world with higher speeds and a better protocol. The AMD64 architecture changed the game. Ryzen again. But so much of the industry is about retaining your cost advantage, about retaining strong market segmentation: never shipping platforms with too many PCIe lanes, limiting consumer vs workstation vs server video card RAM and vGPU (and MxGPU) and display-out capabilities, often entirely artificially. But there is a fucking fire right now and everyone knows it. NVLink is massively more bandwidth and massively more efficient, and it's essential to system performance. The need to get better fast is so on. Seems like for now SSDs will keep slow-walking their 2x jumps. But PCIe is facing a real crisis of being replaced, and everyone wants better. And hates, hates, hates the insane cost. PCIe 8.0 is going to be an insane amount of data to push over a differential pair, insane speed. But we have to. Alas, PCIe is also hampered by its relatively generous broader system design. Trace distances are going to shrink and signal requirements increase a lot. But needing an intercompatible compliance program for any peripheral to work is a significant disadvantage versus "just make this point-to-point link work between these two cards." There are so many energies happening right now in interconnect. I hope we see some actual uptake some day. We've waited so long for Gen-Z (Ethernet PHY, gone now), CXL (3.x adds switching, still un-arrived), now Ultra Ethernet and UALink. Man, I hope we can see some step improvements. Everyone knows we are in deep shit if NV alone can connect systems. Ironically AMD's HyperTransport was open and was a path towards this, but now Infinity Fabric is an internal-only thing, and as a brand and an idea it's kind of vanishing from the world; it feels insufficient. | |
| ▲ | esseph 9 days ago | parent [-] | | All of these extremely high-end technologies are so far away from hitting the consumer market. Is there any desire from most people? What's the TAM? | |
| ▲ | jauntywundrkind 9 days ago | parent | next [-] | | Classic economics thinking: totally fucked "faster horses" thinking. The addressable market depends on the advantage, which right now we don't know. It's all a guess that someone is going to find it valuable, and no one knows. But if we find that we didn't actually need $700 NICs to get shitty bandwidth, if we could have just been running cables from PCIe-shaped slot to PCIe slot (or OCuLink port!) and getting >>10x performance with >>10x less latency? Yeah bro, uhh, I think there might be a desire for using the same fucking chip we already use but getting 10x + 10x more out of it. Faster, lower-latency, cheaper storage? RAM expandability? Lower-latency GPU access? There's so much that could make a huge difference for computing, broadly. | |
| ▲ | justincormack 9 days ago | parent | next [-] | | Thunderbolt tunnels PCIe, and you can in effect use it as a NIC with one cable between devices. It's slower than OCuLink but more convenient. | |
| ▲ | esseph 8 days ago | parent | prev [-] | | I am very ready for optical bus lfg |
| |
| ▲ | nemomarx 9 days ago | parent | prev [-] | | Probably a small consumer market of enthusiasts (notice Nvidia barely caters to gaming hardware lately), but if you can get better memory throughput on servers, isn't that a large industry market? |
|
|
| |
| ▲ | Rohansi 9 days ago | parent | prev | next [-] | | You'll want to look at benchmarks rather than the theoretical maximum bandwidth available to the system. Apple has been using bandwidth as a marketing point but you're not always able to use that bandwidth amount depending on your workload. For example, the M1 Max has 400GB/s advertised bandwidth but the CPU and GPU combined cannot utilize all of it [1]. This means Strix Halo could actually be better for LLM inference than Apple Silicon if it achieves better bandwidth utilization. [1] https://web.archive.org/web/20250516041637/https://www.anand... | |
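To see why achievable rather than advertised bandwidth is what matters: single-stream decode is roughly memory-bound, so tokens/s is about effective bandwidth divided by bytes read per token. A sketch with assumed, illustrative numbers:

```python
# Decode speed estimate for a memory-bound LLM: each generated token streams the
# active weights (KV cache ignored here) through memory once.
def est_decode_tok_s(effective_bw_gbs: float, active_params_b: float, bytes_per_weight: float) -> float:
    gb_per_token = active_params_b * bytes_per_weight  # GB read per generated token
    return effective_bw_gbs / gb_per_token

# Hypothetical 70B dense model at 4-bit (~0.5 bytes/weight):
print(est_decode_tok_s(250, 70, 0.5))  # ~7 tok/s at 250 GB/s actually achieved
print(est_decode_tok_s(400, 70, 0.5))  # ~11 tok/s if the full spec-sheet 400 GB/s were usable
```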
| ▲ | vFunct 9 days ago | parent | prev [-] | | The game-changing technology that'll enable full 1TB+ LLM models for cheap is SanDisk's High Bandwidth Flash. Expect devices with it in about 3-4 years, maybe even in cellphones. | |
| ▲ | jauntywundrkind 9 days ago | parent [-] | | I'm crazy excited for High Bandwidth Flash, really hope they pull it off. There is a huge caveat: only having a couple hundred or thousand r/w cycles before your multi-$k accelerator stops working!! A pretty big constraint! But as long as you are happy to keep running the same model, the wins here for large capacity and high bandwidth are sick! And the affordability could be exceptional! (If you can afford to make flash with a hundred or so channels at a decent price!) |
|
|
|
| ▲ | Uehreka 10 days ago | parent | prev | next [-] |
| I was talking about this in another comment, and I think the big issue at the moment is that a lot of the local models seem to really struggle with tool calling. Like, just straight up can’t do it even though they’re advertised as being able to. Most of the models I’ve tried with Goose (models which say they can do tool calls) will respond to my questions about a codebase with “I don’t have any ability to read files, sorry!” So that’s a real brick wall for a lot of people. It doesn’t matter how smart a local model is if it can’t put that smartness to work because it can’t touch anything. The difference between manually copy/pasting code from LM Studio and having an assistant that can read and respond to errors in log files is light years. So until this situation changes, this asterisk needs to be mentioned every time someone says “You can run coding models on a MacBook!” |
| |
| ▲ | com2kid 9 days ago | parent | next [-] | | > Like, just straight up can’t do it even though they’re advertised as being able to. Most of the models I’ve tried with Goose (models which say they can do tool calls) will respond to my questions about a codebase with “I don’t have any ability to read files, sorry!” I'm working on solving this problem in two steps. The first is a library, prefilled-json, that lets small models properly fill out JSON objects. The second is an unpublished library called Ultra Small Tool Call that presents tools in a way small models can understand and basically walks the model through filling out the tool call with the help of prefilled-json. It'll combine a number of techniques, including tool-call RAG (pulling in tool definitions via RAG) and, honestly, just not throwing entire JSON schemas at the model, instead using context engineering to keep the model focused. IMHO the better solution for local, on-device workflows would be someone training a custom small-parameter model that just determines whether a tool call is needed and, if so, which tool. | |
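A minimal sketch of the prefilled-skeleton idea described above, not the actual prefilled-json or Ultra Small Tool Call APIs; `model_complete` is a hypothetical stand-in for a local model call:

```python
import json

def model_complete(prompt: str, stop: str) -> str:
    # Hypothetical stand-in for a local model call (llama.cpp server, Ollama, etc.);
    # hard-coded here so the sketch runs on its own.
    return '"src/main.py"'

def fill_tool_call(tool_name: str, schema: dict) -> dict:
    """The harness emits the JSON structure itself; the model only supplies leaf values."""
    args = {}
    for key in schema["properties"]:
        # The model sees the object opened up to the value position plus a stop string,
        # so it cannot derail the overall structure.
        prompt = f'{{"tool": "{tool_name}", "arguments": {{"{key}": '
        raw = model_complete(prompt, stop=",")
        args[key] = json.loads(raw)  # a real harness would validate against the schema and retry
    return {"tool": tool_name, "arguments": args}

read_file_schema = {"properties": {"path": {"type": "string"}}}
print(fill_tool_call("read_file", read_file_schema))
# -> {'tool': 'read_file', 'arguments': {'path': 'src/main.py'}}
```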
| ▲ | jauntywundrkind 10 days ago | parent | prev | next [-] | | Agreed that this is a huge limit. There are actually a lot of examples of "tool calling", but it's all bespoke code-it-yourself: very few of these systems have MCP integration. I have a ton of respect for SGLang as a runtime, and I'm hoping something can be done there: https://github.com/sgl-project/sglang/discussions/4461 . As noted in that thread, it is really great that Qwen3-Coder has a tool parser built in; hopefully it can be some kind of useful reference/start. https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct/b... | |
| ▲ | 10 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | wizee 9 days ago | parent | prev | next [-] | | Qwen 3 Coder 30B-A3B has been pretty good for me with tool calling. | |
| ▲ | mxmlnkn 9 days ago | parent | prev [-] | | This resonates. I have finally started looking into local inference a bit more recently. I have tried Cursor a bit, and whatever model it used worked somewhat alright for generating a starting point for a feature, doing a large refactor, and breaking through writer's block. It was fun to see it behave similarly to my workflow by creating step-by-step plans before doing work, then searching for functions to find locations and change stuff. I feel like one could learn structured-thinking approaches from reading these agentic AI logs. There were lots of issues with both of these tasks, though, e.g., many missed locations in the refactor and spuriously deleted or indented code, but it was a starting point and somewhat workable with git. The refactoring usage caused me to hit the free token limits in only two days. Based on the usage, it burned millions of tokens in minutes, rarely less than 100K tokens per request, and therefore probably needs a similarly large context length for best performance. I wanted to replicate this with VSCodium and Cline or Continue, because I want to use it without exfiltrating all my data to megacorps as payment, use it on non-open-source projects, and maybe even use it offline. Having Cursor start indexing everything in the project folder as soon as it starts, including possibly private data, left a bad taste, as useful as it is. But I quickly ran into context-length problems with Cline, and Continue does not seem to work very well. Some models did not work at all; DeepSeek was thinking in loops for hours (default temperature too high, supposedly it should be <0.5). And even after getting tool use to work somewhat with Qwen QwQ 32B Q4, it feels like it does not have a full view of the codebase, even though it has been indexed. For one refactor request mentioning names from the project, it started by doing useless web searches. It might also be a context-length issue. But larger contexts really eat up memory. I am also contemplating a new system for local AI, but it is really hard to decide. You have the choice between fast GPU inference, e.g., an RTX 5090 if you have money, or 1-2 used RTX 3090s, and slower but qualitatively better CPU / unified-memory integrated-GPU inference with systems such as the DGX Spark, the Framework Desktop with AMD Ryzen AI Max, or the Mac Pro systems. Neither is ideal (or cheap), although my problems with context length and low-performing agentic models seem to indicate that the slower but more helpful models on a large unified memory are better for my use case. My use case would mostly be agentic coding. Code completion does not seem to fit me because I find it distracting, and I don't need much boilerplate. It also feels like the GPU is wasted, and local inference might be a red herring altogether. Considering that a batch size of 1 is one of the worst cases for GPU computation and that it would only be used in bursts, any cloud solution will easily be an order of magnitude or two more efficient, if I understand this correctly. Maybe local inference will therefore never fully take off, barring even more specialized hardware or hard privacy requirements, e.g., for companies. Solving that would take something like computing on encrypted data, which seems impossible. Then again, if a batch size of 1 is indeed as bad as I think it is, then maybe simply generate a batch of results in parallel and choose the best of the answers? Maybe this is not a thing because it would increase memory usage even more. | |
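On "larger contexts really eat up memory", a back-of-envelope for KV-cache size; the model shape below is an assumed generic ~30B-class dense model with GQA, not any specific model:

```python
# KV cache = 2 (keys and values) x layers x kv_heads x head_dim x bytes x context length.
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int, ctx_len: int, bytes_per_elem: int = 2) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_len / 2**30

# Assumed shape: 64 layers, 8 KV heads, head_dim 128, fp16 cache.
print(f"{kv_cache_gib(64, 8, 128, 50_000):.1f} GiB at 50k context")    # ~12.2 GiB
print(f"{kv_cache_gib(64, 8, 128, 128_000):.1f} GiB at 128k context")  # ~31.3 GiB
```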
| ▲ | justincormack 9 days ago | parent [-] | | You might end up using batching to run multiple queries or branches of your own in parallel. But yes, as you say, it is very unclear right now. |
|
|
|
| ▲ | wizee 9 days ago | parent | prev | next [-] |
| While cloud models are of course faster and smarter, I've been pretty happy running Qwen 3 Coder 30B-A3B on my M4 Max MacBook Pro. It has been a pretty good coding assistant for me with Aider, and it's also great for throwing code at and asking questions. For coding specifically, it feels roughly on par with SOTA models from mid-late 2024. At small contexts with llama.cpp on my M4 Max, I get 90+ tokens/sec generation and 800+ tokens/sec prompt processing. Even at large contexts like 50k tokens, I still get fairly usable speeds (22 tok/s generation). |
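Plugging the speeds quoted above into a rough latency estimate for a big request; this ignores prompt caching and the fact that prompt processing also slows down at long context, so treat it as optimistic:

```python
# Time for a 50k-token prompt plus a 1k-token answer at the reported M4 Max speeds.
prompt_tokens, output_tokens = 50_000, 1_000
pp_tok_s, gen_tok_s = 800, 22  # reported prompt-processing and long-context generation speeds

total_s = prompt_tokens / pp_tok_s + output_tokens / gen_tok_s
print(f"~{total_s:.0f} s (~{total_s / 60:.1f} min) end to end")  # ~108 s
```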
|
| ▲ | 1oooqooq 10 days ago | parent | prev [-] |
| More interesting is the extent to which Apple convinced people a laptop can replace a desktop or server. Mind-blowing reality distortion field (as will be proven by some twenty comments telling me I'm wrong in 3... 2... 1). |
| |
| ▲ | davidmurdoch 9 days ago | parent | next [-] | | I dropped $4k on an (Intel) laptop a few years ago. I thought it would blow my old 2012 Core i7 out of the water. Editing photos in Lightroom and Photoshop often requires heavy, sustained CPU work, and thermals in laptops are just not a solved problem. People who say laptops are fine replacements for desktops probably don't realize how much, and how quickly, thermals limit heavy multi-core CPU workloads. | |
| ▲ | jki275 9 days ago | parent [-] | | That was true until Apple released the M series laptops. |
| |
| ▲ | bionsystem 10 days ago | parent | prev | next [-] | | I'm a desktop guy considering the switch to a laptop-only setup; what would I miss? | |
| ▲ | kelipso 10 days ago | parent | next [-] | | For $10k, you too can get the power of a $2k desktop and enjoy burning your lap every day, or something like that. If I were to do local compute and wanted to use my laptop, I would only consider a setup where I ssh into my desktop. So I guess the only differences from a SaaS LLM would be privacy and the cool factor. And rate limits, and paying more if you go over, etc. | |
| ▲ | com2kid 9 days ago | parent | next [-] | | $2k laptops nowadays come with 16 cores. They are thermally limited, but they'll get you 60-80% of the perf of their desktop counterparts. The real limit is on the Nvidia cards: they are cut down a fair bit, often with less VRAM until you really go up in price point. They also come with NPUs, but the docs are bad and none of the local LLM inference engines seem to use the NPU, even though they could in theory happily run smaller models. | |
| ▲ | EagnaIonat 9 days ago | parent | prev [-] | | > For $10k, you too can get the power of a $2k desktop Even an M1 MBP with 32GB is pretty impressive for its age, and you can get one for well under $1K second-hand. I have one. I use these models: gpt-oss, llama3.2, deepseek, granite3.3. They all work fine and speed is not an issue. The recent Ollama app means I can do document/image processing with the LLM as well. |
| |
| ▲ | moron4hire 9 days ago | parent | prev | next [-] | | You'll end up with a portable desktop with bad thermals, impacting performance, battery life, and actually-on-the-lap comfort. Bleeding-edge performance laptops can really only manage an hour, max, on battery, making the form factor much more about moving between different pre-planned, desk-oriented work locations. I take my laptop back and forth from home to work. At work, I ban them from in-person meetings because I want people to actually pay attention to the meeting. In both locations where I use the computer, I have a monitor, keyboard, and mouse I'm plugging in via a dock. That makes the built-in battery and I/O redundant. I think I would rather have a lower-powered, high-battery, ultra portable laptop remoting into the desktop for the few times I bring my computer to in-person meetings for demos. I wish the memory bandwidth for eGPUs was better. | | |
| ▲ | aldanor 9 days ago | parent [-] | | Huh? Bleeding-edge laptops can last a lot longer on battery. An M3 16'' MBP definitely lasts a full office day of coding, and twice that if you're just browsing and not doing CPU-intensive stuff. | |
| ▲ | moron4hire 9 days ago | parent [-] | | Even the M4 Max is not "bleeding edge". Apple is doing impressive stuff with energy-efficient compute, but you can't get top-of-the-line raw compute from them for any amount of financial or energy budget. | |
| ▲ | aldanor 9 days ago | parent [-] | | I'm genuinely interested in what kind of work you are doing if bringing an M4 Max is not enough. And what kind of bleeding-edge laptops are we even talking about (link?), and for what purpose? |
|
|
| |
| ▲ | baobun 9 days ago | parent | prev [-] | | Upgradability, repairability, thermals (translating into widely different performance for the same specs), I/O, connectivity. |
| |
| ▲ | jazzypants 9 days ago | parent | prev [-] | | I think this would be more interesting if you were to try to prove yourself correct first. There are extremely few things that I cannot do on my laptop, and I have very little interest in those things. Why should I get a computer that doesn't have a screen? You do realize that, at this point of technological progress, the computer being attached to a keyboard and a screen is the only true distinguishing factor of a laptop, right? | | |
| ▲ | 1oooqooq 6 days ago | parent [-] | | Cool, you can browse the web. That's cool. Just stay out of conversations where you're not an authority. |
|
|