| ▲ | reisse 9 hours ago |
| > They will be, and that moment is not that far off.
It's here, right now. I'm running quantized Qwen and Gemma on a decent but three-year-old gaming rig (think RTX 3080 12GB and 32 GB RAM). Yes, it's slow and it has a small context window. But it can (given a proper harness) run through my trip photos and categorize them. It can OCR receipts and summarize spending. It can answer simple questions, analyze code, and even write code when little context is required. I could probably get a half-decent autocomplete out of it if I bothered with VS Code integration. "128 GB VRAM on a MacBook Pro or a Strix Halo" is already a minimum viable setup for agentic coding, I think.
> And then we'll have the equilibrium we already have with the "classic cloud": you either self-host or pay for flexibility and speed.
Currently, it works exactly the other way around. The cloud versions are orders of magnitude cheaper than self-hosting, because sharing lets providers utilize their servers much more efficiently. A company can spend half a million bucks on a rig running GLM 5.1 and get data security, flexibility, and freedom from censorship, but oh, it's so expensive compared to Anthropic's per-seat plans. |
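To make "proper harness" concrete, here is a minimal sketch of the kind of local setup I mean, assuming llama.cpp and a quantized GGUF checkpoint; the model file, quant level, and prompt are placeholders rather than recommendations:

    # Build llama.cpp, then run a quantized model entirely on the GPU.
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build && cmake --build build --config Release

    # -ngl 99 offloads all layers to the GPU; -c sets the context window.
    ./build/bin/llama-cli \
      -m ./models/qwen-7b-instruct-q4_k_m.gguf \
      -ngl 99 -c 8192 \
      -p "Categorize this receipt OCR dump by spending category: ..."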
|
| ▲ | digitaltrees 7 hours ago | parent | next [-] |
| I built my own IDE and run my own model specifically to have private agentic coding. I can still access model APIs, but I can be purely local if I want to. It’s amazing. |
|
| ▲ | datadrivenangel 8 hours ago | parent | prev | next [-] |
| In my experience, once you get to ~30 GB of RAM for a model like Gemma 4, the rest of the 128 GB of memory is simply nice to have. The speed and costs are what make it tough, though: it's slower and more expensive than the same model served on a big accelerator card, and it's going to be worse than a frontier model. |
| |
| ▲ | digitaltrees 7 hours ago | parent [-] | | I wonder if it really needs to be worse. I am playing with the idea of fine-tuning a model on my exact stack and coding patterns. I suspect I could get better performance by training “taste” into a model rather than breadth. | | |
| ▲ | andy_ppp 3 hours ago | parent | next [-] | | Fine-tuning these models (at least with PPO or equivalent) requires even more VRAM than inference does, potentially 2-3 times more. | |
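The usual workaround is parameter-efficient fine-tuning: LoRA trains a small adapter while the base weights stay frozen, so memory use stays much closer to inference needs. A rough sketch using mlx-lm's LoRA tooling (Apple silicon only; the model name, data path, and hyperparameters are placeholders):

    # Train a LoRA adapter on local examples; expects train.jsonl and
    # valid.jsonl inside ./my-stack-data. Base weights are not updated.
    pip install mlx-lm
    mlx_lm.lora \
      --model mlx-community/Qwen2.5-Coder-7B-Instruct-4bit \
      --train --data ./my-stack-data \
      --iters 600 --batch-size 1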
| ▲ | epicureanideal 3 hours ago | parent | prev [-] | | I also wonder about JS-only, Python-only, etc. models. Maybe the future is a selection of local models trained on specific stacks? | | |
| ▲ | andy_ppp 3 hours ago | parent [-] | | These models' ability to generalise at coding will likely get worse if you remove high-quality training data like all of Python. |
|
|
|
|
| ▲ | DrewADesign 4 hours ago | parent | prev | next [-] |
| Multiple gazillion-dollar companies each seem to be spending to ensure that they alone dominate pretty much all knowledge work, with customers eating up their tokens like Cookie Monster. I wonder if any of them could survive as LLM providers if they not only failed to do that, but the entire industry ended up selling what the current Cookie Monster would call a “sometimes snack,” for very special occasions? |
|
| ▲ | winocm 7 hours ago | parent | prev | next [-] |
| Perhaps I am the odd one out here, but a small part of me wants to see what happens when you run a proprietary SOTA model on a laptop. |
| |
| ▲ | amelius 2 minutes ago | parent | next [-] | | You burn your lap? | |
| ▲ | pianopatrick 3 hours ago | parent | prev | next [-] | | I'm testing something like this right now, just to see what happens. I have an old laptop with 4 GB of RAM. I attached a USB drive holding the Gemma 4 31B model (which is 32.6 GB). The laptop is running llama.cpp and trying to respond to a prompt by streaming the model from disk. The USB drive light is flickering, showing something is happening. It's been about 8 hours since I entered the prompt and I've gotten about 10 tokens back so far. I'm going to leave it running overnight and see what happens. | |
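For the curious, the invocation itself is nothing exotic; llama.cpp mmaps the GGUF by default, so the weights page in from the USB drive on demand (the file name below is approximate):

    # Weights are mmap'd rather than loaded up front: with 4 GB of RAM,
    # nearly every token forces pages to be re-read from the USB drive.
    ./llama-cli -m /mnt/usb/gemma-4-31b.gguf -c 512 -n 64 -p "Hello"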
| ▲ | reisse 7 hours ago | parent | prev | next [-] | | Nothing special? I mean, the inference engine might need some tweaks to support whatever compute is available. But then, if you add a few terabytes of disk for swap and swap the RAM for bigger sticks where possible, it should work? Slowly, of course, but there's no reason it shouldn't. | | |
| ▲ | reverius42 6 hours ago | parent [-] | | The big difference will be measuring seconds per token instead of tokens per second. | | |
| |
| ▲ | yfw 7 hours ago | parent | prev | next [-] | | You can if you have enough RAM slots? | |
| ▲ | SilentM68 3 hours ago | parent | prev [-] | Not sure if this is exactly the scenario you envision, but I run ComfyUI on an Acer Helios 300 laptop from four years ago. It has 16 GB RAM and an NVIDIA GeForce RTX 2060 with 6144 MiB of VRAM, and I have generated a few images using the "NetaYumev35_pretrained_all_in_one.safetensors" checkpoint at 10.6 GB, well beyond the 6 GB capacity of the RTX 2060. That said, it takes more than 10 minutes to complete the task, and I have to turn off or hibernate all other apps and browser tabs. If I don't, the laptop's fans spin up like an airplane propeller. It's worth mentioning that I've tried to do this with other UIs and they all fail with some error or another, usually an out-of-VRAM issue; I've only gotten it to work with ComfyUI. I use an anaconda environment on Linux (though I would have preferred a "uv" environment) and automate the startup with the following script (start_comfy.sh) rather than activating the environment manually from the terminal:

    #!/bin/bash
    #
    # temporary shell version
    eval "$(conda shell.bash hook)"
    conda activate comfy-env
    comfy launch -- --lowvram --cpu-vae

Here are some of the images:
https://imgbox.com/nqjYhdx3
https://imgbox.com/93vSWFic
https://imgbox.com/qs1898dz
I'm hesitant to increase the sizes of the renders, as that will surely stress my laptop's components. | | |
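Since I mentioned preferring uv: here is the rough equivalent I have sketched but not battle-tested (it assumes comfy-cli installs cleanly into the venv and that ComfyUI itself is already set up where comfy-cli expects it):

    #!/bin/bash
    # uv-based version of start_comfy.sh (untested sketch)
    uv venv ~/comfy-env
    source ~/comfy-env/bin/activate
    uv pip install comfy-cli
    comfy launch -- --lowvram --cpu-vae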
| ▲ | t_mahmood an hour ago | parent [-] | | I'm not running local for exactly the same reason: to avoid stressing my components. It seems we're in for a long haul with this AI bubble (can't wait for it to pop), so I need to make sure I survive this madness; I certainly can't afford to replace anything right now. |
|
|
|
| ▲ | antidamage 6 hours ago | parent | prev | next [-] |
| This is my exact setup as well, and dear lord, Gemma is absolutely batshit insane. I'm trying to get a self-reflection and confidence loop going now, but it does feel like the bottleneck isn't the local resources, it's the limits of the training. Dedicated coding or dedicated real-world-task models would be a good optimisation. |
|
| ▲ | yieldcrv 5 hours ago | parent | prev [-] |
I need to see these proper harnesses. I tried oMLX and OpenCode a few weeks ago, and the 65k context window was useless: it tried to analyze a very small codebase before going fully agentic and ran out of context immediately. I don't have time to tweak 1,000 permutations of settings just to re-prove that it's not as smart as Opus 4.6. I need out-of-the-box multimodal behavior as simple as typing claude on the command line, and it's so not there yet. But I'm open to seeing what people's workflows are. |
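For what it's worth, the 65k ceiling is often a serving-side default rather than the model's hard limit. Something like the following raises it, assuming a llama.cpp-style server and weights actually trained for the longer window (the model file is a placeholder):

    # OpenAI-compatible local server with a 128k context window; the KV
    # cache grows with the context and eats RAM/VRAM accordingly.
    ./llama-server -m qwen-coder-32b-q4_k_m.gguf -c 131072 --port 8080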
| |
| ▲ | phamilton 4 hours ago | parent | next [-] | | I'm running opencode with qwen3.6-35b-a3b at a 3-bit quant, plus qwen3.5-0.8b for context compaction. I run with 128k context. It's usable. I set it loose on the Postgres codebase and told it to find or build a performance benchmark for the bloom filter index, then identify a performance improvement. It took a long time (overnight), but it eventually presented an alternate hashing algorithm with experimental data on false-positive rate, insertion speed, and lookup speed. There wasn't a clear winner, but it was a reasonable find with rigorous data. | | |
| ▲ | Balinares an hour ago | parent [-] | | Do you encounter looping issues at such low quants? How do you deal with those? |
| |
| ▲ | cyberax 15 minutes ago | parent | prev | next [-] | | I'm playing with a tape drive for backups, so I asked a local model to rewrite LTFS (https://github.com/LinearTapeFileSystem/ltfs) in Go. I gave it the reference C implementation and the LTFS spec from SNIA, and asked it to use the C implementation to verify the correctness of the Go code. LTFS is a pretty straightforward spec, so it made a very reasonable port within about 2 days. It's now working on implementing the iSCSI initiator (client) to speak with my tape drive directly, without involving the kernel. Edit: the model is Qwen3.6-35B | |
| ▲ | nullsanity 5 hours ago | parent | prev [-] | | Hey man, you can just say "I'm lazy, so I'm staying with the cloud. If I wanted to use my brain, I wouldn't be using AI, gosh" - it's much shorter. |
|