| ▲ | A guide to local coding models (aiforswes.com) |
| 180 points by mpweiher 4 hours ago | 87 comments |
| |
|
| ▲ | simonw 4 hours ago | parent | next [-] |
> I realized I looked at this more from the angle of a hobbyist paying for these coding tools. Someone doing little side projects—not someone in a production setting. I did this because I see a lot of people signing up for $100/mo or $200/mo coding subscriptions for personal projects when they likely don’t need to. Are people really doing that? If that's you, know that you can get a LONG way on the $20/month plans from OpenAI and Anthropic. The OpenAI one in particular is a great deal, because Codex burns through its quota a whole lot more slowly than Claude does. The time to cough up $100 or $200/month is when you've exhausted your $20/month quota and you are frustrated at getting cut off. At that point you should be able to make a responsible decision by yourself. |
| |
| ▲ | kristopolous 5 minutes ago | parent | next [-] | | I use local models + OpenRouter free ones. My monthly spend on AI models is < $1. I'm not cheap, just ahead of the curve. With the collapse in inference costs, everything will end up this way eventually. Also, I've put in my 30 years of tech learning, so I might not need them as much as others do. I'll basically do $ man tool | <how do I do this with the tool>
or even $ cat source | <find the flags and give me some documentation on how to use this>
Things I used to do intensively I now do lazily. | |
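(A concrete version of that pattern, as a sketch: it assumes a CLI such as simonw's `llm`, which treats piped stdin as context and the positional argument as the instruction; the tools and prompts below are only examples.)
$ man rsync | llm "how do I mirror a directory but exclude everything matched by .gitignore?"
$ cat cli.py | llm "find the flags this script accepts and write brief usage notes for each"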
| ▲ | uneekname an hour ago | parent | prev | next [-] | | Yes, we are doing that. These tools help make my personal projects come to life, and the money is well worth it. I can hit Claude Code limits within an hour, and there's no way I'm giving OpenAI my money. | | |
| ▲ | _delirium 43 minutes ago | parent [-] | | As a third option, I've found I can do a few hours a day on the $20/mo Google plan. I don't think Gemini is quite as good as Claude for my uses, but it's good enough and you get a lot of tokens for your $20. Make sure to enable the Gemini 3 preview in gemini-cli though (not enabled by default). |
| |
| ▲ | wyre 3 hours ago | parent | prev | next [-] | | Me. Currently using Claude Max for personal coding projects. I've been on Claude's $20 plan and would run out of tokens. I don't want to give my money to OpenAI. So far these projects have not returned their value back to me, but I am viewing it as an investment in learning best practices with these coding tools. | |
| ▲ | mudkipdev an hour ago | parent | prev | next [-] | | Claude's $20 plan should be renamed to "trial". Try Opus and you will reach your limit in 10 minutes. With Sonnet, if you aren't clearing the context very often, you'll hit it within a few hours. I'm sympathetic to developers using this as their only AI subscription: while I was working on a challenging bug yesterday, I reached the limit before it had even diagnosed the problem and had to switch to another coding agent to take over. I understand you can't expect much from a $20 subscription, but the next jump up costing $80 more is demotivating. | | |
| ▲ | kxrm 16 minutes ago | parent | next [-] | | > Try Opus and you will reach your limit in 10 minutes. That hasn't been true with Opus 4.5. I usually hit my limit only after an hour or so of intense use. | |
| ▲ | lelele an hour ago | parent | prev | next [-] | | > With Sonnet, if you aren't clearing the context very often, you'll hit it within a few hours. Do you mean that users should start a new chat for every new task, to save tokens? Thanks. | | |
| ▲ | jfreds 34 minutes ago | parent [-] | | Short answer is yes. Not only is it more token-friendly and potentially lower latency, it also prevents weird context issues like forgetting Rules, compacting your conversation and missing relevant details, etc. |
| |
| ▲ | bdangubic 36 minutes ago | parent | prev [-] | | the only thing that matters is whether or not you are getting your money’s worth. nothing else matters. if claude is worth $100 or $200 per month to you, it is an easy decision to pay. otherwise stick with $20 or nothing |
| |
| ▲ | ncruces 25 minutes ago | parent | prev | next [-] | | What I find perplexing is the otherwise very respectable people who pay for those subscriptions to produce clearly sub-par work I'm sure they wouldn't have turned in themselves. And when pressed on “this doesn't make sense, are you sure this works?” they ask the model to answer, it gets it wrong, and they leave it at that. | |
| ▲ | bonsai_spool 34 minutes ago | parent | prev | next [-] | | I also pay for the $100 plan, as a researcher in biology dealing with a fair amount of data analysis in addition to bench work. Incidentally, wondering if anyone has seen this approach of asking Claude to manage Codex: https://www.reddit.com/r/codex/comments/1pbqt0v/using_codex_... | |
| ▲ | hamdingers 3 hours ago | parent | prev | next [-] | | And as a hobbyist the time to sign up for the $20/month plan is after you've spent $20 on tokens at least a couple times. YMMV based on the kinds of side projects you do, but it's definitely been cheaper for me in the long run to pay by token, and the flexibility it offers is great. | | |
| ▲ | iOSThrowAway 3 hours ago | parent [-] | | I spent $240 in one week through the API and realized the $20/month was a no-brainer. |
| |
| ▲ | smcleod 2 hours ago | parent | prev | next [-] | | On a $20/mo plan doing any sort of agentic coding you'll hit the 5hr window limits in less than 20 minutes. | | |
| ▲ | simonw 2 hours ago | parent | next [-] | | With Codex it only happened to me once in my 4.5hr session here: https://simonwillison.net/2025/Dec/15/porting-justhtml/ Claude Code is a whole lot less generous though. | |
| ▲ | andix 2 hours ago | parent | prev [-] | | It really depends. When building a lot of new features it happens quite fast. With some attention to context length I was often able to go for over an hour on the $20 Claude plan. If you're doing mostly smaller changes, you can go all day on the $20 Claude plan without hitting the limits. Especially if you need to thoroughly review the AI changes for correctness, instead of relying on automated tests. | | |
| ▲ | allenu 2 hours ago | parent [-] | | I find that I use it on isolated changes where Claude doesn’t really need to access a ton of files to figure out what to do and I can easily use it without hitting limits. The only time I hit the 4-5 hour limit is when I’m going nuts on a prototype idea and vibe coding absolutely everything, and usually when I hit the limit, I’m pretty mentally spent anyway so I use it as a sign to go do something else. I suppose everyone has different styles and different codebases, but for me it's pretty easy to stay under the limit, so it's hard to justify $100 or $200 a month. |
|
| |
| ▲ | satvikpendem 2 hours ago | parent | prev | next [-] | | > If that's you, know that you can get a LONG way on the $20/month plans from OpenAI and Anthropic. > The time to cough up $100 or $200/month is when you've exhausted your $20/month quota and you are frustrated at getting cut off. At that point you should be able to make a responsible decision by yourself. These are the same people, by and large. What I have seen is users who purely vibe code everything, run into the limits of the $20/mo plans, and pay up for the more expensive ones. Essentially they're trading learning to code (and in some cases time, since it's not always faster to vibe code than to do it yourself) for money. | | |
| ▲ | maddmann 2 hours ago | parent | next [-] | | If this is the new way code is written then they are arguably learning how to code. Jury is still out though, but I think you are being a bit dismissive. | |
| ▲ | cmrdporcupine 44 minutes ago | parent | prev [-] | | I've been a software developer for 25 years, and 30ish years in the industry, and have been programming my whole life. I worked at Google for 10 of those years. I work in C++ and Rust. I know how to write code. I don't pay $100 to "vibe code" and "learn to program" or "avoid learning to program." I pay $100 so I can get my personal (open source) projects done faster and more completely without having to hire people with money I don't have. |
| |
| ▲ | shepherdjerred 23 minutes ago | parent | prev | next [-] | | I pay $200/mo just for Claude Code. I used Cursor for a while and used something like $600 in credits in Nov. | |
| ▲ | __mharrison__ 3 hours ago | parent | prev | next [-] | | I'm convinced the $20 ChatGPT Plus plan is the best plan right now. You can use Codex with GPT-5.2. I've been very impressed with this. (I also have the same MBP the author has and have used Aider with Qwen locally.) | | |
| ▲ | andix 2 hours ago | parent | next [-] | | From my personal experience it's around 50:50 between Claude and Codex. Some people strongly prefer one over the other; I haven't figured out why yet. I just can't accept how slow Codex is, and that you can't really use it interactively because of that. I prefer to just watch Claude Code work and stop it once I don't like the direction it's taking. | | |
| ▲ | asabla 2 hours ago | parent [-] | | From my point of view, you're choosing between instruction following and more creative solutions. Codex models tend to be extremely good at following instructions, to the point that they won't do any additional work unless you ask them to; GPT-5.1 and GPT-5.2 are a little bit more creative. Models from Anthropic, on the other hand, are a lot more loosey-goosey about instructions, and you need to keep an eye on them much more often. I'm using models from both providers interchangeably all the time, depending on the task at hand. No real preference for one over the other, they're just specialized for different things |
| |
| ▲ | baq 3 hours ago | parent | prev [-] | | bit the bullet this week and paid for a month of claude and a month of chatgpt plus. claude seems to have much lower token limits, both aggregate and rate-limited and GPT-5.2 isn't a bad model at all. $20 for claude is not enough even for a hobby project (after one day!), openai looks like it might be. | | |
| ▲ | InsideOutSanta 2 hours ago | parent [-] | | I feel like a lot of the criticism the GPT-5.x models receive only applies to specific use cases. I prefer these models over Anthropic's because they are less creative and less likely to take liberties in interpreting my prompts. Sonnet 4.5 is great for vibe coding. You can give it a relatively vague prompt and it will take the initiative to interpret it in a reasonable way. This is good for non-programmers who just want to give the model a vague idea and end up with a working, sensible product. But I usually do not want that; I do not want the model to take liberties and be creative. I want the model to do precisely what I tell it and nothing more. In my experience, the GPT-5.x models are a better fit for that way of working. |
|
| |
| ▲ | haritha-j 2 hours ago | parent | prev | next [-] | | I’ve been using VS Code Copilot Pro for a few months and never really had any issues; once you hit the limit for one model, you generally still have a bunch more models to choose from. Unless I were vibe coding massive amounts of code without reviewing or testing it, it’s hard to imagine I would run out of all the available Pro models. | |
| ▲ | minimaxir 2 hours ago | parent | prev | next [-] | | Claude 4.5 Opus on Claude Code's $20 plan is funny because you get about 2-3 prompts on any nontrivial task before you hit the session limit. If I wasn't only using it for side projects I'd have to cough up the $200 out of necessity. | |
| ▲ | cmrdporcupine an hour ago | parent | prev | next [-] | | Codex at $20 is a good deal, but they have nothing in between $20 and $200. The $20 Anthropic plan is only enough to whet my appetite; I can't finish anything. I pay for the $100 Anthropic plan, and keep a $20 Codex plan in my back pocket for getting it to do additional review and analysis on top of what Opus cooks up. And I have a few dollars of misc credits with DeepSeek and Kimi K2 services, mainly to try them out, for tasks that aren't as complicated, and for writing my own agent tools. $20 Claude doesn't go very far. | |
| ▲ | jwpapi 2 hours ago | parent | prev [-] | | Not everybody is broke. |
|
|
| ▲ | raw_anon_1111 2 hours ago | parent | prev | next [-] |
I don’t think I’ve ever read an article where the reason I knew the author was completely wrong about their assumptions was that they admitted it themselves and left the bad assumptions in the article. The above paragraph is meant to be a compliment. But justifying it based on keeping his Mac for five years is crazy. At the rate things are moving, coding models are going to get so much better in a year that the gap is only going to widen. Also, in the case of his father, who works for a company that must use a self-hosted model (or anyone at a company with that requirement), would a $10K Mac Studio with 512GB of RAM be worth it? What about two Mac Studios connected over Thunderbolt using the newly released support in macOS 26? https://news.ycombinator.com/item?id=46248644 |
|
| ▲ | simonw 3 hours ago | parent | prev | next [-] |
This story talks about MLX and Ollama but doesn't mention LM Studio - https://lmstudio.ai/ LM Studio can run both MLX and GGUF models, but does so from an Ollama-style (though more full-featured) macOS GUI. They also have a very actively maintained model catalog at https://lmstudio.ai/models |
| |
| ▲ | ZeroCool2u 3 hours ago | parent | next [-] | | LMStudio is so much better than Ollama it's silly it's not more popular. | | |
| ▲ | thehamkercat 3 hours ago | parent [-] | | LM Studio is not open source, though; Ollama is. But people should use llama.cpp instead | | |
| ▲ | smcleod 2 hours ago | parent | next [-] | | I suspect Ollama is at least partly moving away from open source as they look to raise capital; when they released their replacement desktop app they did so as closed source. You're absolutely right that people should be using llama.cpp - not only is it truly open source, but it's significantly faster, has better model support and many more features, is better maintained, and its development community is far more active. | |
| ▲ | nateb2022 an hour ago | parent | prev | next [-] | | > but people should use llama.cpp instead MLX is a lot more performant than Ollama and llama.cpp on Apple Silicon, comparing both peak memory usage and tok/s output. edit: LM Studio benefits from MLX optimizations when running MLX-compatible models. | |
| ▲ | behnamoh 2 hours ago | parent | prev [-] | | > LMStudio is not open source though, ollama is and why should that affect usage? it's not like ollama users fork the repo before installing it. | | |
|
| |
| ▲ | midius 3 hours ago | parent | prev | next [-] | | Makes me think it's a sponsored post. | | |
| ▲ | Cadwhisker 3 hours ago | parent [-] | | LMStudio? No, it's the easiest way to run an LLM locally that I've seen, to the point where I've stopped looking at other alternatives. It's cross-platform (Win/Mac/Linux), detects the most appropriate GPU in your system, and tells you whether the model you want to download will run within its RAM footprint. It lets you set up a local server that you can access through API calls as if you were remotely connected to an online service. | | |
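(To make that last point concrete, a minimal sketch: it assumes LM Studio's local server is running on its default port, 1234, with the usual OpenAI-compatible chat endpoint, and that a model is already loaded; the model name below is just an example.)
$ curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "qwen2.5-coder-7b-instruct", "messages": [{"role": "user", "content": "Write a shell one-liner that counts lines of Go code in this directory."}]}'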
| ▲ | vunderba 3 hours ago | parent [-] | | FWIW, Ollama already does most of this: it's cross-platform and sets up a local API server. The tradeoff is a somewhat higher learning curve, since you need to manually browse the model library and choose the model/quantization that best fits your workflow and hardware. OTOH, it's also open source, unlike LM Studio, which is proprietary. | | |
| ▲ | randallsquared 2 hours ago | parent [-] | | I assumed from the name that it only ran llama-derived models, rather than whatever is available at huggingface. Is that not the case? | | |
|
|
| |
| ▲ | evacchi 2 hours ago | parent | prev | next [-] | | ramalama.ai is worth mentioning too | |
| ▲ | thehamkercat 3 hours ago | parent | prev [-] | | I think you should mention that LM Studio isn't open source. I mean, what's the point of using local models if you can't trust the app itself? | | |
| ▲ | behnamoh 2 hours ago | parent | next [-] | | > I mean, what's the point of using local models if you can't trust the app itself? and you think ollama doesn't do telemetry/etc. just because it's open source? | | | |
| ▲ | satvikpendem 2 hours ago | parent | prev [-] | | Depends what people use them for, not every user of local models is doing so for privacy, some just don't like paying for online models. | | |
| ▲ | thehamkercat 2 hours ago | parent [-] | | Most LLM sites are now offering free plans, and they are usually better than what you can run locally, so I think people are running local models for privacy 99% of the time |
|
|
|
|
| ▲ | Workaccount2 3 hours ago | parent | prev | next [-] |
I'm curious what the mental calculus was that concluded a $5k laptop would benchmark competitively against SOTA models for the next 5 years. Somewhat comically, the author seems to have made it about 2 days. Out of 1,825. I think the real story is the folly of fixating your eyes on shiny new hardware and searching for justifications. I'm too ashamed to admit how many times I've done that dance... Local models are purely for fun, hobby, and extreme privacy paranoia. If you really want privacy beyond a ToS guarantee, just lease a server (I know they can still spy on that, but it's a threshold). |
| |
| ▲ | ekjhgkejhgk 3 hours ago | parent | next [-] | | I agree with everything you said, and yet I cannot help but respect a person who wants to do it himself. It reminds me of the hacker culture of the 80s and 90s. | | |
| ▲ | slicktux 2 hours ago | parent [-] | | Agreed.
Everyone seems to shun the DIY hacker nowadays, saying things like “I’ll just pay for it”.
It’s not just about NOT paying for it; it’s about doing it yourself and learning how, so that you can pass the knowledge on and someone else can do it too. | |
| ▲ | davidw 2 hours ago | parent [-] | | I loathe the idea of being beholden to large corporations for what may be a key part of this job in the future. |
|
| |
| ▲ | satvikpendem 2 hours ago | parent | prev | next [-] | | > I'm curious what the mental calculus was that a $5k laptop would competitively benchmark against SOTA models for the next 5 years was. Well, the hardware remains the same but local models get better and more efficient, so I don't think there is much difference between paying 5k for online models over 5 years vs getting a laptop (and well, you'll need a laptop anyway, so why not just get a good enough one to run local models in the first place?). | | |
| ▲ | brulard an hour ago | parent [-] | | If you have inference running on this new 128GB RAM Mac, wouldn't you still need another separate machine to do the manual work (running an IDE, browsers, toolchains, builders/bundlers, etc.)? I can't imagine you'd have any meaningful RAM available while LLM models are running. |
| |
| ▲ | smcleod 2 hours ago | parent | prev [-] | | My 2023 MacBook Pro (M2 Max) is coming up on 3 years old, and I can run models locally that are arguably "better" than what was considered SOTA about 1.5 years ago. This is of course not an exact comparison, but it's close enough to give some perspective. |
|
|
| ▲ | NelsonMinar 3 hours ago | parent | prev | next [-] |
| "This particular [80B] model is what I’m using with 128GB of RAM". The author then goes on to breezily suggest you try the 4B model instead of you only have 8GB of RAM. With no discussion of exactly what a hit in quality you'll be taking doing that. |
|
| ▲ | cloudhead 4 hours ago | parent | prev | next [-] |
In my experience the latest models (Opus 4.5, GPT-5.2) are _just_ starting to keep up with the problems I'm throwing at them, and I really wish they did a better job, so I think we're still 1-2 years away from local models not wasting developer time outside of CRUD web apps. |
| |
| ▲ | OptionOfT 3 hours ago | parent [-] | | Eh, these things are trained on existing data. The further you are from that the worse the models get. I've noticed that I need to be a lot more specific in those cases, up to the point where being more specific is slowing me down, partially because I don't always know what the right thing is. | | |
| ▲ | cloudhead 2 hours ago | parent [-] | | For sure, and I guess that's kind of my point -- if the OP says local coding models are now good enough, then it's probably because he's using things that are towards the middle of the distribution. |
|
|
|
| ▲ | andix 2 hours ago | parent | prev | next [-] |
I wouldn't run local models on the development PC. Instead, run them on a box in another room or in another location. Less fan noise, and it won't affect the performance of the PC you're working on. Latency is not an issue at all for LLMs; even a few hundred ms won't matter. Running them on the development machine itself doesn't make a lot of sense to me, except when working offline while traveling. |
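(A sketch of the usual wiring for that setup: it assumes the remote box runs an OpenAI-compatible server such as llama.cpp's llama-server or LM Studio, and that your client honors the standard OpenAI environment variables; the official OpenAI SDKs do, but check your particular coding agent. The address and key are made up.)
$ export OPENAI_BASE_URL="http://192.168.1.50:8080/v1"  # the box in the other room
$ export OPENAI_API_KEY="local"                         # many local servers ignore the key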
| |
| ▲ | snoman 2 hours ago | parent [-] | | Less of a concern these days with hardware like a Mac Studio or Nvidia dgx which are accessible and aren’t noisy at all. |
|
|
| ▲ | maranas 3 hours ago | parent | prev | next [-] |
| Cline + RooCode and VSCode already works really well with local models like qwen3-coder or even the latest gpt-oss. It is not as plug-and-play as Claude but it gets you to a point where you only have to do the last 5% of the work |
| |
| ▲ | rynn an hour ago | parent [-] | | What are you working on that you’ve had such great success with gpt-oss? I didn’t try it long because I got frustrated waiting for it to spit out wrong answers. But I’m open to trying again. | | |
| ▲ | maranas 37 minutes ago | parent [-] | | I use it to build some side-projects, mostly apps for mobile devices. It is really good with Swift for some reason. I also use it to start off MVP projects that involve both frontend and API development but you have to be super verbose, unlike when using Claude. The context window is also small, so you need to know how to break it up in parts that you can put together on your own |
|
|
|
| ▲ | SamDc73 2 hours ago | parent | prev | next [-] |
If privacy is your top priority, then sure, spend a few grand on hardware and run everything locally. Personally, I run a few local models (around 30B params is the ceiling on my hardware at 8k context), and I still keep a $200 ChatGPT subscription because I'm not spending $5-6k just to run models like K2 or GLM-4.6 (they’re usable, but clearly behind OpenAI, Claude, or Gemini for my workflow). I got excited about aescoder-4b (a model that specializes in web design only) after its DesignArena benchmarks, but it falls apart on large codebases and is mediocre at Tailwind. That said, I think there’s real potential in small, highly specialized models: a 4B model trained only for FastAPI, Tailwind, or a single framework. Until that actually exists and works well, I’m sticking with remote services. |
| |
|
| ▲ | nzeid 4 hours ago | parent | prev | next [-] |
| I appreciate the author's modesty but the flip-flopping was a little confusing. If I'm not mistaken, the conclusion is that by "self-hosting" you save money in all cases, but you cripple performance in scenarios where you need to squeeze out the kind of quality that requires hardware that's impractical to cobble together at home or within a laptop. I am still toying with the notion of assembling an LLM tower with a few old GPUs but I don't use LLMs enough at the moment to justify it. |
| |
| ▲ | a_victorp 3 hours ago | parent [-] | | If you ever do it, please make a guide! I've been toying with the same notion myself | | |
| ▲ | suprjami 3 hours ago | parent | next [-] | | If you want to do it cheap, get a desktop motherboard with two PCIe slots and two GPUs. Cheap tier is dual 3060 12G. Runs 24B Q6 and 32B Q4 at 16 tok/sec. The limitation is VRAM for large context: 1,000 lines of code is ~20k tokens, and 32k tokens is ~10G of VRAM. Expensive tier is dual 3090 or 4090 or 5090. You'd be able to run 32B Q8 with large context, or a 70B Q6. For software, llama.cpp and llama-swap. GGUF models from HuggingFace. It just works. If you need more than that, you're into enterprise hardware with 4+ PCIe slots, which costs as much as a car and has the power consumption of a small country. You're better off just paying for Claude Code. | |
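(For the llama.cpp route, a hypothetical invocation of its llama-server to make this concrete: the model file is just an example, and flag names should be double-checked against your build. llama-swap would then sit in front of this and swap models per request.)
$ # offload all layers, split weights evenly across both GPUs, 32k context
$ llama-server -m Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf \
    --n-gpu-layers 99 --tensor-split 1,1 \
    --ctx-size 32768 --port 8080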
| ▲ | le-mark an hour ago | parent [-] | | I was going to post snark such as “you could use the same hardware to also lose money mining crypto”, then realized there are a lot of crypto miners out there that could probably make more money serving tokens than they do on crypto. Does such a marketplace exist? | |
| |
| ▲ | satvikpendem 2 hours ago | parent | prev [-] | | Jeff Geerling has (not quite but sort of) guides: https://news.ycombinator.com/item?id=46338016 |
|
|
|
| ▲ | ineedasername an hour ago | parent | prev | next [-] |
I’ve been using Qwen3 Coder 30B quantized down to IQ3_XSS to fit in < 16GB of VRAM. Blazing fast, 200+ tokens per second on a 4080. I don’t ask for anything complicated, but one-off scripts to do something I’d normally have to do manually by hand or spend an hour writing the script for myself? Absolutely. These are no more than a few dozen lines I can easily eyeball and verify with confidence; that’s done in under 60 seconds and leaves Claude Code with plenty of quota for significant tasks. |
|
| ▲ | threethirtytwo an hour ago | parent | prev | next [-] |
| I hope hardware becomes so cheap local models become the standard. |
| |
| ▲ | rynn an hour ago | parent [-] | | It will be like the rest of computing, some things will move to the edge and others stay on the cloud. Best choice will depend on use cases. |
|
|
| ▲ | chrisischris 2 hours ago | parent | prev | next [-] |
The resource contention point is real: running local models alongside Docker and your actual dev environment is where this can fall apart. One comment here mentions running models on a separate box to avoid impacting your dev machine. That's the right idea, but while idle GPUs are everywhere, the infrastructure to actually tap into them is what's missing. Currently building something along these lines. https://sporeintel.com/ |
| |
|
| ▲ | ardme 2 hours ago | parent | prev | next [-] |
Isn't the math better if you buy Nvidia stock with what you'd pay for all the hardware and then just pay $20 a month for Codex out of the annual returns? |
| |
|
| ▲ | jollymonATX 23 minutes ago | parent | prev | next [-] |
This is not really a guide to local coding models, which is kinda disappointing. I would have been interested in a review of all the cutting-edge open-weight models in various applications. |
|
| ▲ | Bukhmanizer an hour ago | parent | prev | next [-] |
Are people really so naive as to think that the price/quality of proprietary models is going to stay the same forever? I would guess that sometime in the next 2-3 years all of the major AI companies are going to raise prices and enshittify their models to the point where running local models is really going to be worth it. |
|
| ▲ | freeone3000 3 hours ago | parent | prev | next [-] |
What are you doing with these models that you’re going above the free tier on Copilot? |
| |
| ▲ | satvikpendem 2 hours ago | parent [-] | | Some just like privacy and working without internet. I, for example, travel regularly by train and like to have a usable laptop when there isn't always good WiFi. | |
|
|
| ▲ | BoredPositron an hour ago | parent | prev | next [-] |
Not worth it yet. I run a 6000 Blackwell for image and video generation, but local coding models just aren't on the same level as the closed ones. I grabbed Gemini for $10/month during Black Friday, GPT for $15, and Claude for $20. Comes out to $45 total, and I never hit the limits since I toggle between the different models. Plus it has the benefit of not dumping too much money into one provider or hyper-focusing on one model. That said, as soon as an open-weight model gets to the level of the closed ones we have now, I'll switch to local inference in a heartbeat. |
|
| ▲ | holyknight 3 hours ago | parent | prev [-] |
your premise would've been right if memory prices hadn't skyrocketed like 400% in like 2 weeks. |