| ▲ | Buy a Faster CPU (blog.howardjohn.info) |
| 54 points by ingve 6 hours ago | 78 comments |
| |
|
| ▲ | avidiax 3 hours ago | parent | next [-] |
Employers, even the rich FANG types, are quite penny-wise and pound-foolish when it comes to developer hardware: limiting the number and size of monitors, putting speedbumps (like assessments or doctor's notes) on ergo accessories, requiring special approval for powerful hardware, requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation. To be fair, I know plenty of people who would order the highest-spec MacBook just to do web development and open 500 Chrome tabs. There is abuse. But that abuse caps out at a few thousand dollars in laptops, monitors, and workstations, even with high-end specs, which is a small fraction of one year's salary for a developer. |
| |
| ▲ | createaccount99 2 hours ago | parent | next [-] | | Isn't it about equal treatment? You can't buy one person everything they want just because they have a high salary; otherwise the employee next door will get salty. | |
| ▲ | tgma 3 hours ago | parent | prev [-] | | FANG is not monolithic. Amazon is famously cheap. So is Apple, in my opinion, based on what I have heard (you get whatever refurbished hardware is available, not some standardized thing: sometimes 8GB of RAM, sometimes something nicer). Apple is also famously cheap on compensation. Back in the day they proudly said shit to the effect of "we deliberately don't pay you top of market because you have to love Apple", to which the only valid answer is "go fuck yourself." I don't think Google and Facebook are cheap for developers; I can speak firsthand about my past Google experience. You have to note that the company has something like 200k employees, not all of them engineers, so there need to be some controls. Hardware -> for the vast majority of work you can build with Blaze (think Bazel) on a build cluster with shared caching, so local CPU is not as important. Nevertheless, you can easily order other hardware should you need it. Sure, if you go beyond the standard issue, your cost center gets charged and your manager gets an email; I don't think any decent manager would block it, and if they do, change teams. Some powerful hardware that normally needs approval is blanket-whitelisted for orgs with a recognized need. Trips -> Google has an interesting model where trips have a soft cap, and if you come in under the cap you pocket half of the unused credit in your account, which you can spend later when you are over cap or want something slightly nicer next time. They also have clear, sane policies on mixing personal and corporate travel; I encourage everyone to learn about and deploy policies like that at their companies. The caps are usually not unreasonable, and if you do hit them, it is again an email to your management chain, not some big deal. I have never seen it blocked. If your request is reasonable and your manager shrugs about this stuff, that reflects them being cheap, not company policy. | | |
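For anyone who wants the open-source flavor of that setup, a minimal sketch of pointing Bazel at shared build infrastructure (the endpoints here are hypothetical; the flags are standard Bazel options):

    # .bazelrc: offload compilation and caching to shared infrastructure
    build --remote_cache=grpcs://cache.example.com:443      # shared artifact cache
    build --remote_executor=grpcs://rbe.example.com:443     # remote build execution
    build --remote_download_minimal                         # keep large outputs server-side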
| ▲ | fmajid 2 hours ago | parent | next [-] | | iOS development is still mostly local, which is why most of the iOS developers at my previous Big Tech employer got Mac Studios as compile machines in addition to their MacBook Pros. That requires director approval, but it is a formality. I read that Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale. | |
| ▲ | PartiallyTyped 3 hours ago | parent | prev | next [-] | | Not sure what you are talking about re amzn. I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to. | | | |
| ▲ | laidoffamazon 3 hours ago | parent | prev [-] | | How do you know someone worked at Google? Don’t worry, they’ll tell you |
|
|
|
| ▲ | userbinator 2 hours ago | parent | prev | next [-] |
I wish developers, and I'm saying this as one myself, were forced to work on a much slower machine, to weed out those who can't write efficient code. Software bloat has gotten worse by at least an order of magnitude in the past decade. |
| |
| ▲ | avidiax 2 hours ago | parent | next [-] | | Yeah, I recognize this all too well. There is an implicit assumption that all hardware is top-tier, all phones are flagships, all mobile internet is 5G, everyone has regular access to free WiFi, etc. Engineers and designers should compile on the latest hardware, but the execution environment should be capped at 10th-percentile compute and connectivity at least one rotating day per week. Employees should be nudged to rotate between Android and iOS on a roughly monthly basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms. | | |
| ▲ | jacobgorm 2 hours ago | parent [-] | | If they get the latest hardware to build on, the build itself will become slow too. |
| |
| ▲ | zh3 2 hours ago | parent | prev | next [-] | | Develop on a fast machine, test and optimise on a slow one? | |
| ▲ | djmips 2 hours ago | parent | prev | next [-] | | They shouldn't work on a slower machine; however, they should test on a slower machine. Always. | |
| ▲ | mft_ 2 hours ago | parent | prev [-] | | I came here to say exactly this. If developers are frustrated by compile times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling. And as a sibling comment notes, absolutely all testing should be on older hardware, without question; I'd add with deliberately lower-quality and lower-speed data connections, too. |
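On Linux, that kind of degraded connection is easy to simulate with the stock traffic-control tooling; a minimal sketch (the interface name eth0 is an assumption, substitute your own):

    # add 200ms latency and cap throughput at 1 Mbit on eth0
    sudo tc qdisc add dev eth0 root netem delay 200ms rate 1mbit
    # ...exercise the app under test...
    sudo tc qdisc del dev eth0 root    # restore normal networking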
|
|
| ▲ | diminish 2 hours ago | parent | prev | next [-] |
Multi-core operations like compiling C/C++ could benefit. Single-thread performance of the 16-core AMD Ryzen 9 9950X is only 1.8x that of my poor old laptop's 4-core i5: https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry... I'm waiting for >1024-core ARM desktops with >1TB of unified GPU memory, to be able to run some large LLMs. Ping me when someone builds this :) |
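The single-thread vs. multi-core split is easy to measure on any C/C++ tree you build regularly; a rough sketch (assumes GNU make and a Makefile-based project):

    make clean && time make                 # one job: tracks single-thread speed
    make clean && time make -j"$(nproc)"    # all cores: tracks multi-core speed, until linking/IO dominates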
| |
| ▲ | zh3 2 hours ago | parent [-] | | Yes, I just went from an i7-3770 (12 years old!) to a 9900X, as I tend to wait for a doubling of single-core performance before upgrading (I got through a lot of PCs in the 386/486 era!). It's actually only 50% faster according to cpubenchmark [0], but it is twice as fast in local usage (multithread is reported as about 3x faster). I also got a Mac Mini M4 recently, and that thing feels slow in comparison to both of these systems; likely more of a UI/software thing (I only use the M4 for Xcode) than raw CPU performance. [0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel... | | |
| ▲ | fmajid an hour ago | parent [-] | | The M4 is amazing hardware held back by a sub-par OS. One of the biggest bottlenecks when compiling software on a Mac is notarization, where every executable you compile triggers an HTTP call to Apple. In addition to being a privacy nightmare, this makes the configure step in autoconf-based packages excruciatingly slow. | | |
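The effect is visible from a terminal; a minimal sketch (the claim here is that the first run of a freshly built binary pays the online check, while the second run hits the cached verdict):

    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello"); return 0; }
    EOF
    cc hello.c -o hello
    time ./hello    # first run: includes the phone-home check
    time ./hello    # second run: cached, markedly faster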
|
|
|
| ▲ | jhanschoo 3 hours ago | parent | prev | next [-] |
An important caveat that the author neglects to mention, since they discuss laptop CPUs in the same breath: the limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient, then buy from brands that design proper thermal solutions. |
|
| ▲ | 2shortplanks 3 hours ago | parent | prev | next [-] |
This article skips a few important steps: how exactly does a faster CPU demonstrably improve developer performance? I would agree that faster compile times can significantly improve productivity; 30s is long enough for a developer to get distracted and go check email, look at social media, etc. Turning 30s into 3s can keep a developer in flow. The critical thing missing here is how increasing CPU speed will decrease compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck gets you to the next bottleneck, not necessarily all the performance gains you want. |
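That question is answerable before spending money; on Linux, a rough sketch of checking whether a build is actually CPU bound (perf availability varies by distro and kernel config):

    # CPU share of a full build: near (100% x cores) suggests CPU bound
    /usr/bin/time -v make -j"$(nproc)" 2>&1 | grep -E 'Elapsed|Percent of CPU'
    # high context switches or major faults point at lock/IO/memory pressure instead
    perf stat -e task-clock,context-switches,major-faults make -j"$(nproc)"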
| |
| ▲ | mordae 3 hours ago | parent | next [-] | | An IO-bound compiler would be weird. Memory, perhaps, but newer CPUs also tend to talk to RAM faster, so... I think just having the LSP give you answers 2x faster would be great for staying in flow. | | |
| ▲ | crinkly 3 hours ago | parent [-] | | Compilers are usually IO bound on Windows due to NTFS, its handling of small files in the MFT, and lock contention. If you put everything on a ReFS volume it goes a lot faster. This applies to git operations as well. |
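For the curious, a sketch of trying this in PowerShell; recent Windows 11 builds package the idea as a "Dev Drive" (ReFS plus a relaxed filter policy). The drive letter is an assumption, and the -DevDrive switch is assumed to be available only on newer Windows 11 builds:

    # format a spare partition as plain ReFS (any edition that ships ReFS)
    Format-Volume -DriveLetter E -FileSystem ReFS
    # or, on recent Windows 11, as a Dev Drive
    Format-Volume -DriveLetter E -DevDrive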
| |
| ▲ | yoz-y 3 hours ago | parent | prev | next [-] | | I don’t think we live in an era where a hardware update can bring you from 30s down to 3s, unless the employer really cheaped out on the initial buy. And since TFA compares a laptop to a desktop, I guess the title should be “you should buy two computers”. |
| ▲ | delusional 3 hours ago | parent | prev [-] | | I wish I were compiler bound. Nowadays, with everything in the cloud or whatever, I'm more likely to be waiting for Microsoft's MFA (forcing me to pick up my phone, the portal to distractions) or for some time-limited permission from PIM. The days when 30-second pauses for the compiler were the slowest part are long over. |
|
|
| ▲ | kaspar030 3 hours ago | parent | prev | next [-] |
> Top end CPUs are about 3x faster than the comparable top end models 3 years ago I wish that were true, but the current Ryzen 9950X is maybe 50% faster than the two-generations-older 5950X at compilation workloads. |
| |
| ▲ | tgma 3 hours ago | parent [-] | | Not even. Probably closer to 30%, and that's if you are doing actual many-core compile workloads on your critical path. |
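Claims like these are cheap to verify on your own hardware; the usual reproducible yardstick is a kernel defconfig build (a sketch; assumes a kernel source tree and the standard build dependencies):

    # classic compile benchmark: compare wall time across machines
    make defconfig
    time make -j"$(nproc)"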
|
|
| ▲ | poink 3 hours ago | parent | prev | next [-] |
I generally agree you should buy fast machines, but the difference between my 5950X (bought in mid-2021; I checked) and the latest 9950X is not particularly large on synthetic benchmarks, and the real-world difference for a software developer who is often IO bound in their workflow is going to be negligible. If you have a bad machine, get a good machine; but you’re not going to get a significant uplift going from a good machine that’s a few years old to the latest shiny. |
|
| ▲ | blueflow 2 hours ago | parent | prev | next [-] |
Dunno. I got a Ryzen 7 with 16 cores from 2021 and the modern web still doesn't render smoothly. Maybe it's not the hardware? |
| |
| ▲ | ofalkaed 2 hours ago | parent | next [-] | | Right now I am on my ancient cheap laptop with some 4-core Intel and hard drive noises; the only time it has issues with webpages is when I have too many tabs open for its 4GB of RAM. My current laptop, a 16-core Ryzen 7 from about 2021 (X13), has never had an issue, and I have yet to have too many tabs open on it. I think you might be having an OS/browser issue. As an aside, being on my old laptop with its hard drive, I can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away, and I am surprised to realize that I missed it. | |
| ▲ | mft_ 2 hours ago | parent | prev [-] | | As an alternative anecdote, I've got a Ryzen 7 5800X from 2021, and it's still blazingly fast for just about everything I throw at it... |
|
|
| ▲ | defanor 3 hours ago | parent | prev | next [-] |
This compares a new desktop CPU to older laptop ones. There are much more complete benchmarks on more specialized websites [0, 1]. > If you can justify an AI coding subscription, you can justify buying the best tool for the job. I personally can justify neither, but I don't see how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those subscriptions are more about large, closed models, and that GPUs would be more cost-effective as a replacement anyway. And if it is not a replacement, it is quite a stretch to assume that everyone who sufficiently benefits from a subscription would benefit at least as much from a faster CPU. Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that also means a new motherboard, a new CPU cooler, and likely new memory, which is basically a new computer. [0] https://www.cpubenchmark.net/ [1] https://www.tomshardware.com/pc-components/cpus |
|
| ▲ | jgb1984 2 hours ago | parent | prev | next [-] |
| Specifically: buy a good desktop computer.
I couldn't imagine working on a laptop several hours per day (even with an external screen + keyboard + mouse, you're still stuck with subpar performance). |
|
| ▲ | Apreche 2 hours ago | parent | prev | next [-] |
Too bad it’s so hard to get a completely local dev environment these days. It hardly matters what CPU I have, since all the intensive stuff happens on another computer. |
|
| ▲ | fxtentacle 3 hours ago | parent | prev | next [-] |
I wish I could. But most software nowadays is still limited by single-core speed, and that area hasn’t seen relevant growth in years. “Public whipping for companies who don’t parallelize their code base” would probably help more. ;) Anyway, how many seconds does MS Teams need to boot on a top-of-the-line CPU? |
| |
| ▲ | amarcheschi 2 hours ago | parent [-] | | I'm forced to use Teams and SharePoint at my university as a student, and I hate every single interaction with them. I wish a curse upon their creators: may their descendants never have a smooth user experience with any software they use. Beyond the ridiculously laggy interface, there are functional bugs as well, such as things just disappearing for a few days and then popping up again. |
|
|
| ▲ | DrNosferatu 3 hours ago | parent | prev | next [-] |
But single-core performance has been stagnant for ages! Considering Geekbench 6 scores, at least. So if it’s not a task that benefits massively from parallelization, buying used is still the best value for money. |
| |
| ▲ | bob1029 2 hours ago | parent | next [-] | | Single-core performance has not been stagnant: we're about double where we were in 2015 for a range of workloads. Branch prediction, out-of-order execution, SIMD, etc. make a huge difference. Clock speed is important, and we are hitting physical limits there, but we're also getting more done per clock cycle than ever before. | |
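A crude way to compare single-core throughput across machines from a shell (a sketch; pinning to one core isolates single-thread speed, though this is a throughput proxy rather than a direct IPC measurement):

    # one thread, one core: bigger numbers track single-core speed
    time taskset -c 0 openssl speed -seconds 3 sha256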
| ▲ | DrNosferatu 2 hours ago | parent | prev | next [-] | | I certainly will not die on this hill: my comment was motivated by recently comparing Geekbench 6 single-core scores for CPUs released 10 years apart. Care to provide some data? | |
| ▲ | TiredOfLife 2 hours ago | parent | prev | next [-] | | Single core performance has tripled in the last 10 years | |
| ▲ | PartiallyTyped 2 hours ago | parent | prev [-] | | I don’t think that’s true. AMD’s X3D chips are evidence to the contrary, with lots of benchmarks supporting this. |
|
|
| ▲ | gnfargbl 2 hours ago | parent | prev | next [-] |
OK, I'm convinced. Can someone tell me what to buy, specifically? It needs to run Ubuntu, support 2 x 4K monitors (3 would be nice), have at least 64GB RAM, and fit on my desk. I don't particularly care how good the GPU is. Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better? |
| |
| ▲ | fmajid an hour ago | parent | next [-] | | Beelink GTR9 Pro. It has dual 10G Ethernet interfaces. And get the 128GB RAM version; the RAM is not upgradeable. It isn't quite shipping yet, though. The absolute best would be a 9005-series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X, but despite technically being a mobile SoC, the 395 gets you 90% of the real-world performance of a 9950X in a much smaller and more power-efficient computer: https://www.phoronix.com/review/amd-ryzen-ai-max-arrow-lake/... | |
| ▲ | mft_ 2 hours ago | parent | prev | next [-] | | Huh, that's a really good deal at 1500 USD for the 64GB model, considering the processor it's running. (It's the same one that's in the Framework Desktop there's been lots of noise about recently; lots of recent reviews on YouTube.) Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060 iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty for a normal workload, and some decent gaming or the equivalent if needed. | | | |
| ▲ | zh3 2 hours ago | parent | prev [-] | | Multi-monitor 4K tends to need a fast GPU just for the bandwidth; otherwise dragging large windows around can feel quite slow (I found that out running 3 x 4K monitors on a low-end GPU). |
|
|
| ▲ | furkansahin 2 hours ago | parent | prev | next [-] |
FWIW, my recent HN submission had a really good discussion on this very same topic. https://news.ycombinator.com/item?id=44985323 |
|
| ▲ | JSR_FDED 3 hours ago | parent | prev | next [-] |
> Desktop CPUs are about 3x faster than laptop CPUs Maybe that’s an AMD (or even Intel) thing, but it doesn’t hold for Apple silicon. I wonder if it holds for ARM in general? |
| |
| ▲ | wqaatwt 3 hours ago | parent | next [-] | | Apple doesn’t really make desktop CPUs, though; just very good oversized mobile ones. For AMD/Intel, laptop, desktop, and server CPUs are usually based on different designs and don’t have that much overlap. | |
| ▲ | Sayrus 3 hours ago | parent | prev [-] | | The author is talking about multi-core performance rather than single-core. Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offer. Ampere offers chips that are an order of magnitude faster in multi-core, but they are not exactly "desktop" chips. Still, they are a good data point suggesting it can be true for ARM if the right product exists. | | |
| ▲ | gloxkiqcza 3 hours ago | parent | next [-] | | > Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offers. * Apple: 32 cores (M3 Ultra) * AMD: 96 cores (Threadripper PRO 9995WX) * Intel: 60 cores (Xeon w9-3595X) I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine. |
| ▲ | 3 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | ezoe 2 hours ago | parent | prev | next [-] |
But does your work constantly compile the Linux kernel, or encrypt AES-256 at more than 33GB/s? |
|
| ▲ | einpoklum 3 hours ago | parent | prev | next [-] |
This is quite a silly argument. * "People" generally don't spend their time compiling the Linux kernel, or anything of the sort. * For most daily uses, current-gen CPUs are only marginally faster than two generations back; not worth spending a large amount of money every 3 years or so. * Other aspects of your computer, like memory (capacity, mostly) and storage, can also be perf bottlenecks. * If, as a developer, you're repeatedly compiling a large codebase, what you may really want is a build farm rather than the latest-gen CPU in each developer's individual PC/laptop. |
| |
| ▲ | ralferoo 3 hours ago | parent [-] | | Just because it doesn't match your situation doesn't make it a silly argument. Even though I haven't compiled a Linux kernel in over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half-hour compiles, mostly when I'm forced to change base header files in a massive project. This is CPU bound for sure: I'm typically using just over half my 64GB RAM, and my development drives are on RAIDed NVMe. I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3-year-old) 5950X would be a drop-in replacement and double the core count, so I'd expect about double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, the upgrade would pay for itself within 1-2 weeks. The reason I don't upgrade is policy: my client specified this exact CPU so that my development environment matches their standard setup. The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working situations where each developer has a single machine and latency and bandwidth to shared resources are constrained. | |
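A compiler cache won't save the rebuilds where a base header genuinely invalidates everything, but it blunts the surrounding ones considerably without any hardware change; a minimal sketch (the package name assumes a Debian-style distro):

    sudo apt install ccache
    export CC="ccache gcc" CXX="ccache g++"   # route compiles through the cache
    make -j"$(nproc)"
    ccache -s    # show hit/miss statistics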
|
|
| ▲ | Rucadi 3 hours ago | parent | prev | next [-] |
I've been struggling with this topic a lot. I feel the slowness and productivity loss of a slow computer every day; 30 minutes for something that could take a tenth of that... it's horrible. |
|
| ▲ | DrNosferatu 3 hours ago | parent | prev | next [-] |
| You can actually do a lot with a non-congested build server. But I would never say no to a faster CPU! |
|
| ▲ | derelicta 3 hours ago | parent | prev | next [-] |
I wonder what triggered these massive gains in CPU performance? Any major innovation I might have missed? |
|
| ▲ | mgaunard 3 hours ago | parent | prev | next [-] |
I've seen more and more companies embrace cloud workstations. It is of course more expensive, but it allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation. Your actual physical computer is then just a dumb terminal. |
| |
| ▲ | 3 hours ago | parent | next [-] | | [deleted] | |
| ▲ | hulitu 3 hours ago | parent | prev | next [-] | | > I've seen more and more companies embrace cloud workstations. In which movie? "Microsoft fried movie"?
Cloud sucks big time. Not all engineers are web developers. | |
| ▲ | fmajid an hour ago | parent | next [-] | | With tools like Blaze/Bazel (Google) or Buck2 (Meta), compilation happens on a massively parallel server farm, and the hermetic nature of the builds ensures there are no undocumented dependencies to bite you. These are used for nearly everything at Big Tech, not just webdev. | |
| ▲ | mgaunard 3 hours ago | parent | prev [-] | | It's being rolled out at my current employer, for example, one of the biggest electronic trading companies in the world: mostly C++ software engineers, plus research in Python.
While many people still run their IDE on the dumb terminal (VS Code has pretty good SSH integration), people who use vim or the like work fully remotely over SSH. I've also seen it elsewhere in the same industry: AWS WorkSpaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or Kubernetes pods. All managed in a variety of ways, but the idea remains the same: you log into a remote machine to do all of your work, and you can't do anything without a reliable low-latency connection. |
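Day to day the workflow is mundane; a sketch of the two common entry points (hostnames are hypothetical, and the VS Code command assumes the Remote-SSH extension is installed):

    # open a remote folder in VS Code over SSH
    code --remote ssh-remote+devbox.example.com /home/me/project
    # or keep a latency-tolerant shell that survives sleep and roaming
    mosh dev@devbox.example.com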
| |
| ▲ | milesrout 3 hours ago | parent | prev [-] | | Great, now every operation has 300ms of latency. Kill me | | |
| ▲ | mgaunard 3 hours ago | parent | next [-] | | All of the big clouds have regions throughout the world, so you should be able to find one less than 100ms away fairly easily. And realistically, in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead. | |
| ▲ | TiredOfLife 2 hours ago | parent | prev [-] | | I wonder if everyone on HN has just woken from a 20 year coma. |
|
|
|
| ▲ | bsder 3 hours ago | parent | prev | next [-] |
Or, perhaps, make it easier to run your stuff on a big machine over there. It doesn't have to be the cloud, but having a couple of ginormous machines in a rack, where the fans can run at jet-engine levels, seems like a no-brainer. |
|
| ▲ | hulitu 3 hours ago | parent | prev | next [-] |
| > the top end CPU, AMD Ryzen 9 9950X This is an "office" CPU. Workstation CPUs are called Epyc. |
| |
| ▲ | juped 3 hours ago | parent [-] | | Yeah. I would say, do get a better CPU, but also research a bit deeper and really get a better CPU. Threadrippers are borderline workstation too, though, especially the Pro SKUs. | |
| ▲ | fmajid an hour ago | parent [-] | | Threadrippers are workstation processors and support ECC; Epycs are server parts; and the 9950X is HEDT (high-end desktop). |
|
|
|
| ▲ | miohtama 3 hours ago | parent | prev [-] |
An even better way to improve the quality of your computer sessions is "just use a Mac." Apple is so far ahead of the performance curve. |
| |
| ▲ | mgaunard 3 hours ago | parent | next [-] | | They have good performance, especially per watt, for a laptop. Certainly not ahead of the curve when considering server hardware. | | |
| ▲ | nine_k 3 hours ago | parent | next [-] | | Not just that; they have a decent GPU and a unified memory architecture that allows many ML models to run locally with good performance. Server hardware is not very portable. Reserving a c7i.large is about $0.14/hour, which would equal the cost of an MBP M3 64GB in about two years. Apple has made a killer development machine; I say this as a person who does not like Apple or macOS. | |
| ▲ | cornholio 3 hours ago | parent | prev [-] | | Apple still has quite atrocious performance per dollar. So it makes economic sense for a top-end developer or designer, but perhaps not for the entire workforce, let alone non-professional users, students, etc. |
| |
| ▲ | cycomanic 2 hours ago | parent | prev | next [-] | | Funny, we just talked about this in a thread 2 days ago. Comments like this lead me to dismiss anything coming from Apple fanboys; it's not as if objective benchmarks disproving these sorts of statements don't exist. | |
| ▲ | milesrout 3 hours ago | parent | prev [-] | | [dead] |
|