Wirth's Revenge (jmoiron.net)
105 points by signa11 11 hours ago | 31 comments
lateforwork | 41 minutes ago
Take a look at what was possible in the late 1980s with 8 MB of RAM: https://infinitemac.org/1989/NeXTStep%201.0

You can run NeXTStep in your browser by clicking the link above. A couple of weeks ago you could run FrameMaker as well. I was blown away by what FrameMaker of the late 1980s could do. Today's Microsoft Word can't hold a candle to it!

Edit: Here's how you start FrameMaker: in Finder, go to NextDeveloper > Demos > FrameMaker.app, then open the demo document and browse its pages. Prepare to be blown away. You could do all that in 1989 with 8 MB of RAM. In the 37 years since, the industry has gone backwards; Microsoft Word has stagnated for decades due to a lack of competition.
satvikpendem | 6 hours ago
While the author attributes much of it to the layers of software added to make things more accessible to people, in my experience most cases come down to developers being lazy when building applications. For example, Claude Code uses React to figure out what to render in the terminal, which itself causes latency, and its devs lament that they have "only" 16.7 ms per frame to achieve 60 FPS. On a terminal, which has been capable of far more than that since its inception. Primeagen shows an example [0] of how even the most change-heavy terminal applications run so fast that there's no need to diff anything: just display the new state!
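For scale, the budget the devs mention is just 1/60 s ≈ 16.7 ms per frame. A rough sketch (timings are machine-dependent, and an in-memory buffer stands in for the real terminal) of how little of that budget a diff-free full-screen rewrite actually needs:

```python
import io
import time

ROWS, COLS = 24, 80  # a classic terminal screen size

# Build one full screen of text, as a diff-free renderer would each frame.
frame = "\n".join("x" * COLS for _ in range(ROWS))

buf = io.StringIO()  # stand-in for the terminal's output stream
start = time.perf_counter()
buf.write("\x1b[2J\x1b[H")  # ANSI: clear screen, move cursor home
buf.write(frame)            # rewrite the entire screen, no diffing
elapsed_ms = (time.perf_counter() - start) * 1000

frame_budget_ms = 1000 / 60  # ~16.7 ms at 60 FPS
print(f"full-screen rewrite: {elapsed_ms:.4f} ms of a {frame_budget_ms:.1f} ms budget")
```

On any modern machine the write itself is a tiny fraction of a millisecond; the frame budget is spent elsewhere in the rendering stack.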
delichon | 14 minutes ago
> LLMs are still intensely computationally expensive. You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting ... But the computer you have in front of you can perform this calculation a billion times per second.

This is a flip side of the bitter lesson. If all attention goes into the AI algorithm, and none goes into the specific computation in front of you, the efficiency is abysmal and Wirth gets his revenge. At any scale larger than epsilon, whenever possible, LLMs are better leveraged to generate not the answer but the code that generates it. The bitter lesson remains valid, but at a layer of remove.
nickm12 | 5 hours ago
I'm not sure what the high-level point of the article is, but I agree with the observation that we (programmers) should generally prefer having AI agents write correct, efficient programs to do what we want, rather than having the agents do the work themselves. Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather an AI model sort 5000 numbers "in its head" or write a program to do the sort and execute it? I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are GenAI adopters.
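The bet above is easy to settle: the program side of it is a few lines of ordinary code that sorts 5000 numbers exactly, deterministically, and in microseconds. A minimal sketch:

```python
import random

# 5000 numbers, the size from the comment above (seeded for reproducibility)
random.seed(0)
nums = [random.randint(0, 1_000_000) for _ in range(5000)]

sorted_nums = sorted(nums)  # exact result, trivial cost

# A correctness guarantee a model sorting "in its head" cannot offer:
assert all(a <= b for a, b in zip(sorted_nums, sorted_nums[1:]))
print(f"sorted {len(sorted_nums)} numbers: min={sorted_nums[0]}, max={sorted_nums[-1]}")
```

An agent that emits and runs this program gets a verifiable answer; one that emits the 5000 sorted values token-by-token gets a plausible-looking one.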
cadamsdotcom | 4 hours ago
The actual constraint is how long people are willing to wait for results. If the results are expected to be really good, people will wait a seriously long time. That's why engineers move on to the next feature as soon as the thing is working: people simply don't care whether it could be faster, as long as it's not too slow. It doesn't matter what's technically possible. In fact, a computer that works too fast might be viewed as suspicious; taking a while to give a result is a kind of proof of work.
emsign | 4 hours ago
LLMs are a very cool and powerful tool once you've learned how to use them effectively. But most people probably haven't, and thus use them in ways that produce unsatisfying results while maximizing resource and token use. The cause is that the companies behind the big models are actually in the token-selling business, marketing their models as all-around problem solvers and life improvers.
xnorswap | 4 hours ago
An interesting article, and it was refreshing to read something with absolutely no hallmarks of LLM retouching or writing. It contains a helpful insight: there are multiple modes in which to approach LLMs, and that helps explain the massive disparity of outcomes people get from them.

Off topic: this article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?
firmretention | 2 hours ago
The Reiser footnote was on point. I couldn't resist clicking it to find out if it was the same Reiser I was thinking of.
jokoon | 4 hours ago
Hardware is cheaper than programmers. Maybe one day that will change.
gostsamo | 2 hours ago
I suspect that the next generation of agentically trained LLMs will have a mode where they first consider solving the problem by writing a program before doing things by hand. At least, it would be interesting if in a few months the LLM greets me with "Keep in mind that I run best on Ubuntu with uv already installed!"
dist-epoch | 3 hours ago
Wirth was complaining about the bloated text editors of his time, which used unfathomable amounts of memory: 4 MB. Today the same argument is rehashed: it's outrageous that VS Code uses 1 GB of RAM when Sublime Text works perfectly in a tiny 128 MB.

But notice that today's tiny/optimized/well-behaved figure, 128 MB, is 32 times larger than the outrageously decadent amount from Wirth's time. If you told Wirth "hold my beer, my text editor needs 128 MB," he just wouldn't be able to comprehend such a concept; it would seem like you had no idea what numbers mean in programming.

I can't wait for the day, 20 years from now, when programmers talk about the amazingly optimized editors of today: VS Code, which lived in a tiny 1 GB of RAM.
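The arithmetic behind the comment above is worth making explicit: each era's "outrageous" figure becomes the next era's "tiny" one. A quick sketch using the three memory figures quoted:

```python
MB = 1
GB = 1024 * MB

# The three memory figures quoted in the comment above
eras = [
    ("Wirth-era 'bloated' editor", 4 * MB),
    ("Sublime Text", 128 * MB),
    ("VS Code", 1 * GB),
]

# Ratio between each consecutive pair of eras
for (prev_name, prev_mem), (name, mem) in zip(eras, eras[1:]):
    print(f"{name} ({mem} MB) is {mem // prev_mem}x the {prev_name} ({prev_mem} MB)")
```

So Sublime Text's "frugal" footprint is 32x Wirth's outrage, and VS Code is another 8x on top of that, a 256x growth overall.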
jongjong | 2 hours ago
We haven't yet lost the war against complexity. We would know if we had, because all software would grind to a halt due to errors. We're getting close, though; some aspects of software feel deeply dysfunctional, like 2FA and CAPTCHAs. They're perfect examples of trying to improve something (security) by adding complexity... and failing spectacularly. It fails especially hard because the people who decided to force these additional hurdles on users are still convinced they're useful, because they have a severely distorted view of the average person's reality. Their trade-off analysis is completely out of whack.

The root problem with 2FA is that the average computer is full of vulnerabilities and cannot be trusted 100%, so you need a second device in case the computer was hacked... But it's not particularly useful, because someone who infected your computer with a virus can likely also infect your phone the next time you plug it in to charge. It's not quite two-factor... So much hassle for so little security benefit, especially for the average person, who is not a Fortune 500 CEO. Company CEOs have a severely distorted view of how often the average person is targeted by scammers and hackers. The last time someone tried to scam me was 10 years ago... The pain of having to pull out my phone multiple times per day to type in a code is NOT WORTH the tiny amount of security it adds in my case.

The case of security is particularly pernicious because complexity itself has an adverse impact on security, so trying to improve security by adding yet more complexity is extremely unwise... Eventually the user loses access to the software altogether. E.g. they forgot their password because they were forced to include weird characters, or they downloaded a fake password manager that turned out to be a virus, or they downloaded a legitimate one like LastPass, which was hacked because, obviously, it was a popular target for hackers... Even if everything goes perfectly and the user is so deeply conditioned that they don't mind using a password manager, their computer may crash one day and take all their passwords with it... Or the company may require a password change after 6 months, the password manager misses the update and doesn't know the new password, and the user isn't 'approved' to use the 'forgot my password' feature... Or the user forgets the password manager's master password, and when they try to recover it via email, they realize the password for their email account is inside the password manager... It's INFURIATING!!!

I could probably write the world's most annoying book just listing all the cascading layers of issues that modern software suffers from. The chapter on security alone would be longer than the entire Lord of the Rings series... and the average reader would probably rather throw themselves into the fiery pits of Mordor than finish it... Yet for some bizarre reason, they don't seem to mind EXPERIENCING these exact same cascading failures in their day-to-day lives.
casey2 | 2 hours ago
It's inevitable even if it's unnecessary. Capitalism necessitates 6% growth year on year, and since IT services are the growth sector, of course 25% of power will go to data centers by 2040. The EU should attempt a radical social restructuring that bets on no growth. Perhaps even ban all American tech. A modern Tokugawa.
iberator | 5 hours ago
Dull article with no point, numbers, or anything of value. Just some quasi-philosophical mumbling. I wasted like 10 minutes and I'm still not sure what the point of the article was.