mlsu 4 hours ago

Fred Brooks, from "No Silver Bullet" (1986)

> All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

AI, the silver bullet. We just never learn, do we?

idle_zealot 4 hours ago | parent | next [-]

There are mixed views here. Some are making the claim relevant to the Silver Bullet observation: that LLMs are cutting down time spent on non-essential work. But the view that's really driving hype is that the machine can do the essential work: design the system for you and implement it, explore the possibility space, make judgments about the tradeoffs, and make decisions.

Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine.

I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting to the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.

And eventually, with a few more breakthroughs in architecture maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility-generators being used as accountability sinks.

charcircuit 3 hours ago | parent [-]

>Now, can it actually do those things? Not in my estimation

Just today I asked my clawbot to generate a daily report for me, and it was able to build an entire scraping skill for itself to use for making the report. It designed it, making decisions along the way, including switching data sources when it realized the one it was trying was blocking it as a bot.

raincole 4 hours ago | parent | prev [-]

I think software was indeed 9/10 accidental activities before AI. It's probably still mostly accidental activities with current LLMs.

The essence: query all the users within a certain area and do it as fast as possible

The accident: spending an hour to survey spatial tree library, another hour debating whether to make our own, one more hour reading the algorithm, a few hours to code it, a few days to test and debug it

Many people seem to believe implementing the algorithm is "the essence" of software development, so they think the essence is the majority. I strongly disagree. Knowing and writing the specific algorithm is purely accidental in my opinion.
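To make the contrast concrete, here's a minimal sketch of the "accidental" side of that example: one possible hand-rolled spatial index (a uniform grid rather than a tree, for brevity). All names here are hypothetical; the "essence" is only the `query_radius` call at the bottom.

```python
import math
from collections import defaultdict

class GridIndex:
    """Uniform-grid spatial index for 2D points -- a stand-in for the
    spatial-tree library the comment describes surveying and debugging."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (cx, cy) -> [(user_id, x, y)]

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, user_id, x, y):
        self.cells[self._cell(x, y)].append((user_id, x, y))

    def query_radius(self, x, y, r):
        """Return ids of all users within distance r of (x, y)."""
        cx, cy = self._cell(x, y)
        reach = int(r // self.cell_size) + 1  # how many cells to scan outward
        hits = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for uid, ux, uy in self.cells[(cx + dx, cy + dy)]:
                    if math.hypot(ux - x, uy - y) <= r:
                        hits.append(uid)
        return hits

idx = GridIndex(cell_size=10.0)
idx.insert("alice", 1.0, 1.0)
idx.insert("bob", 50.0, 50.0)
idx.insert("carol", 3.0, 4.0)
print(sorted(idx.query_radius(0.0, 0.0, 6.0)))  # ['alice', 'carol']
```

Everything above the last five lines is the hour-by-hour accidental work; the query itself is one line.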

idle_zealot 4 hours ago | parent | next [-]

Isn't the solution to that standardizing on good-enough implementations of common data structures, algorithms, patterns, etc.? Then those shared implementations can be audited, iteratively improved, critiqued, etc. For most cases, actual application code should probably be a small core of business logic gluing together a robust set of collectively developed libraries.

What the LLM-driven approach does is basically the same thing, but with a lossy compression of the software commons. Surely having a standard geospatial library is vastly preferable to each and every application generating its own implementation?
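As a sketch of what that glue code looks like, here's the same radius query against a shared, audited implementation, assuming SciPy is available (its `cKDTree` is one such collectively developed spatial structure):

```python
# Assumes SciPy is installed; cKDTree is a shared, well-tested k-d tree.
from scipy.spatial import cKDTree

users = {"alice": (1.0, 1.0), "bob": (50.0, 50.0), "carol": (3.0, 4.0)}
ids = list(users)
tree = cKDTree(list(users.values()))

# All users within radius 6 of the origin: the entire "application
# code" is one query call plus an id lookup.
nearby = sorted(ids[i] for i in tree.query_ball_point((0.0, 0.0), r=6.0))
print(nearby)  # ['alice', 'carol']
```

The per-application part shrinks to a few lines of business logic, which is the point of the standard-library argument.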

raincole 3 hours ago | parent [-]

I mean, of course libraries are great. But the process of creating a standardized, widely accepted library/framework usually involves another kind of accidental complexity: the "designed by committee" complexity. Every user, and every future user, will have different ideas about how it should work and what options it should support. People need to communicate their opinions to the maintainers, and sometimes it can even get political.

In the end, the 80% of features and options will bloat the API and documentation, creating another layer of accidental activity: every user will need to rummage through the docs and sometimes the source code to find the 20% they need. Figuring out how to do what you want with ImageMagick or FFmpeg always involved a lot of reading time before LLMs. (These libraries are so huge that I think most people use more like 2% of them, not 20%.)

Anyway, I don't claim AI will eliminate all the accidental activities, and the current LLMs surely can't. But I do think there is an enormous amount of them in software development.

etamponi 4 hours ago | parent | prev [-]

If that's the essence, then of course 9/10 is accident. I don't think that's software engineering, though.

The essence: I need to make this software meet all the current requirements while making it easy to modify in the future.

The accident: ?

Said another way: everyone agrees that LLMs make it very easy to build throwaway code and prototypes. I could build those kinds of things when I was 15, when I was still on a 56k internet connection and only knew a bit of C and HTML. But that's not what software engineers (even junior software engineers) need to do.