idle_zealot 2 hours ago
Isn't the solution to that to standardize on good-enough implementations of common data structures, algorithms, patterns, etc.? Then those shared implementations can be audited, iteratively improved, and critiqued. In most cases, actual application code should probably be a small core of business logic gluing together a robust set of collectively developed libraries. What the LLM-driven approach does is basically the same thing, but with a lossy compression of the software commons. Surely having a standard geospatial library is vastly preferable to each and every application generating its own implementation?
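To put it concretely, a rough sketch of the "thin glue over a shared library" idea, assuming Python and the Shapely geospatial library (the service-area scenario and names are just an illustration, not anyone's real code):

    from shapely.geometry import Point, Polygon

    # A small core of business logic gluing together a standard geospatial
    # library, instead of each application hand-rolling point-in-polygon math.
    service_area = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])

    def can_deliver(lon: float, lat: float) -> bool:
        return service_area.contains(Point(lon, lat))

    print(can_deliver(5, 5))   # True
    print(can_deliver(20, 5))  # False

The geometry code is the shared, audited part; only the last few lines are application-specific.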
raincole 2 hours ago | parent
I mean, of course libraries are great. But the process of creating a standardized, widely accepted library/framework usually involves another kind of accidental complexity: "designed by committee" complexity. Every user, and every future user, will have different ideas about how it should work and what options it should support. People need to communicate their opinions to the maintainers, and sometimes it can even get political.

In the end, the 80% of features and options you don't need will bloat the API and documentation, creating another layer of accidental complexity: every user has to rummage through the docs, and sometimes the source code, to find the 20% they do need. Figuring out how to do what you want with ImageMagick or FFmpeg always involved a lot of reading time before LLMs. (These libraries are so huge that I think most people use more like 2% of them, not 20%.)

Anyway, I don't claim AI will eliminate all the accidental complexity, and current LLMs certainly can't. But I do think there is an enormous amount of it in software development.
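For a sense of what that 2% looks like in practice, here's a rough sketch of a common FFmpeg task wrapped in Python (the file names are made up; the flags are standard x264/AAC transcoding options, but treat the exact values as an example, not a recommendation):

    import subprocess

    # The handful of options a typical user actually needs, out of FFmpeg's
    # hundreds of flags and filters.
    subprocess.run(
        [
            "ffmpeg",
            "-i", "input.mov",    # source file
            "-c:v", "libx264",    # encode video with x264
            "-crf", "23",         # constant-quality setting
            "-preset", "medium",  # speed vs. compression trade-off
            "-c:a", "aac",        # encode audio as AAC
            "output.mp4",
        ],
        check=True,
    )

Finding those few flags is exactly the doc-rummaging step that eats the reading time.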