| ▲ | hintymad 19 hours ago |
| In the latest interview with Claude Code's author: https://podcasts.apple.com/us/podcast/lennys-podcast-product..., Boris said that writing code is a solved problem. This brings me to a hypothetical question: what if engineers stop contributing to open source? Would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns? |
|
| ▲ | e40 18 hours ago | parent | next [-] |
| > Boris said that writing code is a solved problem That's just so dumb to say. I don't think we can trust anything that comes out of the mouths of the authors of these tools. They have an obvious conflict of interest, and conflicts of interest are such a huge problem in society today. |
| |
| ▲ | chrisjj 2 hours ago | parent | next [-] | | > That's just so dumb to say Depends. It's true of dumb code and dumb coders. Another reason why, yes, smart people should not trust them. | |
| ▲ | shimman 16 hours ago | parent | prev | next [-] | | There are bloggers who can't even acknowledge that they're only invited out to big tech events because they'll glaze them up to high heavens. Reminds me of that famous exchange from Noam Chomsky (noted friend of Jeffrey Epstein): "I’m not saying you’re self-censoring. I’m sure you believe everything you say. But what I’m saying is if you believed something different you wouldn’t be sitting where you’re sitting." | |
| ▲ | timacles 17 hours ago | parent | prev [-] | | It's all basically a sensationalist take to shock you and get attention. |
|
|
| ▲ | fhub 18 hours ago | parent | prev | next [-] |
| He is likely working on a very clean codebase where all the context is already reachable or indexed. There are probably strong feedback loops via tests. Some areas I contribute to have these characteristics, and the experience is very similar to his. But in areas where they don’t exist, writing code isn’t a solved problem until you can restructure the codebase to be more friendly to agents. Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation. |
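To make the browser-automation feedback loop concrete, here is a minimal sketch of the kind of check an agent could run after a CSS change. The dev server URL and the `.btn` selector are assumptions for illustration; it assumes Playwright is installed:

```python
# Minimal sketch of a browser-automation feedback loop for a CSS change.
# Assumptions: Playwright is installed (pip install playwright; playwright install),
# the app is served at http://localhost:3000, and the `.btn` selector is hypothetical.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000")  # hypothetical local dev server

    # Read the computed style, the same thing a human would check in devtools.
    color = page.evaluate(
        "getComputedStyle(document.querySelector('.btn')).color"
    )
    browser.close()

assert color == "rgb(34, 34, 34)", f"unexpected .btn color: {color}"
print("CSS check passed")
```

A check like this turns "does the CSS look right?" into a pass/fail signal the agent can iterate against, which is exactly what's missing in the scattered-vanilla-CSS case.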
| |
| ▲ | pseudosavant 17 hours ago | parent | next [-] | | It's funny that "restructure the codebase to be more friendly to agents" aligns really well with what we were "supposed" to have been doing already, but many teams slack on: quality tests that are easy to run, and great documentation. Context and verifiability. The easier your codebase is to hack on for a human, the easier it is for an LLM, generally. | | |
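As a toy illustration of what "quality tests that are easy to run" buys an agent, a minimal sketch; the `slugify` helper here is hypothetical:

```python
# Toy sketch: a fast, deterministic test an agent (or human) can rerun after every edit.
# The `slugify` helper is hypothetical, used only for illustration.
import re

def slugify(title: str) -> str:
    """Lowercase the title, keep alphanumeric runs, join them with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

def test_slugify() -> None:
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already--slugged  ") == "already-slugged"

if __name__ == "__main__":
    test_slugify()
    print("ok")
```

Milliseconds to run, no setup, unambiguous pass/fail: that's the context and verifiability an LLM needs just as much as a new hire does.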
| ▲ | cromka 10 hours ago | parent | next [-] | | Turns out the single-point-of-failure, irreplaceable type of employees who intentionally obfuscated the project's code for the last 10+ years were ahead of their time. | |
| ▲ | jimbokun 13 hours ago | parent | prev | next [-] | | It’s really interesting. It suggests that intelligence is intelligence, and the electronic kind also needs the same kinds of organization that humans do to quickly make sense of code and modify it without breaking something else. | |
| ▲ | giancarlostoro 16 hours ago | parent | prev [-] | | I had this epiphany a few weeks ago, and I'm glad to see others agreeing. Eventually most models will handle context windows large enough that this will sadly not matter as much, but it would be nice for the industry to still do everything it can to produce better-looking code that humans can read and appreciate. |
| |
| ▲ | michaelbuckbee 13 hours ago | parent | prev | next [-] | | Having picked up a few long neglected projects in the past year, AI has been tremendous in rapidly shipping quality of dev life stuff like much improved test suites, documenting the existing behavior, handling upgrades to newer framework versions, etc. I've really found it's a flywheel once you get going. | |
| ▲ | swordsith 18 hours ago | parent | prev | next [-] | | Truth. I've had a much easier time grappling with codebases I keep clean and compartmentalized with AI; over-stuffing the context is one of the main killers of its quality. | |
| ▲ | jimbokun 13 hours ago | parent | prev | next [-] | | All those people who thought clean, well-architected code wasn’t important…now, with LLMs modifying code, it’s even more important. | |
| ▲ | chrisjj 2 hours ago | parent | prev [-] | | > He is likely working on ... a laundry list phone app. |
|
|
| ▲ | layer8 17 hours ago | parent | prev | next [-] |
| I think you mean software engineering, not computer science. And no, I don’t think there is reason for software engineering (and certainly not for computer science) to be plateauing. Unless we let it plateau, which I don’t think we will. Also, writing code isn’t a solved problem, whatever that’s supposed to mean. Furthermore, since the patterns we use often aren’t orthogonal, it’s certainly not a linear combination. |
| |
| ▲ | hintymad 16 hours ago | parent [-] | | I assume that new business scenarios will drive new workflows, which will require new software engineering work. In the meantime, I assume that computer science will drive paradigm shifts, which will drive truly different software engineering practices. If we don't have advances in algorithms, systems, etc., I'd assume that people will slowly abstract away all the hard parts, enabling AI to do most of our jobs. |
|
|
| ▲ | biztos 18 hours ago | parent | prev | next [-] |
| Or does the field plateau because engineers treat "writing code" as a "solved problem"? We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets. |
| |
| ▲ | hintymad 18 hours ago | parent [-] | | > while I don't think we especially need 50,000 people writing poems at Google, we do still need poets I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If it's a situation like the world needing poets, then the field is only for a lucky few. Most people would be out of a job. |
|
|
| ▲ | stephencoyner 16 hours ago | parent | prev | next [-] |
| I saw Boris give a live demo today. He had a swarm of Claude agents one-shot the most upvoted open issue on Excalidraw while he explained Claude Code for about 20 minutes. No lines of code were written by him at all. The agent used Claude for Chrome to test the fix in front of us all, and it worked. I think he may be right, or close to it. |
|
| ▲ | GeoAtreides 16 hours ago | parent | prev | next [-] |
| > writing code is a solved problem That sure is news to the models tripping over my thousands-of-LOC legacy jQuery app... |
| |
| ▲ | nake89 5 hours ago | parent [-] | | Could the LLM rewrite it from scratch? | | |
| ▲ | GeoAtreides 3 hours ago | parent [-] | | Boss, the models can't even get all the API endpoints from a single file and you want to rewrite everything?! Not to mention that maybe the stakeholders don't want a rewrite; they just want to modernize the app and add some new features. |
|
|
|
| ▲ | gip 16 hours ago | parent | prev | next [-] |
| My prediction: soon (e.g. within a few years) the agents will be the ones doing the exploration and building better ways to write code, building frameworks, ... replacing open source. That being said, software engineers will still be in the loop. But there will be far fewer of them. Just to add: this is only the prediction of someone who has a decent amount of information, not an expert or insider. |
| |
| ▲ | overgard 15 hours ago | parent [-] | | I really doubt it. So far these things are good at remixing old ideas, not coming up with new ones. | | |
| ▲ | danielbln 13 hours ago | parent [-] | | Generally, we humans come up with new things by remixing old ideas. Where else would they come from? We synthesize priors into something novel. If you break the problem space apart enough, I don't see why an LLM can't do the same. | | |
| ▲ | tovej 6 hours ago | parent [-] | | LLMs cannot synthesize text; they can only concatenate or mix it statistically. Synthesis requires logical reasoning, and that's not how LLMs work. | | |
| ▲ | danielbln 6 hours ago | parent [-] | | Yes it is: LLMs perform logical multi-step reasoning all the time; see math proofs, coding, etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you make them out to. |
|
|
|
|
|
| ▲ | giancarlostoro 16 hours ago | parent | prev | next [-] |
| There are so many timeless books on how to write software: design patterns, lessons learned from production issues. I don't think AI will stop being used for open source. In fact, with the increasing number of projects adjusting their contributor policies to account for AI, I would argue that we'll always see both people who love to hand-craft their own code and people who use AI to build their own open source tooling and solutions. We will also see an explosion in the need for specs. If you give a model a well-defined spec, it will follow it. I get better results the more specific I get about how I want things built and which libraries I want used. |
|
| ▲ | ochronus 9 hours ago | parent | prev | next [-] |
| The creator of the hammer says driving nails into wood planks is a solved problem. Carpenters are now obsolete. |
|
| ▲ | cheema33 17 hours ago | parent | prev | next [-] |
| > has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns? Computer science is different from writing business software to solve business problems. I think Boris was talking about the second, not the first. And I personally think he is mostly correct, at least for my organization. It is very rare for us to write any code by hand anymore. Once you have a solid testing harness and a peer review system run by multiple different LLMs, you are in pretty good shape for agentic software development. Not everybody's got these bits figured out. They stumble around and then blame the tools for their failures. |
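As a rough sketch of what a multi-LLM peer review pass can look like, under some assumptions: the official anthropic and openai Python SDKs with API keys in the environment, placeholder model names, and a hypothetical `get_diff()` helper that reads the staged git diff. Not anyone's actual harness, just the shape of the idea:

```python
# Rough sketch of a cross-vendor LLM review pass over a staged git diff.
# Assumptions: `pip install anthropic openai`, ANTHROPIC_API_KEY and OPENAI_API_KEY
# set in the environment; model names are placeholders; get_diff() is a hypothetical helper.
import subprocess
import anthropic
from openai import OpenAI

PROMPT = "Review this diff for bugs, missing tests, and risky changes:\n\n"

def get_diff() -> str:
    # Hypothetical helper: review whatever is currently staged in git.
    result = subprocess.run(["git", "diff", "--staged"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def claude_review(diff: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT + diff}],
    )
    return msg.content[0].text

def openai_review(diff: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT + diff}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    diff = get_diff()
    for name, review in [("claude", claude_review(diff)),
                         ("openai", openai_review(diff))]:
        print(f"--- {name} review ---\n{review}\n")
```

The point of using different vendors is that two independent reviewers disagree often enough that their overlap is a decent signal of a real problem.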
| |
| ▲ | paulryanrogers 15 hours ago | parent [-] | | > Not everybody's got these bits figured out. They stumble around and then blame the tools for their failures. Possible. Yet that's a pretty broad brush. It could also be that some businesses are more heavily represented in the training set. Or some combo of all of the above. |
|
|
| ▲ | stuaxo 16 hours ago | parent | prev | next [-] |
| > Writing code is a solved problem I disagree. Yes, there are common parts to everything we do; at the same time, I've been doing this for 25 years and most of the projects have had some new part to them. |
| |
| ▲ | danielbln 13 hours ago | parent [-] | | Novel problems are usually a composite of simpler and/or older problems that have been solved before. Decomposition means you can rip most novel problems apart and solve the chunks. LLMs do just fine with that. |
|
|
| ▲ | jacquesm 13 hours ago | parent | prev | next [-] |
| Prediction: open source will stop. Sure, people did it for the fun and the credits, but the fun quickly goes out of it when the credits go to the IP laundromat and the fun is had by the people ripping off your code. Why would anybody contribute their works for free in an environment like that? |
| |
| ▲ | pu_pe 9 hours ago | parent | next [-] | | I believe the exact opposite. We will see open source contributions skyrocket now. There are a ton of people who want to help and share their work, but technical ability was a major filter. If the barrier to entry is now lowered, expect to see many more people sharing stuff. | | |
| ▲ | jacquesm 7 hours ago | parent [-] | | Yes, more people will be sharing stuff. And none of it will have long-term staying power. Or do you honestly believe that a project like GCC or Linux would have been created and maintained for as long as they have been through the use of AI tools in the hands of noobs? Technical ability is an absolute requirement for the production of quality work. If the signal drowns in the noise, then we are much worse off than where we started. | |
| ▲ | signatoremo 3 hours ago | parent | next [-] | | I’m sure you know that the majority of GCC and Linux contributors aren’t volunteers, but employees who are paid to contribute. I’m struggling to name a popular project for which that isn’t the case. Can you? If AI is powerful enough to flood open source projects with low-quality code, it will be powerful enough to be used as a gatekeeper. Major players who benefit from OSS, say Google, will make sure of that. We don’t know how it will play out. It’s shortsighted to dismiss it altogether. | |
| ▲ | pu_pe 6 hours ago | parent | prev [-] | | Ok but now you have raised the bar from "open source" to "quality work" :) Even then, I am not sure that changes the argument. If Linus Torvalds had access to LLMs back then, why would that discourage him from building Linux? And we now have the capability of building something like Linux with fewer man-hours, which again speaks in favor of more open source projects. |
|
| |
| ▲ | orangecoffee 13 hours ago | parent | prev [-] | | Many did it for liberty - a philosophical position on freedom in software. Those people are now supercharged by AI. |
|
|
| ▲ | yourapostasy 17 hours ago | parent | prev | next [-] |
| Even as the field evolves, the phone-home telemetry of closed models creates a centralized intelligence monopoly. If open source atrophies, we lose the public square of architectural and design reasoning, the decision graph that is often just as important as the code. The labs won't just pick up new patterns; they will define them, effectively becoming the high priests of a new closed-loop ecosystem. However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling, because it's no longer being fed novel human problem-solving at scale. Humans will likely continue to drive consensus building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world. |
| |
| ▲ | hintymad 17 hours ago | parent [-] | | > It might make the technology itself hit a ceiling, because it's no longer being fed novel human problem-solving at scale. My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns of the future. For instance, an implementation of low-level networking code can be a combination of the patterns in zeromq. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just ask them to command the AI instead? |
|
|
| ▲ | therealpygon 18 hours ago | parent | prev | next [-] |
| I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. I also have to agree, though: I find myself laughing more and more lately about just how many resources we waste creating exactly the same things over and over in software. I don’t mean generally, like languages, I mean specifically. How many trillions of times has a form with username and password fields been designed, developed, had meetings held over it, been tested, debugged, transmitted, processed, only to ultimately be rewritten months later? I wonder what we might build instead if all that time could be saved. |
| |
| ▲ | hintymad 18 hours ago | parent [-] | | > I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. Yeah, hence my question can only be hypothetical. > I wonder what we might build instead if all that time could be saved If we subscribe to economics' broken-window theory, then the investment into such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing about a new chapter of the tech revolution. Or so I hope. | |
| ▲ | Gormo 15 hours ago | parent [-] | | > If we subscribe to economics' broken-window theory, then the investment into such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing about a new chapter of the tech revolution. Or so I hope. I'm not sure I agree with the application of the broken-window theory here. That's a metaphor intended to counter arguments in favor of make-work projects for economic stimulus: the idea is that breaking a window always has a net negative effect on the economy. Even though it creates demand for a replacement window, the resources necessary to replace a window that already existed are just being allocated to restore the status quo ante, and the opportunity cost of that is everything else the same resources might have been used for if the window hadn't been broken. I think that's quite distinct from manufacturing new windows for new installations, which is net-positive production, and where newer use cases for windows create opportunities for producers to iterate on new window designs and incrementally refine and improve the product, which wouldn't happen if you were simply producing replacements for pre-existing windows. Even in this example, lots of people writing lots of different variations of login pages has produced incremental improvements. In fact, as an industry, we haven't been writing the same exact login page over and over again; we have been gradually refining login pages in ways that have considerably evolved their appearance, performance, security, and UI intuitiveness over time. Relying on AI to design, not just implement, login pages will likely be the thing that halts this process and perpetuates the status quo indefinitely. |
|
|
|
| ▲ | sensanaty 6 hours ago | parent | prev | next [-] |
| > Boris said that writing code is a solved problem. No way, the person selling a tool that writes code says said tool can now write code? Color me shocked at this revelation. Let's check in on Claude Code's open issues for a sec and see how "solved" they all are. Or, my favorite: their shitty React TUI that pegs modern CPUs and consumes all the memory on the system is apparently harder to get right than video games! Truly the masters of software engineering, these Anthropic folks. |
|
| ▲ | groby_b 18 hours ago | parent | prev | next [-] |
| That is the same team that shipped an app that used React for a TUI, that used gigabytes of memory for a scrollback buffer, and that had text scrolling so slow you could get a coffee in between. And that then had the gall to claim writing a TUI is as hard as a video game. (It clearly must be harder, given that most dev consoles or text interfaces in video games consistently use less than ~5% CPU, which at that point was completely out of reach for CC.) He works for a company that crowed about an AI-generated C compiler that was so overfitted it couldn't compile "hello world". So if he tells me that "software engineering is solved", I take that with a rather large grain of salt. It is far from solved. I say that as somebody who's extremely positive on AI usefulness. I see massive acceleration in the things I do with AI. But I also know where I need to override/steer/step in. The constant hypefest is just vomit inducing. |
| |
| ▲ | mccoyb 17 hours ago | parent [-] | | I wanted to write the same comment. These people are fucking hucksters. Don’t listen to their words; look at their software … it says all you need to know. |
|
|
| ▲ | overgard 15 hours ago | parent | prev [-] |
| Even if you like them, I don't think there's any reason to believe what people from these companies say. They have every reason to exaggerate or outright lie, and the hype cycle moves so quickly that there are zero consequences for doing so. |