| ▲ | alphazard 16 hours ago |
There's an undertone of self-soothing "AI will leverage me, not replace me", which I don't agree with, especially in the long run, at least in software.
In the end it will be the users sculpting formal systems like playdoh. In the medium run, "AI is not a co-worker" is exactly right.
The idea of a co-worker will go away.
Human collaboration on software is fundamentally inefficient.
We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people.
Software is going to become an individual sport, not a team sport, quickly.
The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI.
I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans. |
|
| ▲ | GuB-42 2 hours ago | parent | next [-] |
| > In the end it will be the users sculpting formal systems like playdoh. And unless the user is a competent programmer, at least in spirit, it will look like the creation of the 3-year-old next door, not like Wallace and Gromit. It may be fine, but the difference is that one is loved only by its parents, while the other gets millions of people to go to the theater. Play-Doh gave the power of sculpting to everyone, including small children, but if you don't want to make an ugly mess, you have to be a competent sculptor to begin with, and it involves some fundamentals that do not depend on the material. There is a reason why clay animators are skilled professionals. The quality of vibe-coded software is generally proportional to the programming skills of the vibe coder, as well as the effort put into it, like with all software. |
|
| ▲ | thewebguyd 25 minutes ago | parent | prev | next [-] |
| > In the end it will be the users sculpting formal systems like playdoh. I’m very skeptical of this unless the AI can manage to read and predict emotion and intent based on vague natural language. Otherwise you get the classic software problem of “What the user asked for directly isn’t actually what they want/need.” You will still need at least some experience with developing software to actually get anything useful. The average “user” isn’t going to have much success with large projects or with translating business logic into software use cases. |
|
| ▲ | Tade0 8 hours ago | parent | prev | next [-] |
| > The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI. Not this generation of AI, though. It's a text predictor, not a logic engine - it can't find actual flaws in your code; it's just really good at saying things that sound plausible. |
| |
| ▲ | xnorswap 7 hours ago | parent | next [-] | | > it can't find actual flaws in your code I can tell from this statement that you don't have experience with claude-code. It might just be a "text predictor", but in the real world it can take a messy log file and, from that, navigate to and fix issues in the source. It can appear to reason about root causes and about issues with sequencing and logic. That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and it produces real-world fixes. | |
| ▲ | Tade0 6 hours ago | parent | next [-] | | > I can tell from this statement that you don't have experience with claude-code. I happen to use it on a daily basis. 4.6-opus-high to be specific. The other day it surmised from (I assume) the contents of my clipboard that I wanted to do A, while I really wanted to do B; it's just that A was the more typical use case. Or actually: hardly anyone ever does B, as it's a weird thing to do, but I needed to do it anyway. > but it is indistinguishable from actual reasoning I can distinguish it pretty well when it makes mistakes that someone who had actually read and understood the code wouldn't make. Mind you: it's great at presenting someone else's knowledge, and it was trained on a vast library of it, but it clearly doesn't think for itself. | |
| ▲ | weird-eye-issue 5 hours ago | parent [-] | | What do you mean, the contents of your clipboard? | |
| ▲ | Tade0 4 hours ago | parent [-] | | I either accidentally pasted it somewhere and removed it, forgetting I'd done so, or it's reading the clipboard. The suggestion it gave me started with the contents of the clipboard and expanded to scenario A. | |
| ▲ | elar_verole 3 hours ago | parent [-] | | Sorry to sound rude - but you polluted the context, pointing to the fact that you would like A, and then found it annoying that it tried to do A? | |
|
|
| |
| ▲ | LoganDark 7 hours ago | parent | prev [-] | | What you're describing is not finding flaws in code. It's summarizing, which current models are known to be relatively good at. It is true that models can happen to produce a sound reasoning process. This is probabilistic, however (more so than with humans, anyway). There is no known sampling method that can guarantee a deterministic result without significantly quashing the output space (excluding most correct solutions). I believe we'll see a different landscape of benefits and drawbacks as diffusion language models begin to emerge, and as even more architectures are invented and put into practice. I have a tentative belief that diffusion language models may be easier to make deterministic without quashing nearly as much expressivity. | |
| ▲ | MrOrelliOReilly 6 hours ago | parent | next [-] | | This all sounds like the stochastic parrot fallacy. Total determinism is not the goal, and it is not a prerequisite for general intelligence. As you allude to above, humans are also not fully deterministic. I don't see what hard theoretical barriers you've presented toward AGI or future ASI. | |
| ▲ | LoganDark 6 hours ago | parent [-] | | I haven't heard the stochastic parrot fallacy (though I have heard the phrase before). I also don't believe there are hard theoretical barriers. All I believe is that what we have right now is not enough yet. (I also believe autoregressive models may not be capable of AGI.) |
| |
| ▲ | nielsole 6 hours ago | parent | prev | next [-] | | > more so than with humans Citation needed. | |
| ▲ | LoganDark 6 hours ago | parent [-] | | Much of the space of artificial intelligence is based on the goal of a general reasoning machine comparable to the reasoning of a human. There are many subfields that are less concerned with this, but in practice, artificial intelligence is perceived to have that goal. I am sure the output of current frontier models is convincing enough that, to some, it appears to outperform humans. There is still an ongoing outcry, from when GPT-4o was discontinued, by users who had built a romantic relationship with their access to it. However, I am not convinced that language models have actually reached the reliability of human reasoning. Even a dumb person can be consistent in their beliefs, and apply them consistently. Language models strictly cannot. You can prompt them to maintain consistency according to some instructions, but you never quite have any guarantee. You have far less of a guarantee than you would have with a human holding those beliefs, or even a human following those instructions. I don't have citations for the objective reliability of human reasoning. There are statistics about the unreliability of human reasoning, and also statistics about the unreliability of language models that far exceed them. But both are subjective in many cases, and success or failure rates are no real indication of reliability anyway. On top of that, every human is different, so it's difficult to make general statements. I only know from my work and friend circles that most of the people I keep around outperform language models in consistency and reliability. Of course that doesn't mean every human or even most humans meet that bar, but it does mean human-level reasoning includes them, which raises the bar that models would have to meet. (I can't quantify this, though.) There is a saying about fully autonomous self-driving vehicles that goes a little something like: they don't just have to outperform the worst drivers; they have to outperform the best drivers for it to be worth it. Many fully autonomous crashes happen because the autonomous system screwed up in a way that a human would not. An autonomous system typically lacks the creativity and ingenuity of a human driver. Though they can already be more reliable in some situations, we're still far from a world where autonomous driving can take liability for collisions, and that's because they're not nearly reliable or intelligent enough to entirely displace the need for human attention and intervention. I believe Waymo is the closest we've gotten, and even they have remote safety operators. | |
| ▲ | throwway120385 2 hours ago | parent | next [-] | | It's not enough for them to be "better" than a human. When they fail, they also have to fail in a way that is legible to a human. I've seen ML systems fail in scenarios that are obvious to a human and succeed in scenarios where a human would have found it impossible. The opposite needs to be the case for them to be generally accepted as equivalent, and in particular the failure modes need to be confined to cases where a human would have also failed. In the situations I've seen, customers have been upset about the performance of the ML model because the solution to the problem was patently obvious to them. They've probably been more upset about that than about situations where the ML model fails and the end customer also fails. |
| ▲ | gaigalas 5 hours ago | parent | prev [-] | | That's not a citation. | | |
| ▲ | LoganDark 4 hours ago | parent [-] | | It's roughly why I think this way, along with a statement that I don't have objective citations. So sure, it's not a citation. I even said as much, right in the middle there. |
|
|
| |
| ▲ | michaelscott 6 hours ago | parent | prev [-] | | Nothing you've said about reasoning here is exclusive to LLMs. Human reasoning is also never guaranteed to be deterministic, excluding most correct solutions. As OP says, they may not be reasoning under the hood, but if, as a tool, the effect is the same, does it matter? I'm not sure if I'm up to date on the latest diffusion work, but I'm genuinely curious how you see them potentially making LLMs more deterministic. These models usually work by sampling too, and it seems like the transformer architecture is better suited to longer-context problems than diffusion. | |
| ▲ | LoganDark 6 hours ago | parent [-] | | The way I imagine greedy sampling for autoregressive language models is guaranteeing a deterministic result at each position individually. The way I'd imagine it for diffusion language models is guaranteeing a deterministic result for the entire response as a whole. I see diffusion models potentially being more promising because the unit of determinism would be larger, preserving expressivity within that unit. Additionally, diffusion language models iterate multiple times over their full response, whereas autoregressive language models get one shot at each token, and before there's even any picture of the full response. We'll have to see what impact this has in practice; I'm only cautiously optimistic. | | |
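To make that concrete, here's a minimal sketch of the autoregressive side (`logits_fn` is a hypothetical stand-in for a model forward pass; greedy decoding is deterministic precisely because it discards every candidate except the argmax at each position, which is the "quashing" described above):

```python
import numpy as np

def greedy_decode(logits_fn, prompt_ids, max_new_tokens, eos_id):
    # Greedy (temperature-0) decoding: deterministic, one token at a time.
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = logits_fn(ids)           # next-token logits from the model
        next_id = int(np.argmax(logits))  # argmax drops all other candidates
        ids.append(next_id)
        if next_id == eos_id:
            break
    return ids
```

A diffusion model instead iterates denoising passes over the whole response at once, so any determinism guarantee could apply to the response as a unit rather than token by token.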
| ▲ | michaelscott 6 hours ago | parent [-] | | I guess it depends on the definition of deterministic, but I think you're right and there's strong reason to expect this will happen as they develop. I think the next 5 - 10 years will be interesting! | | |
|
|
|
| |
| ▲ | afro88 34 minutes ago | parent | prev | next [-] | | I would have agreed with you a year ago | |
| ▲ | weego 6 hours ago | parent | prev | next [-] | | And not this or any existing generation of people. We're bad at determining want vs. need, at being specific, at generalizing our goals into a conceptual framework of existing patterns, and at documenting and explaining things in a way that gets to a solid goal. The idea that the entire top-down processes of a business can be typed into an AI model and out comes a result is, again, a specific type of tech-person ideology that sees humanity as an unfortunate annoyance in the process of delivering a business. The rest of the world sees it the other way round. |
| ▲ | 3 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | lpapez 6 hours ago | parent | prev | next [-] | | If you only realized how ridiculous your statement is, you never would have stated it. | | |
| ▲ | jychang 6 hours ago | parent | next [-] | | It's also literally factually incorrect. Pretty much the entire field of mechanistic interpretability would obviously point out that models have an internal definition of what a bug is. Here's the most approachable paper that shows a real model (Claude 3 Sonnet) clearly having an internal representation of bugs in code: https://transformer-circuits.pub/2024/scaling-monosemanticit... Read the entire section around this quote: > Thus, we concluded that 1M/1013764 represents a broad variety of errors in code. (Also the section after "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions") This feature fires on actual bugs; it's not just the model pattern-matching on "what a bug hunter might say next". | |
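For anyone who wants the mechanics: the paper extracts these features with a sparse autoencoder (SAE) trained on the model's internal activations, and a feature "fires" when its encoder output is large on a given token. A toy sketch of that measurement step, with all dimensions and weights made up for illustration (the real dictionaries are learned and vastly larger):

```python
import numpy as np

def feature_activations(x, W_enc, b_enc):
    # Encode one residual-stream vector x (d_model,) into sparse
    # feature activations (n_features,): f = ReLU(W_enc @ x + b_enc).
    return np.maximum(0.0, W_enc @ x + b_enc)

# Toy dimensions only; in the paper W_enc/b_enc are trained, not random.
d_model, n_features = 64, 512
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(n_features, d_model)) * 0.1
b_enc = np.zeros(n_features)
x = rng.normal(size=d_model)  # activation at one token position
acts = feature_activations(x, W_enc, b_enc)
print("most active features:", np.argsort(-acts)[:5])
```

A feature like 1M/1013764 is one direction in a (much larger) learned dictionary, and "fires on bugs" means its activation is consistently high on tokens in buggy code.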
| ▲ | mrbungie 5 hours ago | parent [-] | | Was this "paper" eventually peer reviewed? PS: I know it is interesting and I don't doubt Anthropic, but for me it is so fascinating that they get such a pass in science. | |
| ▲ | ACCount37 4 hours ago | parent [-] | | Modern ML is old school mad science. The lifeblood of the field is proof-of-concept pre-prints built on top of other proof-of-concept pre-prints. |
|
| |
| ▲ | pousada 5 hours ago | parent | prev [-] | | Some people are still stuck in the “stochastic parrot” phase and see everything regarding LLMs through that lens. | |
| ▲ | windexh8er 4 hours ago | parent [-] | | Current LLMs do not think. Just because the repetitive actions a model is looping through are anthropomorphized does not mean it is truly thinking or reasoning. On the flip side, the idea that this is true has been a very successful indirect marketing campaign. | |
| ▲ | pousada 2 hours ago | parent [-] | | What does “truly thinking or reasoning” even mean for you? I don’t think we even have a coherent definition of human intelligence, let alone of non-human ones. |
|
|
| |
| ▲ | nazgul17 4 hours ago | parent | prev | next [-] | | While I agree, if you think that AI is just a text predictor, you are missing an important point. Intelligence can be born of simple objectives, like next-token prediction. Predicting the next token with the accuracy it takes to answer some of the questions these models can answer requires complex "mental" models. Dismissing it just because its algorithm is next-token prediction instead of "strengthen whatever circuit lights up" is missing the forest for the trees. |
| ▲ | laichzeit0 4 hours ago | parent | prev | next [-] | | Absolutely nuts, I feel like I'm living in a parallel universe. I could list several anecdotes here where Claude has solved issues for me in an autonomous way that (for someone with 17 years of software development, from embedded devices to enterprise software) would have taken me hours if not days. To the naysayers... good luck. No group of people's opinions matters at all. The market will decide. | |
| ▲ | xnorswap 3 hours ago | parent [-] | | I wonder if the parent comment's remark is a communication failure or pedantry gone wrong, because, like you, I see claude-code out there solving real problems and finding and fixing defects. A large share of bugs as raised are now fixed by claude automatically, from just the reports as written. Everything is human-reviewed, and sometimes it fixes things in ways I don't approve of, and it can be guided. It has an astonishing capability to find and fix defects. So when I read "It can't find flaws", it just doesn't fit my experience. I have to wonder if the disconnect is simply in the definition of what it means to find a flaw. But I don't like to argue over semantics. I don't actually care if it is finding flaws by the sheer weight of language probability rather than logical reasoning; it's still finding flaws and fixing them better than anything I've seen before. | |
| ▲ | gilbetron 2 hours ago | parent [-] | | I can't control random internet people, but within my personal and professional life, I see the effective pattern of comparing prompts/contexts/harnesses to figure out why some are more effective than others (in fact, tooling is being developed across the AI industry to do exactly this; claude even added the "insights" command). I feel that many people who don't find AI useful are doing things like asking "Are there any bugs in this software?" rather than developing the appropriate harness to enable the AI to function effectively. |
|
| |
| ▲ | jatora 7 hours ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | Tade0 6 hours ago | parent | next [-] | | I use these tools and that's my experience. | | |
| ▲ | koonsolo 4 hours ago | parent [-] | | I think it all depends on the use case and a luck factor. Sometimes I instruct copilot/claude to do a piece of development (stretching its capabilities), and it does amazingly well. Mind you, this is front-end development, so probably one of the more ideal use cases. Bugfixing also goes well a lot of the time. But other times it really struggles, and in the end I have to write the code by hand. This happens with more complex or less popular things (in my case React-Three-Fiber with skeleton animations). So experiences can vastly differ, and in my environment they are very dependent on the case. One thing is clear: this AI revolution (deep learning) won't replace developers any time soon. And when the next revolution will take place is anyone's guess. I learned neural networks at university around 2000, and it was old technology then. I view LLMs as "applied information", but not real reasoning. |
| |
| ▲ | Lionga 7 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | jychang 7 hours ago | parent [-] | | Ok, I'll bite. Let's assume a modern cutting-edge model, even with fairly standard GQA attention, and something obviously bigger than just monosemantic features per neuron. Based on any reasonable mechanistic-interpretability understanding of this model, what's preventing a circuit/feature with polysemanticity from representing a specific error in your code? --- Do you actually understand ML? Or are you just parroting things you don't quite understand? | |
| ▲ | Lionga 7 hours ago | parent | next [-] | | Polysemantic features in modern transformer architectures (e.g., with grouped-query attention) are not discretely addressable, semantically stable units but superposed, context-dependent activation patterns distributed across layers and attention heads, so there is no principled mechanism by which a single circuit or feature can reliably and specifically encode “a particular code error” in a way that is isolable, causally attributable, and consistently retrievable across inputs. --- Way to go in showing you want a discussion, good job. | | |
| ▲ | jychang 7 hours ago | parent [-] | | Nice LLM generated text. Now go read https://transformer-circuits.pub/2024/scaling-monosemanticit... or https://arxiv.org/abs/2506.19382 to see why that text is outdated. Or read any paper in the entire field of mechanistic interpretability (from the past year or two), really. Hint: the first paper is titled "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet" and you can ctrl-f for "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions" Who said I want a discussion? I want ignorant people to STOP talking, instead of talking as if they knew everything. |
| |
| ▲ | wamiks 6 hours ago | parent | prev [-] | | Ok, let's chew on that. "reasonable mechanistic interpretability understanding" and "semantic" are carrying a lot of weight. I think nobody understands what's happening in these models, irrespective of the narrative-building from the pieces. On the macro level, everyone can see simple logical flaws. | |
| ▲ | jychang 6 hours ago | parent [-] | | > I think nobody understands what's happening in these models Quick question, do you know what "Mechanistic Interpretability Researcher" means? Because that would be a fairly bold statement if you were aware of that. Try skimming through this first: https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-ex... > On the macro level, everyone can see simple logical flaws. Your argument applies to humans as well. Or are you saying humans can't possibly understand bugs in code because they make simple logical flaws as well? Does that mean the existence of the Monty Hall Problem shows that humans cannot actually do math or logical reasoning? | | |
| ▲ | dns_snek 2 hours ago | parent [-] | | > do you know what "Mechanistic Interpretability Researcher" means? Because that would be a fairly bold statement if you were aware of that. The mere existence of a research field is not proof of anything except "some people are interested in this". It certainly doesn't imply that anyone truly understands how LLMs process information, "think", or "reason". As with all research, people have questions, ideas, and theories, and some of them will be right, but most of them are bound to be wrong. |
|
|
|
|
| |
| ▲ | p-e-w 7 hours ago | parent | prev | next [-] | | You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that. | | |
| ▲ | Tade0 6 hours ago | parent [-] | | This goes both ways. You can't assume capabilities based on impressions. Especially with LLMs, which are purpose-built to give an impression of producing language. Also, the designers of these systems appear to agree: when it was shown that LLMs can't actually do calculations, tool calls were introduced. | |
| ▲ | AlecSchueler 6 hours ago | parent [-] | | It's true that they only give plausible-sounding answers. But let's say we ask a simple question like "What's the sum of two and two?" The only plausible-sounding answer to that will be "four." It doesn't need to have any fancy internal understanding or anything else beyond prediction to give what really is the same answer. The same goes for a lot of bugs in code: the best prediction is often the correct answer, namely pointing out the error. Whether it can "actually find" the bugs (whatever that means) isn't really as important as whether or not it's correct. | |
| ▲ | Tade0 5 hours ago | parent [-] | | It becomes important the moment your particular bug is, on the surface, typical, but has an atypical cause. In such cases you'll get nonsense, which you need to ignore. Again: they're very useful, as they give great answers based on someone else's knowledge and vague questions on the part of the user, but one has to remain vigilant and keep in mind that this is just text presented to you to look as believable as possible. There's no real promise of correctness or, more importantly, of critical thinking. | |
| ▲ | AlecSchueler 3 hours ago | parent [-] | | 100% They're not infallible but that's a different argument to "they can't find bugs in your code." |
|
|
|
| |
| ▲ | ACCount37 5 hours ago | parent | prev [-] | | Your brain is a slab of wet meat, not a logic engine. It can't find actual flaws in your code - it's just half-decent at pattern recognition. | | |
| ▲ | gaigalas 5 hours ago | parent | next [-] | | That is not exactly true. The brain does a lot of things that are not "pattern recognition". Simpler, more mundane (not exactly, still incredibly complicated) stuff like homeostasis or motor control, for example. Additionally, our ability to plan ahead and simulate future scenarios often relies on mechanisms such as memory consolidation, which are not part of the whole pattern recognition thing. The brain is a complex, layered, multi-purpose structure that does a lot of things. | |
| ▲ | mexicocitinluez 5 hours ago | parent | prev [-] | | It's pattern recognition all the way down.
|
|
|
| ▲ | paulryanrogers 16 hours ago | parent | prev | next [-] |
| This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift. And that there is little value in reusing software initiated by others. |
| |
| ▲ | alphazard 15 hours ago | parent | next [-] | | > This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift. I think there are people who want to use software to accomplish a goal, and there are people who are forced to use software. The people who only use software because the world around them has forced it on them, either through work or friends, are probably cognitively excluded from building software. The people who seek out software to solve a problem (I think this is most people) and compare alternatives to see which one matches their mental model will be able to skip all that and just build the software they have in mind using AI. > And that there is little value in reusing software initiated by others. I think engineers greatly over-estimate the value of code reuse. Trying to fit a round peg in a square hole produces more problems than it solves.
A sign of an elite engineer is knowing when to just copy something and change it as needed rather than call into it.
Or to re-implement something because the library that does it is a bad fit. The only time reuse really matters is in network protocols. Communication requires that both sides have a shared understanding. | | |
| ▲ | fauigerzigerk 8 hours ago | parent | next [-] | | >The only time reuse really matters is in network protocols. Communication requires that both sides have a shared understanding. A lot of things are like network protocols. Most things require communication. External APIs, existing data, familiar user interfaces, contracts, laws, etc. Language itself (both formal and natural) depends on a shared understanding of terms, at least to some degree. AI doesn't magically make the coordination and synchronisation overhead go away. Also, reusing well debugged and battle tested code will always be far more reliable than recreating everything every time anything gets changed. | | |
| ▲ | lioeters 7 hours ago | parent [-] | | Even within a single computer or program, there is need for communication protocols and shared understanding - such as types, data schema, function signatures. It's the interface between functions, programs, languages, machines. It could also be argued that "reuse" doesn't necessarily mean reusing the actual code as material, but reusing the concepts and algorithms. In that sense, most code is reuse of some previous code, written differently every time but expressing the same ideas, building on prior art and history. That might support GP's comment that "code reuse" is overemphasized, since the code itself is not what's valuable, what the user wants is the computation it represents. If you can speak to a computer and get the same result, then no code is even necessary as a medium. (But internally, code is being generated on the fly.) | | |
| ▲ | fauigerzigerk 7 hours ago | parent [-] | | I think we shouldn't get too hung up on specific artifacts. The point is that specifying and verifying requirements is a lot of work. It takes time and resources. This work has to be reused somehow. We haven't found a way to precisely specify and verify requirements using only natural language. It requires formal language. Formal language that can be used by machines is called code. So this is what leads me to the conclusion that we need some form of code reuse. But if we do have formal specifications, implementations can change and do not necessarily have to be reused. The question is why not. | | |
| ▲ | saezbaldo 3 hours ago | parent [-] | | This reframes the whole conversation. If implementations are cheap to regenerate, specifications become the durable artifact. Something like TLA+ model checking lets you verify that a protocol maintains safety invariants across all reachable states, regardless of who wrote the implementation. The hard part was always deciding what "correct" means in your specific domain. Most teams skip formal specs because "we don't have time." If agents make implementations nearly free, that excuse disappears. The bottleneck shifts from writing code to defining correctness. |
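A cheap approximation of this, well short of TLA+ but in the same spirit, is to encode "correct" as an executable property and keep it while implementations come and go. A sketch using the `hypothesis` property-testing library, where `merge_sorted` is a hypothetical stand-in for any regenerable implementation:

```python
from hypothesis import given, strategies as st

def merge_sorted(a, b):
    # Any implementation, hand-written or agent-generated, can sit here.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

# The durable artifact: a specification of correctness that outlives
# any particular implementation.
@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_spec(a, b):
    assert merge_sorted(sorted(a), sorted(b)) == sorted(a + b)
```

Swap the implementation out entirely and the spec still decides whether the new one counts as correct.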
|
|
| |
| ▲ | Sharlin 5 hours ago | parent | prev | next [-] | | > I think there are people who want to use software to accomplish a goal, and there are people who are forced to use software. Typically people feel they're "forced" to use software for entirely valid reasons, such as said software being absolutely terrible to use. I'm sure that most people like using software that they feel like actually helps rather than hinders them. | |
| ▲ | RealityVoid 7 hours ago | parent | prev | next [-] | | > The only time reuse really matters is in network protocols. And in long-term maintenance. If you use something, you have to maintain it. It's much better if someone else maintains it. |
| ▲ | skydhash 13 hours ago | parent | prev | next [-] | | > I think engineers greatly over-estimate the value of code reuse[...]The only time reuse really matters is in network protocols. The whole idea of an OS is code reuse (and resource management). No need to set up the hardware to run your application. Then we have a lot of foundational subsystems like graphics, sound, input,... Crafting such subsystems and the associated libraries is hard and requires a lot of design thinking. | | | |
| ▲ | jimbokun 13 hours ago | parent | prev [-] | | Which is why we should always just write and train our own LLMs. I mean it’s just software right? What value is there in reusing it if we can just write it ourselves? | | |
| ▲ | bandrami 7 hours ago | parent [-] | | Every internal piece of software you write is a potentially infinite money sink of training
|
| |
| ▲ | Thanemate 8 hours ago | parent | prev | next [-] | | >This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift. It's true that not everyone is equally efficient at first, but I'd be lying if I were to claim that someone needs a 4-year degree to communicate with LLMs. |
| ▲ | calvinmorrison 15 hours ago | parent | prev [-] | | No, but if the old '10x developer' is really 1 in 10 or 1 in 100, they might just do fine while the rest of us, average PHP enjoyers, fall by the wayside |
|
|
| ▲ | thwarted 13 hours ago | parent | prev | next [-] |
| > We pay huge communication/synchronization costs to eek out mild speed ups on projects by adding teams of people. Something Brooks wrote about 50 years ago, and the industry has never fully acknowledged. Throw more bodies at it, be they human bodies or bot agent bodies. |
| |
| ▲ | quietbritishjim 7 hours ago | parent | next [-] | | The point of the mythical man month is not that more people are necessarily worse for a project, it's just that adding them at the last minute doesn't work, because they take a while to get up to speed and existing project members are distracted while trying to help them. It's true that a larger team, formed well in advance, is also less efficient per person, but they still can achieve more overall than small teams (sometimes). | | |
| ▲ | jsumrall 6 hours ago | parent [-] | | Interesting point. And from the agent's point of view, it's always joining at the last minute, and it doesn't stick around longer than its context window. There's a lesson in there maybe… | |
| ▲ | saezbaldo 3 hours ago | parent [-] | | The context window is the onboarding period. Every invocation is a new hire reading the codebase for the first time. This is why architecture legibility keeps getting more important. Clean interfaces, small modules, good naming. Not because the human needs it (they already know the codebase) but because the agent has to reconstruct understanding from scratch every single time. Brooks was right that the conceptual structure is the hard part. We just never had to make it this explicit before. |
|
| |
| ▲ | falcor84 13 hours ago | parent | prev [-] | | But there is an order-of-magnitude difference between coordinating AI agents and coordinating humans - the AIs are so much faster and more consistent than humans that you can (as Steve Yegge [0] and Nicholas Carlini [1] showed) have them build a massive project from scratch in a matter of hours and days rather than months and years. The coordination cost is so much lower that it's just a different ball game. [0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d... [1] https://www.anthropic.com/engineering/building-c-compiler | |
| ▲ | jimbokun 13 hours ago | parent | next [-] | | Then why aren’t we seeing orders of magnitude more software being produced? | | |
| ▲ | johnfn 13 hours ago | parent | next [-] | | Didn't we have a post the other day saying that the number of "Show HN" posts is skyrocketing? https://news.ycombinator.com/item?id=47045804 | |
| ▲ | leoedin 6 hours ago | parent | prev | next [-] | | I think we are. There's definitely been an uptick in "Show HN"-type posts with quite impressively complex apps that one person developed in a few weeks. From my own experience, the problem is that AI slows down a lot as the scale grows. It's very quick to add extra views to a frontend, but it struggles a lot more with making wide-reaching refactors. So it's very easy to start a project, but after a while your progress slows significantly. But given that I've developed 2 pretty functional full-stack applications in the last 3 months, which I definitely wouldn't have done without AI assistance, I think it's a fair assumption that lots of other people are doing the same. So there is almost certainly a lot more software being produced than there was before. | |
| ▲ | datsci_est_2015 5 hours ago | parent [-] | | I think the proportion of new software that is novel has absolutely plummeted after the advent of AI. In my experience, generative AI will easily reproduce code for which there are a multitude of examples on GitHub, like TODO CRUD React Apps. And many business problems can be solved with TODO CRUD React Apps (just look at Excel’s success), but not every business problem can be solved by TODO CRUD React Apps. As an analogy: imagine if someone was bragging about using Gen AI to pump out romantasy smut novels that were spicy enough to get off to. Would you think they’re capable of producing the next Grapes of Wrath? |
| |
| ▲ | bandrami 8 hours ago | parent | prev | next [-] | | This question remains the 900-pound gorilla of this discussion | |
| ▲ | danielbln 13 hours ago | parent | prev | next [-] | | Claude Code released just over a year ago, agentic coding came into its own maybe in May or June of last year. Maybe give it a minute? | | |
| ▲ | ok_dad 13 hours ago | parent [-] | | It's been a minute and a half, and I don't see the evidence that you can task an agent swarm to produce useful software without your input or review. I've seen a few experiments that failed, and I've seen manic garbage, but not yet anything useful outside of the agent operator's imagination. | |
| ▲ | danielbln 13 hours ago | parent [-] | | Agent swarms are what, a couple of months old? What are you even talking about. Yes, people/humans still drive this stuff, but if you think there isn't useful software out there that can be handily implemented with current gen agents that need very little or no review, then I don't know what to tell you, apart from "you're mistaken". And I say that as someone who uses three tools heavily but has otherwise no stake in them. The copium in this space is real. Everyone is special and irreplaceable, until another step change pushes them out. | | |
| ▲ | dandellion 8 hours ago | parent | next [-] | | The next thing after agent swarms will be swarm colonies, and people will go "it's been a month since agentic swarm colonies, give it a month or two". People have been moving the goalposts like that for a couple of years now; it's starting to grow stale. This is like self-driving cars, which were going to be working in 2016 and replace 80% of drivers by 2017, all over again. People falling for hype instead of admitting that while it appears somewhat useful, nobody has any clue if it's 97% useful or just 3% useful, but so far it's looking like the latter. | |
| ▲ | ForHackernews 6 hours ago | parent [-] | | I generally agree, but counterpoint: Waymo is successfully running robocabs in many cities today. | | |
| |
| ▲ | ok_dad 12 hours ago | parent | prev [-] | | The whole point is that an agent swarm doesn’t need a month, supposedly. | | |
| ▲ | quietbritishjim 7 hours ago | parent [-] | | We're talking about whether the human users have caught up with usage of tech, not the speed of the tech itself. |
|
|
|
| |
| ▲ | ukuina 10 hours ago | parent | prev | next [-] | | Why do you assume there isn't? Enterprise (+API) usage of LLMs has continued to grow exponentially. | | |
| ▲ | sensanaty 6 hours ago | parent [-] | | I work for one of those enterprises with lots of people trying out AI (thankfully leadership is actually sane: no mandates that you have to use it, just giving devs access to experiment with the tools and see what happens). Lots of people trying it out in earnest, lots of newsletters about new techniques and all that kind of stuff. Lots of people too, so there are all sorts of opinions, from very excited to completely indifferent. Precisely 0 projects are making it out any faster or (IMO more importantly) better. We have a PR review bot clogging up our PRs with fucking useless comments and rewriting the PR descriptions in obnoxious ways, which basically everyone hates and which is getting shut off soon. From an actual productivity POV, people are just using it for a quick demo or proof of concept here and there before actually building the proper thing manually as before. And we have all the latest and greatest techniques, all the AGENTS.mds and tool calling and MCP integrations and unlimited access to every model we care to have access to, and all the other bullshit that OpenAI et al are trying to shove on people. It's not for lack of trying: plenty of people are trying to make any part of it work, even if it's just to handle the truly small stuff that would take 5 minutes of work but is just tedious and small enough to be annoying to pick up. It's just not happening; even with extremely simple tasks (which IMO would be better off with a dedicated, small deterministic script) we still need human oversight because it often shits the bed regardless, so the effort required to review things is equal to or often greater than just doing the damn ticket yourself. My personal favorite failure is when the transcript bots just... don't transcribe random chunks of the conversation, which can often lead to more confusion than if we just didn't have anything transcribed. We've turned off the transcript and summarization bots, because we've found 9/10 times they're actively detrimental to our planning and lead us down bad paths. | |
| ▲ | stpedgwdgfhgdd 5 hours ago | parent [-] | | I built a code reviewer based on the claude code sdk that integrates with gitlab, pretty straightforward. The hard work is in the integration, not the review itself; that is taken care of by the SDK. Devs, even conservative ones, like it. I’ve built a lot of tooling in my life, but I never had the experience of devs reaching out to me this fast because it was ‘broken’. (An expired token, or a bug with huge MRs)
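To illustrate the shape of that integration, here's a bare-bones sketch (using the plain GitLab REST API and the Anthropic messages API rather than the SDK; the instance URL, model name, and prompt are placeholders, and the hard parts above, token refresh and chunking huge MRs, are exactly what's elided):

```python
import os
import requests
import anthropic

GITLAB = "https://gitlab.example.com/api/v4"  # placeholder instance
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def review_mr(project_id: int, mr_iid: int) -> None:
    # Fetch the MR diff, ask the model for a review, post it as a note.
    changes = requests.get(
        f"{GITLAB}/projects/{project_id}/merge_requests/{mr_iid}/changes",
        headers=HEADERS, timeout=30).json()
    diff = "\n".join(c["diff"] for c in changes["changes"])

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": f"Review this merge request diff:\n\n{diff}"}],
    )

    requests.post(
        f"{GITLAB}/projects/{project_id}/merge_requests/{mr_iid}/notes",
        headers=HEADERS, data={"body": msg.content[0].text}, timeout=30)
```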
|
| |
| ▲ | autoexec 6 hours ago | parent | prev | next [-] | | It doesn't appear to have improved the quality of the software we have either. | |
| ▲ | itemize123 11 hours ago | parent | prev | next [-] | | We are. You can check App Store releases YoY; it's skyrocketing. | |
| ▲ | viking123 6 hours ago | parent | next [-] | | I have barely downloaded any apps in the last 5-10 years except some necessary ones like bank apps. Who even needs that garbage? Steam also has tons of games, but 80% make like no money at all and no one cares. Just piles of garbage. We already have limited hours per day, and those are not really increasing, so I wonder where the users are. |
| ▲ | refactor_master 7 hours ago | parent | prev [-] | | Here’s a talk about leaning into the garbage flow. And that was a decade ago. https://youtu.be/E8Lhqri8tZk I can’t imagine the number being economically meaningful now. |
| |
| ▲ | falcor84 5 hours ago | parent | prev [-] | | "The future is already here, it's just not evenly distributed" |
| |
| ▲ | thwarted 12 hours ago | parent | prev [-] | | > But there is an order-of-magnitude difference between coordinating AI agents and humans And yet, from https://news.ycombinator.com/item?id=47048599 > One of the tips, especially when using Claude Code, is to explicitly ask it to create "tasks", and also to use subagents. For example, I want to validate and re-structure all my documentation - I would ask it to create a task to research the state of my docs, then create a task per specific detail, then create a task to re-validate quality after it has finished. Which sounds pretty much the same as how work is broken down and handed out to humans. | |
| ▲ | falcor84 5 hours ago | parent [-] | | Yes, but you can do this at the top level, and then have AI agents do this themselves for all the low level tasks, which is then orders of magnitude faster than with human coordination. |
|
|
|
|
| ▲ | mossTechnician 6 hours ago | parent | prev | next [-] |
| Everybody in the world is now a programmer. This is the miracle of artificial intelligence. - Jensen Huang, February 2024 https://www.techradar.com/pro/nvidia-ceo-predicts-the-death-... |
| |
| ▲ | codr7 5 hours ago | parent | next [-] | | God help us! Far from everyone is cut out to be a programmer; the technical barrier was, if anything, a feature. There's a kind of mental discipline, an ability to think long thoughts and to deal with uncertainty, that's just not for everyone. What I see is mostly everyone and their gramps drooling at the idea of faking their way to fame and fortune. Which is never going to work, because everyone is regurgitating the same mindless crap. |
| ▲ | koonsolo 4 hours ago | parent | prev [-] | | The problem I mostly see with non-programmers is that they don't really grasp the concept of a consistent system. A lot of people want X, but they also want Y, while clearly X and Y cannot coexist in the same system. |
|
|
| ▲ | overgard 15 hours ago | parent | prev | next [-] |
| Well, without the self soothing I think what's left is pitchforks. |
|
| ▲ | its-kostya 2 hours ago | parent | prev | next [-] |
| How does a single human acquire said "good taste" for architecting? |
|
| ▲ | falcor84 13 hours ago | parent | prev | next [-] |
| > AI will leverage me I think I know what you mean, and I do recall once seeing "this experience will leverage me" as indicating that something will be good for a person, but my first thought when seeing "x will leverage y" is that x will step on top of y to get to their goal, which does seem apt here. |
|
| ▲ | benreesman 13 hours ago | parent | prev | next [-] |
I'm rounding the corner on a ground-up reimplementation of `nix` in what is now about 34 hours of wall-clock time. I have almost all of it on `wf-record`; I'll post a stream, but you can see the commit logs here: https://github.com/straylight-software/nix/tree/b7r6/correct... Everyone has the same ability to use OpenRouter. I have a new event loop based on `io_uring` with deterministic playbook modeled on the Trinity engine, a new WASM compiler, AVX-512 implementations of all the cryptography primitives that approach theoretical maximums, a new store that will hit theoretical maximums, the first formal specification of the `nix` daemon protocol outside of an APT, and I'm upgrading those specifications to `lean4` proof-bearing codegen: https://github.com/straylight-software/cornell. 34 hours. Why can I do this and no one else can get `ca-derivations` to work with `ssh-ng`? |
| |
|
| ▲ | zombot 6 hours ago | parent | prev | next [-] |
| > I would rather a single human (for now) architect with good taste and an army of agents than a team of humans. A human might have taste, but AI certainly doesn't. |
| |
| ▲ | dsego 4 hours ago | parent | next [-] | | It has average taste, based on the code it was trained on. For example, every time I attempted to polish the UX it wanted to add a toast system; I abhor toasts as a UX pattern. But it also provided elegant backend designs I hadn't even considered. |
| ▲ | elevatortrim 6 hours ago | parent | prev [-] | | I’d say AI has better taste than an average human but definitely not the taste you would see in competent people around you. |
|
|
| ▲ | teaearlgraycold 7 hours ago | parent | prev | next [-] |
| Well of course. In the long run AI will do almost all tasks that can be done from a computer. |
|
| ▲ | MattGaiser 8 hours ago | parent | prev [-] |
| > We pay huge communication/synchronization costs to eek out mild speed ups on projects by adding teams of people. I am surprised at how little this is discussed and how little urgency there is in fixing this if you still want teams to be as useful in the future. Your standard agile ceremonies were always kind of silly, but it can now take more time to groom work than to do it. I can plausibly spend more time scoring and scoping work (especially trivial work) than doing the work. |
| |
| ▲ | georgefrowny 7 hours ago | parent [-] | | It's always been like that. Waterfall development was worse, and that's why the Agilists invented Agile. YOLOing code into a huge pile at top speed is always faster than any other workflow at first. The thing is, a gigantic YOLO'd code pile (fake-it-till-you-make-it mode) used to be an asset as well as a liability. These days, the code pile is essentially free - anyone with some AI tools can shit out MSLoCs of code now. So it's only barely an asset, but the complexity of longer-term maintenance is superlinear in code volume, so the liability is larger. |
|