| ▲ | watzon 3 hours ago |
| I think this article makes a valid point. However, if AI coding is considered gambling, then being a project manager overseeing multiple developers could also be seen as a form of gambling to a certain degree. In reality, there isn't much difference between the two. AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results. |
|
| ▲ | yoyohello13 3 hours ago | parent | next [-] |
| I think the addiction angle seems to make AI coding more similar to gambling. Some people seem to be disturbingly addicted to agentic coding. Much more so than traditional programming. To the point of doing destructive things like waking up in the middle of the night to check agents. Or giving an agent access to their bank account. |
| |
| ▲ | shepherdjerred 2 hours ago | parent | next [-] | | I mean, it’s just so fun. Claude wrote a native macOS app for me today. I don’t think I’d describe my behavior as destructive though | |
| ▲ | deadbabe 2 hours ago | parent | prev [-] | | I know at least one case where the obsession with agents ruined a marriage. |
|
|
| ▲ | m00x 3 hours ago | parent | prev | next [-] |
| AI coding is gambling on slot machines, managing developers is betting on race horses. |
| |
| ▲ | SkyPuncher 3 hours ago | parent | next [-] | | Only if your AI coding approach is the slot machine approach. I've ended up with a process that produces very, very high quality outputs, often needing little to no correction from me. I think of it like an Age of Empires map. If you go into battle surrounded by undiscovered parts of the map, you're in for a rude surprise. Winning a battle means having clarity on both the battle itself and the risks next to the battle. | | |
| ▲ | murkt 3 hours ago | parent | next [-] | | Good analogy! Would be interesting to read more details about how you’re getting very high quality outputs | |
| ▲ | Obscurity4340 2 hours ago | parent | prev | next [-] | | Would you mind sharing some of your findings? | |
| ▲ | input_sh 2 hours ago | parent | prev [-] | | Until it produces predictable output, it's gambling. But it can't produce predictable output because it's a non-deterministic tool. What you're describing is increasing your odds while gambling, not that it's not gambling. Card counting also increases your odds while gambling, but it doesn't make it not gambling. | | |
| ▲ | IanCal 2 hours ago | parent | next [-] | | This is a pretty wild comparison in my opinion, it counts almost everything as gambling which means it has almost no use as a definition. The most obvious issue is it’d class working with humans as gambling. Fine if you want to make that as your definition but it seems unhelpful to the discussion. | | |
| ▲ | input_sh a minute ago | parent | next [-] | | You seem to have a fundamental issue understanding what the term deterministic even means. If you give the same trivial task to the same human five times in a row, let's say wash the dishes, your dishes are either gonna be equally clean or equally not clean enough every time. If you run the same script five times in a row with the same input variables, you're gonna get the same, predictable output that you can understand, look at the code, and fix. If you ask the same question to the same LLM model five times in a row, are you getting the same result every time? Is it kind of random? Can the quality be vastly different if you reject all of its changes, start a new conversation, and tell it to do the same thing using the exact same prompt? Congrats, that's gambling. It's no different than spinning a slot machine: you pass it an input and hope for the best as the output. Unlike the slot machine, you can influence those odds by asking better, but that does not mean it's not gambling. | |
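The same-input/same-output contrast being drawn here can be sketched in a few lines of Python. The function names and canned "completions" are made up for illustration — real LLM sampling is far more involved — but the distinction is the point: a deterministic function collapses repeated identical calls to one result, while a sampler need not.

```python
import random

def wash_dishes(dishes):
    # deterministic: the same input yields the same output on every run
    return sorted(d.lower() for d in dishes)

def llm_answer(prompt):
    # stand-in for an LLM: the output is sampled from a distribution,
    # so the same prompt can yield a different answer on each call
    return random.choices(
        ["patch applied", "file rewritten", "tests deleted"],
        weights=[0.7, 0.2, 0.1],
    )[0]

# five identical runs of the deterministic function collapse to one result
dish_runs = {tuple(wash_dishes(["Cup", "Plate"])) for _ in range(5)}

# fifty identical prompts to the sampler generally do not
llm_runs = {llm_answer("fix the bug") for _ in range(50)}
```

Prompting "better" is, in this framing, just reshaping the `weights` — the draw itself stays random.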
| ▲ | RhythmFox an hour ago | parent | prev [-] | | How does it 'count almost everything as gambling'? They just said 'non-deterministic' output is gambling-like, that is not 'almost everything'. Most computation that you use on a day-to-day basis (depending on how much you use AI now, I suppose) is in all ways deterministic. Using probabilistic algorithms is not new, but your point is not clicking... | |
| ▲ | organsnyder an hour ago | parent [-] | | Working with humans is decidedly not deterministic, though. And the discussion here is comparing AI coding agents and humans. | | |
| ▲ | RhythmFox 39 minutes ago | parent [-] | | That starts to get into a very philosophical space talking about human action as deterministic or not. I think keeping to the fact that the artifacts (ie code) we are working off will have deterministic effects (unless we want it not to) is exactly the point. That is what lets chaotic human brains communicate with machines at all. Adding more chaos to the system doesn't strike me as obviously an improvement. |
|
|
| |
| ▲ | darkhorse222 2 hours ago | parent | prev [-] | | Similar to quantum computing, a probabilistic model when condensed to sufficiently narrow ranges can be treated as deterministic. |
|
| |
| ▲ | bazmattaz 3 hours ago | parent | prev | next [-] | | Damn, this is so accurate. As a project manager turned product manager, this is so true. You need to estimate a project based on the “pedigree” of your engineers | |
| ▲ | munk-a 2 hours ago | parent | prev | next [-] | | Would it make us uncomfortable to reword the above example to > AI coding is gambling on slot machines, managing developers is gambling on the stock market. Because I feel like that is a much more apt analogy. | |
| ▲ | cko 2 hours ago | parent | prev | next [-] | | What is it with you guys and stallions? | | |
| ▲ | deadbabe 2 hours ago | parent [-] | | There is a long history of managers just wanting to work their developers like horses. |
| |
| ▲ | 2 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | edu 2 hours ago | parent | prev [-] | | Great analogy, I’m saving it! |
|
|
| ▲ | MeetingsBrowser 3 hours ago | parent | prev | next [-] |
| You (in theory) have more control over the quality of the team you are managing than over the quality of the models you are using. And the quality of the code models put out is, in general, well below the average output of a professional developer. It is, however, much faster, which makes the gambling loop feel better. Buying and holding a stock for a few months doesn't feel the same as playing a slot machine. |
| |
| ▲ | PaulHoule 3 hours ago | parent | next [-] | | One difference is those developers are moral subjects who feel bad if they screw up whereas a computer is not a moral subject and can never be held accountable. https://simonwillison.net/2025/Feb/3/a-computer-can-never-be... | | |
| ▲ | ponector 2 hours ago | parent [-] | | Right, you need to hire a scapegoat. Usually the tester has that role: little influence but huge responsibility for quality. |
| |
| ▲ | est31 3 hours ago | parent | prev | next [-] | | You have a lot of control over LLM quality. There are different models available. Even with different effort settings of those models you get different outcomes. E.g. look at the "SWE-Bench Pro (public)" heading on this page: https://openai.com/index/introducing-gpt-5-4/ , showing reasoning efforts from none to high. Of course, they don't learn like humans, so you can't do the trick of hiring someone less senior but with great potential and then mentoring them. Instead it's more of an up-front price you have to pay. The top models at the highest settings obviously form a ceiling, though. | |
| ▲ | MeetingsBrowser 2 minutes ago | parent | next [-] | | Imagine you opened a job posting and had all applicants complete SWE-bench. Ignoring the useless/unqualified candidates and models, human applicants have a much wider range of talent for you to choose from than the top models + tooling. The frontier models + tooling are, in the grand scheme of things, basically equivalent at any given moment. Humans can be just as bad as the worst models, but models are nowhere near as good as the best humans. | |
| ▲ | kraemahz 2 hours ago | parent | prev [-] | | You also have control over the workflow they follow and the standards you expect them to stick to, through multiple layers of context. Expecting a model to understand your workflow and standards without doing the effort of writing them down is like expecting a new hire to know them without any onboarding. Allowing bad AI code into your production pipeline is a skill issue. |
| |
| ▲ | tossandthrow 2 hours ago | parent | prev [-] | | What theory is that? My experience is the absolute opposite. I am much more in control of quality with AI agents. I am never letting junior to midlevels into my team again. In fact, I am not sure I will allow any form of manual programming in a year or so. | |
| ▲ | MeetingsBrowser 2 hours ago | parent | next [-] | | > I am never letting junior to midlevels into my team again Exactly. You control the quality of the people in your team. You can train, fire, hire, etc until you get the skill level you want. You have effectively no control over the quality of the output from an LLM. You get what the frontier labs give you and must work with that. | | |
| ▲ | tossandthrow an hour ago | parent [-] | | That is not correct. It is much easier to control the quality of an AI than of inexperienced developers. | |
| ▲ | MeetingsBrowser 7 minutes ago | parent [-] | | I think we are talking past each other. > I am never letting junior to midlevels into my team again My point is, you control the experience level of the engineers on your team. The fact that you can say you won't let junior or midlevels on your team proves that. You do not have that level of control with LLMs. Anthropic and OpenAI are roughly the same quality at any given time. The rest are not useful. |
|
| |
| ▲ | DrJokepu 2 hours ago | parent | prev [-] | | Eh. You want a good mix of experience levels; what really matters is that everyone should be talented. Less experienced colleagues are unburdened by yesterday’s lessons that may no longer be relevant today; they don’t have the same blind spots. Also, our profession is doomed if we won’t give less experienced colleagues a chance to shine. | |
| ▲ | tossandthrow an hour ago | parent [-] | | Our profession is likely doomed not because we don't train people, but by the lack of demand |
|
|
|
|
| ▲ | ChiefTinkeer 2 hours ago | parent | prev | next [-] |
| I think this is a very good point. We have a natural bias toward human output as there is an illusion of full control - in reality, even just from a solo dev perspective, you've still got a load of hidden illogical persuasions influencing your code and how you approach a problem. AI has its own biases that come out of the nature of its training on large, unknowable data sets, but I'd argue the 'black box' thinking that comes out of that isn't too different from the black box of the human mind. That's not at all to say that AI isn't worse (even if quicker) than top developer talent writing handwritten code today - just that the barrier to getting that level of quality isn't as insurmountable as it might appear. |
|
| ▲ | Spooky23 an hour ago | parent | prev | next [-] |
| It absolutely is. I did some consulting work for an environment where they have to churn out code to meet certain unchanging schedules; usually you can dumb down the process to make it more deterministic. These guys had to manage a very complex calculation engine based on, we'll just say, rules that change every year: it had to be correct and had to be delivered by a certain date, every year. They had an army (100-200 people depending on various factors) of marginally skilled coding drones that were able to turn out the Java, COBOL or whatever it was, predictably, on that schedule, without necessarily understanding any of the big picture or having any hope of doing so. Basically a software factory. There were about a dozen people who actually understood everything. |
|
| ▲ | nkrisc an hour ago | parent | prev | next [-] |
| Only if you consider generative AI and human beings to be effectively equivalent. Being a project manager is more or less something humans have been doing since the dawn of time. Generative AI takes money as input and gives some output. If you don’t like the output, more money goes in. It’s far more akin to gambling than organizing human labor. |
|
| ▲ | QuantumGood 3 hours ago | parent | prev | next [-] |
| Framing anything with a common blanket concept usually fails to apply the same framing to related areas. A lot of things include some gambling; you need to compare how it was also 'gambling' before, how 'not using AI' is also 'gambling', etc. As @m00x points out, "AI coding is gambling on slot machines, managing developers is betting on race horses." |
|
| ▲ | krupan 2 hours ago | parent | prev | next [-] |
| I asked an AI to play hangman with me and looked at its reasoning. It didn't just pick a secret word and play a straightforward game of hangman. It continually adjusted the secret word based on the letters I guessed, providing me the "perfect" game of hangman. Not too many of my guesses were "right" and not too many "wrong", and after a little struggle and almost losing, I won in the end. It wasn't a real game of hangman, it was flat-out manipulation, engagement farming. Do you think it's possible that AI does that in any other situations? |
| |
| ▲ | lcampbell 35 minutes ago | parent [-] | | The reasoning generally isn't kept in the context, so after choosing the secret word in the first reasoning block, the LLM will have completely forgotten it in the second and subsequent requests. So, it technically didn't change the secret word so much as it was trying to infer what its own secret word might have been, based on your guesses. |
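A sketch of what the model actually sees on a later turn, in a typical chat-completions-style message list (the message shapes and the word 'orange' are illustrative, not any specific vendor's API):

```python
# Hypothetical second-turn request in the hangman game. The reasoning
# block where the model "picked" its word is never sent back; only the
# visible assistant reply is, so the secret word is effectively lost.
history = [
    {"role": "user", "content": "Let's play hangman. Pick a secret word."},
    # reasoning (discarded, never re-sent): "I'll pick 'orange'."
    {"role": "assistant", "content": "Okay! My word has 6 letters: _ _ _ _ _ _"},
    {"role": "user", "content": "Is there an 'a' in it?"},
]

# All the model can do now is infer some 6-letter word consistent with
# the visible text above; nothing in `history` records what it "chose".
secret_survives = any("orange" in m["content"] for m in history)
```

So each turn, the model is improvising a word that fits its own earlier replies — which is exactly why the game drifted toward a "perfect" narrative rather than a fixed answer.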
|
|
| ▲ | runarberg 3 hours ago | parent | prev | next [-] |
| I don't think so. A project manager can give feedback, train their staff, etc. An AI coding model is all you get, and you have to wait until your provider trains a new model before you might see an improvement. |
|
| ▲ | ModernMech 2 hours ago | parent | prev | next [-] |
| That says more about how you see developers than whether or not managers are in a sense gamblers. |
|
| ▲ | ares623 3 hours ago | parent | prev | next [-] |
| This must be it. So many of our colleagues have been burnt by bad coworkers that they would rather burn everything down than spend another day working with them. |
|
| ▲ | rvz 3 hours ago | parent | prev | next [-] |
| > AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results. Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue, whereas an AI cannot, making such an entity completely unsuitable for high-risk situations. This typical AI booster comparison has got to stop. |
| |
| ▲ | tossandthrow 2 hours ago | parent | next [-] | | Love that you needed to make it clear that it is humans that can explain themselves... Employees can only be held accountable in cases of severe malice. There is a good chance that the person actually responsible (e.g. the CEO, or someone delegated to be responsible) will soon prefer to have AIs do the work, as their quality can be quantified. | |
| ▲ | thunky 2 hours ago | parent | prev [-] | | > Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue whereas an AI cannot You "own" the software it creates which means you're responsible for it. If you use AI to commit crimes you'll go to jail, not the AI. |
|
|
| ▲ | underlipton 3 hours ago | parent | prev [-] |
| As a human, you generally have the opportunity to make decent headway in understanding the other humans that you're working with and adjusting your instructions to better anticipate the outputs that they'll return to you. This is almost impossible with AI because of a combination of several factors: >You are not an AI and do not know how an AI "thinks". >Even if you come to be able to anticipate an AI's output, you will be undermined by the constant and uncontrollable update schedule imposed on you by AI platforms. Humans only make drastic changes like this under uncommon circumstances, like when they're going through large changes in their life, not as a matter of course. >However, without this update schedule, problems that were once intractable will likely stay so forever. Humans, on the other hand, can grow without becoming completely unpredictable. It's a Catch-22. AI is way closer to gambling. |