| ▲ | swiftcoder a day ago |
| > Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works? That is exactly what they are doing, yes |
|
| ▲ | Verdex a day ago | parent | next [-] |
| That's my take as well. I've had my unPRed branches grabbed up and blindly merged by an agent twice now. The guy doing it was shocked both times that his PR had my change sets in it. Also, one engineer is treating the code as assembly. I've asked some pointed questions about code in his PR and the response was "yeah, I don't know, that's what the agent did". Edit: To everyone freaking out about the second guy: yeah, I think being unable to answer questions about the code you're PRing is ill-advised. But requirements gathering, codebase untangling, and acceptance testing are all nontrivial tasks that surround code gen. I'm a bit surprised that having random change sets slurped up into someone else's rubber-stamped PR isn't the thing that people are put off by. |
| |
| ▲ | steveBK123 a day ago | parent | next [-] | | My friend is a CTO at a non-tech company and he's now dealing with code from non-SWEs trying to self-serve with LLMs. But it's like a kid running a lemonade stand. Total DIY weekend-project-quality stuff that they are demanding go live. Hardcoded credentials, no concept of dev/qa/prod environments, no logging, no tests, no source control. I'm not really sure teaching basic SWE practices / SDLC / system design to people whose day job is like... accounting makes sense compared to just accelerating developer productivity. | | |
| ▲ | bonesss a day ago | parent | next [-] | | It's the same dilemma as of old: it's easier to teach a doctor UML than to teach a coder doctoring. But, critically, that's about making doctor-facing IT systems, not performing their skilled jobs. Bringing code does not help, but a validated user story with flow diagrams, a UI suggestion, and a valid ticket could. That's the gap to bridge. Were I that CTO I'd explain that code carries liability: SWEs can end up in jail for malfeasance; fines, penalties, and lawsuits are what await us for eff-ups. "Coders" get fired if their code doesn't work. Same speech to the devs: do exactly as much unsolicited accounting as you wanna get fired for. Talk fences, good neighbours. | | |
| ▲ | steveBK123 a day ago | parent [-] | | The ROI on teaching UML to a doctor is pretty low though, right? Non-technical people are not writing tickets; they are just slinging slop. Another anecdote of things I've seen: a non-technical person setting up some web-scraping monstrosity with 200k lines of code. They beat their chest about how they didn't need the IT org. A month goes by and of course it breaks as soon as anything on the website changes, and now they have a gun to IT's head to "fix it" and take it over. This outcome for a DIY brittle web scraper is obvious to anyone who's ever written code, but shocking to someone who thinks LLMs are magic. |
| |
| ▲ | swader999 a day ago | parent | prev [-] | | No, you should have forward deployed engineers sitting and working right beside these traditional non SW roles if you need to fully integrate AI into their mix. | | |
| ▲ | steveBK123 a day ago | parent [-] | | Right, unfortunately a lot of orgs are quickly letting loose some combination of non-tech self-serve AI coding and tech org staffing reductions rather than ADDING forward deployed engineers. |
|
| |
| ▲ | sikozu a day ago | parent | prev | next [-] | | So he's being paid and is sitting there letting an AI tool do his work for him? Insanity. | | |
| ▲ | robotresearcher a day ago | parent [-] | | We didn’t mind when typesetting was automated. Or when compilers were invented. Why is this different? | | |
| ▲ | calmingsolitude a day ago | parent | next [-] | | Because he's paid to deliver code that works. Letting an AI agent do everything would be fine if it didn't make any mistakes, but that's far from reality. | | |
| ▲ | robotresearcher 7 hours ago | parent [-] | | Compilers and typesetters make mistakes. Fewer as time goes on, but that’s not a categorical difference. |
| |
| ▲ | hliyan a day ago | parent | prev | next [-] | | Do typesetters inexplicably change the meaning of the book or document being typeset? Do compilers alter the behavior intended by the programmer, sometimes in ways that are not immediately obvious? Did the invention of typesetters lead to investments so massive that the investors had to herald the end of handwriting (no equivalent analogy for compilers)? | |
| ▲ | robotresearcher 7 hours ago | parent | next [-] | | On compilers, you know they do! Compilers have bugs and some languages have undefined behavior. On typesetters and investment: the WYSIWYG word processor is on almost every home and office desk in the world. | |
| ▲ | dwedge a day ago | parent | prev [-] | | It reminds me of the guy who replaced his static blog deployment scripts with asking ChatGPT to generate the HTML from his text based on a template, and said that he isn't sure the LLM isn't changing his writing but hopes it isn't |
| |
| ▲ | Ekaros a day ago | parent | prev | next [-] | | So I take it we can soon replace coders entirely. Just fire all of them, and let some intern under a VP prompt the whole thing? | |
| ▲ | vga1 a day ago | parent | prev | next [-] | | Resistance to technological change has been a thing since farming was invented. Socrates thought that writing would ruin everyone's memory, and that people who just rely on the written word would appear knowledgeable while actually knowing nothing. The only difference is that this is happening to us. | |
| ▲ | mrhottakes a day ago | parent | prev [-] | | Do typesetters or compilers write the code for you? Or are you perhaps using a disingenuous analogy? | | |
| ▲ | robotresearcher 7 hours ago | parent [-] | | A compiler writes the ASM code for you, and the typesetter does the layout for you, yes, absolutely. The high-level language code is a prompt for the compiler. Consider that there is parsable C code whose behavior is not even defined. There are still bugs in compilers today where the code produced is not what you intended. And further, modern compilers do lots of work to optimize performance. You usually don't even look at the resulting code; you just gratefully accept the rewrite for the extra oomph. |
|
|
| |
| ▲ | esafak a day ago | parent | prev [-] | | To that last guy, as the manager I would say "What is it that you do here??" | | |
| ▲ | npongratz a day ago | parent | next [-] | | That's just a straight-shooter with "upper management" written all over him. | |
| ▲ | throwup238 a day ago | parent | prev | next [-] | | He signs the TPS reports. | |
| ▲ | misterboo72 a day ago | parent | prev | next [-] | | I then just basically space out for a while. | |
| ▲ | Mistletoe a day ago | parent | prev [-] | | “I’m the prompter.” | | |
| ▲ | esafak a day ago | parent | next [-] | | I take the prompts to the AI so the manager doesn't have to! I have prompting skills!! I just can't make the joke work. There really are people that think they can get paid to press the agent's on button. How long before their checks stop clearing and it "just works itself out naturally"? | | |
| ▲ | storus a day ago | parent | next [-] | | That's literally how some Meta AI jobs looked a few years back - set up a few parameters, push a button, wait until training and evals are finished; repeat if needed. $500k+/year. | |
| ▲ | fragmede a day ago | parent | prev | next [-] | | What color is your stapler? | |
| ▲ | xienze a day ago | parent | prev [-] | | > I take the prompts to the AI so the manager doesn't have to! I have prompting skills!! This is honestly the mindset of the people on here who proudly proclaim that they haven't written a line of code in six months and are excited about what programming is "evolving" into. Naturally, _their_ AI skills aren't something that an "idea guy" can use to build a product without looping in a developer, so _his_ job is safe and will never go away -- "I understand system design, an LLM will never be able to do that!" Sure thing buddy. | | |
| |
| ▲ | weirdmantis69 a day ago | parent | prev [-] | | "I write the prompts" |
|
|
|
|
| ▲ | entropicdrifter a day ago | parent | prev | next [-] |
It's bizarre to me that people being paid to use their brains, with a job title including the word "engineer" (which essentially means "clever thought thinker" in Latin), are offloading all of their thinking to a bot instead of just using it as a way to ensure clean execution and faster understanding of the structures of underdocumented projects. |
| |
| ▲ | SpicyLemonZest a day ago | parent [-] | | There are some people who are offloading all of their thinking to a bot, and I agree with you that I don't really understand this. But the good version of it is to offload some of your thinking to a bot so you can focus your own thinking on the parts that matter. My time is much better spent on "ah, there is a scalability tradeoff here" than "I guess I have to initialize the FooBarProviderServiceProvider in a different spot so that I can pass a mock to the FooBarProvisionConsumer unit tests". |
|
|
| ▲ | ravenstine a day ago | parent | prev | next [-] |
| And why wouldn't they? Companies are quite literally instructing them to do so. I work at such a company and have heard similar anecdotes from colleagues that work at other companies. |
| |
| ▲ | solenoid0937 a day ago | parent [-] | | Why wouldn't you do this even if not instructed to do so? I can do so much more with my spare time now. I throw agents at problems and get way more done. $1k in tokens every day is easy to hit. | | |
| ▲ | mkehrt a day ago | parent [-] | | What exactly are you “getting done”? I’m really curious what you’re doing with so many tokens. |
|
|
|
| ▲ | fnordpiglet a day ago | parent | prev | next [-] |
To be fair, taking an average SWE at $160k/y, spending $1k/mo to offload mechanical ticket work from their working set sounds like a bargain to me. They could be spending the time on design and planning, working on new things, and figuring out how to save costs through optimizations. In fact, for every soul-sucking mechanical task you offload, the better off you are overall. It's not like AI is the first time this happened. CI/CD and extensive preflight, integration, and canary testing are also ways of saving engineer time and improving throughput at the cost of latency and compute resources. This is just moving up the semantic stack. Obviously as engineers we say "awesome, more features and products!" but management says "awesome, fewer engineers!" Either way, pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice. There's no John Henry award. |
| |
| ▲ | swiftcoder a day ago | parent | next [-] | | > pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice If it were producing equivalent outcomes, sure. So far I haven't personally seen strong evidence for that. LLMs do write code pretty competently at this point, but actually solving the correct problem, and without introducing unintended consequences, is a different matter entirely | |
| ▲ | entropicdrifter a day ago | parent [-] | | This. LLMs are terrible at planning/architecture and maintaining clarity of vision across a project. There are lots of tools that mitigate these issues but they're going to keep coming up regardless because of the fundamental nature of LLMs. If you're not doing the design of the solutions for problems as an engineer or at least making the decisions and owning the maintenance of that architecture/design, what even is your job at that point? | | |
| ▲ | anal_reactor 21 hours ago | parent [-] | | > LLMs are terrible at planning/architecture and maintaining clarity of vision across a project. So are many corporations but that doesn't stop them from being economically successful. |
|
| |
| ▲ | Aurornis a day ago | parent | prev [-] | | > and offloading mechanical ticket work from their working set sounds like a bargain to me Unfortunately the people who offload the work of understanding and interacting with tickets just end up offloading the consequences to everyone else who has to do extra work to make sure their LLM understands the task, review the work to make sure they built the right thing, and on and on. The same thing happens when people start sending AI bots to attend meetings: The person freed up their own time, but now everyone else has to work hard to make sure their AI bot gets the right message to them and follow up to make sure what was supposed to happen in the meeting gets to them. | | |
| ▲ | fnordpiglet a day ago | parent | next [-] | | Managers have processes for correcting for these behaviors and they fall into the second bucket of outcomes I mentioned. | |
| ▲ | AnimalMuppet a day ago | parent | prev [-] | | If someone sends a bot to a meeting, warn them the first time. Fire them the second, for exactly the reason that you said in your last paragraph: They're pushing their work onto other people. |
|
|
|