ryandrake 3 hours ago:
Using AI to go faster is optimizing the wrong thing. At every place I've worked, the "code writing" part takes the least amount of time, compared to all the other things you need to do in order to implement a feature. Let's examine a feature that takes a day to code:

First, you've got to plan everything, using whatever Agile or Waterfall planning ritual your company uses, get the task breakdown, file the JIRA tickets, and decide who's doing the work. That can take days or even weeks. Then you need to write a design doc with your proposed design and get it reviewed by your peers/teammates. Again, another week for any substantial feature. If there are multiple teams involved, you need to get buy-in and design agreement among those teams; let's add another week. At some places, you need approval to commence work, which can take multiple days, depending on the approver's schedule and availability.

Then, you take a day, write the code, and make sure it passes tests.

Then it's code review time, and this can involve a lot of back and forth with your team, resulting in multiple iterations and additional code reviews. Another "days or weeks" stretch. At bigger companies, you're going to need to pass all sorts of reviews from other departments, like legal, privacy, performance, accessibility, QA... even if done in parallel, let's add a conservative 2 weeks. Finally, you push to staging, and need to get some soak time internally among dogfooders, so you have some confidence that it's working. +1 week. Then you're ready to push from staging to prod, but since you work at a serious company, nothing goes to 100% prod right away--you need to slowly ramp up and check feedback/metrics in case you need to roll back. The ramp to fully launched could take another two weeks.

So here's a feature that took, what, maybe two months from design to release, and we're falling all over ourselves to optimize the part that took a day so that it takes 5 minutes instead...
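The slow ramp described at the end (staging, then a small percentage of prod, then everyone) is typically implemented with deterministic user bucketing, so a given user stays on the same side of the flag as the percentage grows. A minimal sketch in Python; the function name, feature name, and hash-based bucketing scheme are illustrative assumptions, not any particular feature-flag product:

```python
import hashlib

def is_enabled(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into [0, 100) for a feature.

    The same user always lands in the same bucket, so ramping the
    percentage up (or rolling it back) only flips users at the margin.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < rollout_pct

# Hypothetical ramp schedule: 1% -> 10% -> 50% -> 100%,
# checking metrics before each step up.
users = [f"user-{i}" for i in range(10000)]
for pct in (1, 10, 50, 100):
    enabled = sum(is_enabled(u, "new-checkout", pct) for u in users)
    print(f"{pct:>3}% target -> {enabled} of {len(users)} users enabled")
```

Because the bucketing is deterministic and monotonic, raising the percentage never disables a user who already had the feature, which is what makes the metrics comparison at each step meaningful.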
AdieuToLogic 2 hours ago:
> Using AI to go faster is optimizing the wrong thing. At every place I've worked, the "code writing" part takes the least amount of time, compared to all the other things you need to do in order to implement a feature.

This reminds me of one of my software engineering axioms:
> So here's a feature that took, what, maybe two months from design to release, and we're falling all over ourselves to optimize the part that took a day so that it takes 5 minutes instead...

Well said.
dilyevsky 2 hours ago:
1. Models are now extremely good at totally automating tedious tasks such as updating dependencies, build/deploy scripts, unit tests, etc. What used to take days can now take minutes -- easily a 50x speedup. This was a non-trivial part of every engineer's day-to-day at an established company. "Platform engineering," or whatever they call this now, is dead.

2. Technically risky ideas that you never would have tried, because they didn't make sense from a risk+effort/reward standpoint, are now within reach. It isn't "go faster" per se, but the speed at which you can try something out still changes the nature of the engineering process.
ajam1507 3 hours ago:
It very much depends on what kind of company you work for. You could never run a startup like this, for example. | ||||||||
ex-aws-dude 3 hours ago:
Not every company works like that. Big tech has a lot of wankery like that, but smaller companies can be fast and scrappy.
CodeShmode 2 hours ago:
Or you can have a conversation with an agent to build up a requirements/plan spec, asking it to analyse existing code patterns. When it seems like the agent has a good understanding of what needs to be done and how, ask it to implement, keeping the changes as a local spike.

Ask the agent questions about all the other teams' code, reaching out to them for questions it can't answer or for clarification. With current agent capabilities this is rare, or can be done fairly async: "please confirm these things."

Maybe realise your code architecture is completely wrong. Manually code up some new abstractions that fit better, and write the learnings into the spec plan. Strip out any implementation that largely doesn't fit your updated abstractions, then ask the agent to migrate the code to the new structure. Repeat until the spike is operational and you're happy with the abstractions used.

Chat with the agent to create a Design Doc for the approach in the spike. Create a single JIRA ticket for "Productionise CodeShmode's spike". Get reviews and feedback from stakeholders. Integrate feedback into your spike, or even into the original spec document, and regenerate the whole thing.

So much of the ritual you've outlined here is overhead from working in a large org where roles are siloed. When one person is empowered to do more, the actual work per person goes down and the overhead becomes the dominant cost. But that overhead isn't needed anymore, because one person can now do many people's work. I've whipped up spikes in a few days that would've been a month of work across a team, multiple DDs, and approvals. In the past this wasn't feasible, so we would need to justify what those people would work on. Now you can whip it up, show a working demo, and ask "should we productionise this?"
gwerbin 2 hours ago:
I think it depends a lot on how automated the agent is and how long you let it run for. Full automation, where you try to build an entire piece of software with agents... yeah, no, we are not there yet. At least not if you care about maintainability.

Short-lived, tightly-scoped agents can do alarmingly thorough and high-quality knowledge work, as long as the work itself is relatively mechanical and can be carried out either in independent chunks or sequentially. For example, a research agent like the Gemini "deep research" tool can save hours of digging around the web and compiling information. With careful prompting, sufficient background context, and good self-evaluation tools, an agentic loop can do very detailed data analysis, carry out serious statistics and machine learning projects, produce high-quality data visualization thereof, and put together a handy executive summary.

They occasionally hallucinate, go off track, get confused, and make mistakes. But they "know" everything that's been published in English for the last 200 years, they never get tired, and the code they write is good enough for throwaway scripting. The real power of agents being able to write code is that they can be extremely self-sufficient and flexible in carrying out these kinds of tree- and sequence-structured knowledge work tasks.

That's of course a different thing from "designing good software", which is neither tree-structured nor sequential, and requires a level of intelligence (for lack of a better term) that LLMs do not seem to be capable of, at least not yet. But that's a more specific thing than just writing code in order to get stuff done that happens to require code.
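The "short-lived, tightly-scoped agent" pattern above boils down to a bounded propose/execute/self-evaluate loop. A toy sketch under assumptions: `propose_fix` and `evaluate` are hypothetical stand-ins for an LLM call and a checking tool (e.g. a test suite), neither of which is shown:

```python
from typing import Callable, Optional, Tuple

def agent_loop(
    task: str,
    propose_fix: Callable[[str, str], str],        # (task, feedback) -> candidate
    evaluate: Callable[[str], Tuple[bool, str]],   # candidate -> (ok, feedback)
    max_steps: int = 5,
) -> Optional[str]:
    """Run a tightly-scoped agent: propose, check, feed errors back.

    Capping max_steps is what keeps the agent "short-lived": it either
    converges quickly or escalates to a human instead of wandering.
    """
    feedback = ""
    for _ in range(max_steps):
        candidate = propose_fix(task, feedback)
        ok, feedback = evaluate(candidate)
        if ok:
            return candidate
    return None  # escalate: the mechanical loop didn't converge

# Toy stand-in: a "model" that needs two rounds of feedback to get it right.
answers = iter(["draft", "better", "correct"])
result = agent_loop(
    "fix the failing unit test",
    propose_fix=lambda task, fb: next(answers),
    evaluate=lambda c: (c == "correct", f"'{c}' still fails"),
)
print(result)  # -> correct
```

The self-evaluation hook is doing the heavy lifting: the loop is only as trustworthy as the check it runs each iteration, which matches the observation that these agents shine on mechanical work with a clear pass/fail signal.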
threethirtytwo 2 hours ago:
> Using AI to go faster is optimizing the wrong thing. At every place I've worked, the "code writing" part takes the least amount of time, compared to all the other things you need to do in order to implement a feature. Let's examine a feature that takes a day to code:

AI writes the plans now. I just review and modify.