ivansmf 4 hours ago
The article severely underestimates deployment times for large, worldwide services. The usual strategy is to shrink the "blast radius" of each deployment and roll out in stages that are also time bound ("let it bake"). It also does not account for outages, or for fixing the things you only find during deployment.

Programming languages like Python, or runtime dependency injection in Java (e.g. using Guice), either need pristine testing (and all the test teams were converted to dev 20 years ago) or offer a magical way to destroy all the help compilers and static analysis can give you (sketch at the end of this comment).

So yeah, take the 4 weeks of development out of your 6-month deployment, then add 6 weeks of debugging and retries from using AI. You're welcome; that will be 3 million tokens, of which you wrote 1k. The rest was system prompts and "reasoning", which you do not control.

This whole AI space is highly fixable, but it requires investment no one seems willing to make, particularly in areas that were mistakes of the past.
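To make the Guice point concrete, here is a minimal sketch (PaymentGateway and Checkout are hypothetical names invented for illustration, not from any real codebase): the code compiles cleanly, yet the missing binding only blows up at runtime, which is exactly the class of error a compiler would otherwise have caught.

```java
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

// Hypothetical dependency, resolved by Guice at runtime rather than
// wired explicitly at compile time.
interface PaymentGateway {
    void charge(int cents);
}

class Checkout {
    private final PaymentGateway gateway;

    @Inject
    Checkout(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    void run() {
        gateway.charge(100);
    }
}

public class Main {
    public static void main(String[] args) {
        // Compiles without a single warning: no module binds
        // PaymentGateway to an implementation, and neither javac nor
        // plain static analysis can see that. The failure surfaces
        // only here, at runtime, as a Guice ConfigurationException
        // ("No implementation for PaymentGateway was bound").
        Injector injector = Guice.createInjector();
        Checkout checkout = injector.getInstance(Checkout.class);
        checkout.run();
    }
}
```

With explicit construction (`new Checkout(new StripeGateway())` or whatever), the same mistake is a compile error. Move the wiring into an injector and you only learn about it in deployment, which is where those extra weeks of debugging come from.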