Quothling 4 hours ago
I think AI will fail in any organisation where the business process problems are sometimes discovered during engineering. I use AI quite a lot; I recently had Claude upgrade one of our old services from the HubSpot API v1 to v3 with basically no human interaction beyond the code review. I had to ask it for two changes, I think, but overall I barely stepped out of my regular work to get it done. I knew exactly what to ask of it because the IT business partners who had discovered the flaw had basically written the tasks already. Anyway, AI worked well there.

Where AI fails us is when we build new software to improve the business around solar energy production and sale. It fails us because the tasks are never really well defined. Or even if they are, sometimes developers or engineers come up with a better way to do the business process than what was planned for. AI can write the code, but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to do X first. If we only did code reviews, we would miss that step. In a perfect organisation your BPM people would do this. In the world I live in there are virtually no BPM people, and those who know the processes are too busy to really deal with improving them. Hell... sometimes their processes are changed and they don't realize it until their results are measurably better than they used to be.

So I think it depends a lot on the situation. If you've got people breaking up processes, improving them, and then describing each little bit in decent detail, then I think AI will work fine; otherwise it's probably not the best place to go full vibe.
bonesss 3 hours ago | parent | next
> AI can write the code, but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to…

LLMs combine two dangerous traits: they are uncritical of suboptimal approaches, and they assist unquestioningly. In practice that means doing dumb things a lazy human would refuse because they know better, then following those rabbit holes until they run out of imaginary dirt.

My estimation is that that combination undermines their productivity potential without very structured application. Considering the excess and escalating cost of dealing with issues as they arise further from the developer's workstation (by factors of approximately 20x, 50x, and 200x+ as you get out through QA and into customer environments, IIRC), you don't need many screw-ups to make the effort net negative.
ivell 3 hours ago | parent | prev | next
One benefit of AI could be building quick prototypes to discover what processes are needed, letting users try out different approaches before committing to a full high-quality project.
Onavo 4 hours ago | parent | prev | next
> business process problems are sometimes discovered during engineering

This deserves a blog post all on its own. OP, you should write one and submit it. It's a good counterweight to all the AI optimist/pessimist extremism.
viraptor 3 hours ago | parent | prev
> but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to do X first

Then don't ask it to write code? If you ask any recent high-quality model to discuss options, tradeoffs, and design constraints, and to refine specs, it will do it until you're sick and tired of it finding real edge cases and alternatives. Ask for just code and you'll get just code.
| |||||||||||||||||