▲ llmslave2 15 hours ago
I think the reason for the varying claims and predictions is that developers have wildly different standards for what constitutes working code. For developers with a lower threshold, AI is like crack: generative AI's output is similar to what they would produce anyway, so for them it really is a 10x speedup. For others, especially those who have to fix and maintain that code, it's more like a 10x slowdown. That's why, in the same thread, one developer claims Claude writes 99% of their code while another finds it totally useless, with plenty of others somewhere in the middle.
▲ throw1235435 14 hours ago | parent | next
There's also the effect of different models. Until the most recent models, especially for concise algorithms, I felt it was often easier to just write the code myself (a good algorithm can be as concise as, or more concise than, a lossy prompt describing it) and leave the expansive, repetitive boilerplate to the LLM. At least for me, the latest models do feel like a step change: the problems can be bigger, or need less supervision per problem, depending on which tradeoff you want.
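Euclid's GCD is a handy illustration of what I mean by concision; this is a rough Python sketch, and the code ends up shorter than a prompt that pins down the same behavior unambiguously:

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: replace (a, b) with (b, a % b) until b hits 0.
        while b:
            a, b = b, a % b
        return a

Spelling out the equivalent spec in English (argument order, zero handling, termination) usually takes more words than those five lines of code.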
▲ remich 14 hours ago | parent | prev
Have you considered that it's a bit dismissive to assume that developers who get value out of AI tools necessarily approve of worse code than you do, or have lower standards? It's fine to be a skeptic, or to have tried these tools and found that they don't work well for your particular use case at this moment. But you shouldn't assume that people who do get value out of them are worse at the job than you, or dumber, or slower. That's not a good practice, and it's also rude.