▲ | forty 4 days ago |
I assume they have an opinion on the topic, but that doesn't mean they are right (or wrong). Think of driving a car: if the shortest path (in terms of travel time) goes through a traffic jam, and there is a longer path where you can drive much faster, it's very likely that most people will feel more efficient taking the longer path. Also, the slowdown from using LLMs might be more subtle and harder to measure. It might show up at code review time, in handling more bugs and incidents, in harder maintenance, in recovering your deleted DB ;)...
▲ | epolanski 4 days ago | parent | next [-]
Apologies, but from antirez[1] to many other brilliant 1000x developers, plenty advocate for LLMs speeding up the process. I can see the impact on my own output in both quantity and quality (LLMs can come up with ideas I would not come up with, and are very useful for tinkering and quickly testing different solutions). As with any tool, it is up to the user to make the best of it and understand its limits. At this point it is clear that the naysayers: 1) either don't understand our job, 2) or haven't given AI tools proper stress testing in different conditions, 3) or are luddites being defensive about the "old" world.
▲ | eichin 4 days ago | parent | prev [-]
We've known for decades that self-reported time perception in computer interactions is drastically off (Jef Raskin covers this in The Humane Interface in particular), so unless they have some specifically designed external observations, they are more likely to be wrong. (There have been more recent studies, discussed here on HN, about perception of chat interfaces for coding specifically, which confirm the effect on modern tools.)