▲ flatline 4 hours ago
Humans are just better at communicating about their process. They will spend hours talking over architectural decisions and implementation issues, writing technical details in commit messages and issue notes, and in this way they not only debug their decisions but socialize knowledge of both the code and the reasons it came to be that way. Communication and collaboration are the real adaptive skills of our species. To the extent AI can aid in those, it will be useful. To the extent it goes off and does everything in a silo, it will ultimately be ignored - much like many developers who attempt this.

I do think the primary strengths of genai are more in comprehension and troubleshooting than generating code - so far. These activities play into the collaboration and communication narrative. I would not trust an AI to clean up cruft or refactor a codebase unsupervised. Even if it did an excellent job, who would really know?
▲ crazygringo 3 hours ago | parent
> Humans are just better at communicating about their process.

I wish that were true. In my experience, most of the time they're not doing the things you talk about -- major architectural decisions don't get documented anywhere, commit messages give no "why", and the people the knowledge got socialized to in unrecorded conversations have since left the company.

If anything, LLMs seem to be far more consistent in documenting the rationales for design decisions, leaving clear comments in code and commit messages, etc., if you ask them to.

Unfortunately, humans generally are not better at communicating about their process, in my experience. Most engineers I know enjoy writing code and hate documenting what they're doing. Git and issue-tracking have helped somewhat, but it's still very often about the "what" and not the "why this way".
| ||||||||||||||||||||||||||