seer 5 hours ago |
Honestly, this seems very much like the jump from being an individual contributor to being an engineering manager. When it happened to me it was rather abrupt, with no training in between, and the feeling was eerily similar. You know _exactly_ what the best solution is, you talk to your reports, but they have minds of their own, as well as egos, and they do things … their own way. At some point I stopped obsessing over details and just gave guidance and direction in the cases where it really mattered, or when asked, and let people make their own mistakes. Now, LLMs don’t really learn on their own or anything, but the feeling of “letting go of small trivial things” is sorta similar. You concentrate on the bigger picture, and if it chose an iterative for loop instead of the functional approach you prefer … well, the tests still pass, don’t they.
Ronsenshi 40 minutes ago | parent |
The only issue is that as an engineering manager you reasonably expect the team to learn new things, improve their skills, and in general grow as engineers. With AI and its context handling, you're working with a team where each member has severe brain damage that affects their ability to form long-term memories. You can rewire their brains to a degree, teaching them new "skills" or giving them new tools, but they still don't actually learn from their mistakes or their experiences.