colechristensen 18 hours ago
Yes, but not like what you think. Programmers are going to look more like product managers with extra technical context. AI is also great at looking for its own quality problems.

Yesterday, on an entirely LLM-generated codebase:

Prompt:

> SEARCH FOR ANTIPATTERNS

Response:

> Found 17 antipatterns across the codebase:

What followed was a detailed list. About a third of them I thought were pretty important, a third were arguable either way, and the rest were either not important or effectively "this project isn't fully functional."

As an engineer, I didn't have to find code errors or fix code errors; I had to pick which errors were important and then give instructions to have them fixed.
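(A hypothetical illustration, not one of the actual 17 findings, and all names are made up: the sort of thing such a scan tends to flag as important, here in Swift — a fire-and-forget Task whose error is silently dropped.)

    enum SyncError: Error { case offline }

    func sync() async throws {
        throw SyncError.offline
    }

    func onSaveTapped() {
        // Antipattern: the thrown error is captured in a task result that
        // nobody reads, so a failed sync never surfaces to the user or the logs.
        Task {
            try await sync()
        }
    }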
mjr00 18 hours ago
> Programmers are going to look more like product managers with extra technical context.

The limit of a product manager, as the "extra technical context" approaches infinity, is a programmer. Because the best, most specific way to specify that extra technical context is just plain old code.
manmal 16 hours ago
Yeah, don't rely on the LLM to find all the issues. Complex code like Swift concurrency tooling is just riddled with them. I usually need to push line coverage to 100% and then let it loop on hanging tests until everything _seems_ to work. (It's been said that Swift concurrency is too hard for humans as well, though.)
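A minimal sketch of why those tests hang rather than fail (a hypothetical example, not code from any real project): a checked continuation that is never resumed on one path, so the awaiting task waits forever.

    func cachedValue(for key: String, in cache: [String: Int]) async -> Int {
        await withCheckedContinuation { continuation in
            if let value = cache[key] {
                continuation.resume(returning: value)
            }
            // Bug: when the key is missing, nothing resumes the continuation,
            // and any test exercising this branch hangs until the runner times out.
        }
    }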