perching_aix 6 hours ago
This made me wonder: why aren't there usually teams whose job is to keep an eye on the coding patterns used across the various codebases? Much like how an SOC team monitors traffic patterns, an Operations Support team monitors health probes, KPIs, and logs, and QA keeps writing tests against new code, maybe there would be value in tracking what coding patterns develop into over the lifetime of a codebase.

Whenever I read posts like this, they're always fairly anecdotal. Sometimes there will even be posts about how large refactor x unlocked new capability y, but the rationale always reads somewhat retconned (or, again, anecdotal*). It seems to me that such continuous meta-analysis of one's own codebases could have great potential utility. I'd imagine automated code-smell checking tools can only cover so much, at least.

* I hammer on about anecdotes, but I do recognize that sentiment matters. For example, when planning work, if something merely sounds like a lot of work, that impression is already going to be impactful, even if the judgment is incorrect (since the misjudgment may never come to light).
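As a rough illustration of what I mean, here's a minimal sketch that samples a repo's history and counts occurrences of one pattern over time. The pattern regex, repo path, and sampling interval are placeholders, not anything any real team does:

    #!/usr/bin/env python3
    # Sketch: track how often one coding pattern appears across a repo's
    # git history. REPO, PATTERN, and SAMPLE_EVERY are made-up values.
    import subprocess

    REPO = "."                   # hypothetical: path to the repo under study
    PATTERN = r"TODO\(legacy\)"  # hypothetical: the pattern being tracked
    SAMPLE_EVERY = 50            # sample every 50th commit, oldest first

    def git(*args: str) -> str:
        return subprocess.run(["git", "-C", REPO, *args],
                              capture_output=True, text=True,
                              check=True).stdout

    # Oldest-to-newest commit hashes with their commit dates.
    log = git("log", "--reverse", "--format=%H %cs").splitlines()

    for line in log[::SAMPLE_EVERY]:
        sha, date = line.split()
        # `git grep -c` exits nonzero when nothing matches, so no check=True.
        out = subprocess.run(["git", "-C", REPO, "grep", "-cE", PATTERN, sha],
                             capture_output=True, text=True).stdout
        # Each output line is "<sha>:<path>:<count>"; sum per-file counts.
        hits = sum(int(l.rsplit(":", 1)[1]) for l in out.splitlines())
        print(f"{date} {sha[:8]} {hits:5d} occurrences of /{PATTERN}/")

Run it from inside a checkout and you get a time series you can eyeball or plot; the point is that even a trivially cheap script gives you trend data instead of anecdotes.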
vlovich123 5 hours ago
There are. All the big tech companies have them. It’s just difficult to accomplish when you have millions of lines of code.