ryguz 2 hours ago
The interesting thing about Rule 1 is that it makes Rules 3-5 follow almost mechanically. If you genuinely accept that you cannot predict where the bottleneck is, then writing straightforward code and measuring becomes the only rational strategy. The problem is that most people treat these rules as independent guidelines rather than as consequences of a single premise.

In practice, what I see fail most often is not premature optimization but premature abstraction. People build elaborate indirection layers for flexibility they never need, and those layers impose real costs on every future reader of the code. The irony is that abstraction is supposed to manage complexity, but applied prematurely it just creates a different kind.
silisili an hour ago
> In practice what I see fail most often is not premature optimization but premature abstraction

This matches my experience as well. Someone here commented once that abstractions should be emergent, not speculative, and I loved that line so much I use it with my team all the time now when I see the craziness starting.
eru 2 hours ago
> In practice what I see fail most often is not premature optimization but premature abstraction.

Compare and contrast https://people.mpi-sws.org/~dreyer/tor/papers/wadler.pdf
fl0ki an hour ago
I only agree if you have a bounded dataset size that you know will never grow. If it can grow in the future (and if you're not sure, you should assume it can), many data structures and algorithms will not only scale poorly along the way, they will eventually come to dominate as the bottleneck. By the time the system no longer meets requirements and you get a trouble ticket, you're under time pressure to develop, qualify, and deploy a new solution, and you're much more likely to introduce regressions doing that under time pressure.

If you've been monitoring properly, you buy yourself time before it becomes a problem, but in my experience most developers who don't anticipate load scaling also don't monitor properly.

I've seen a "senior software engineer with 20 years of industry experience" put code into production that needed 30 minute timeouts for an HTTP response only 2 years after initial deployment. That is not a typo: 30 minutes. I had to take over and rewrite their "simple" code to stop the VP-level escalations our org received because of this engineering philosophy.
mkehrt an hour ago
This comment is fascinating to me, as it indicates an entirely different mindset from mine. I'm much more interested in code readability and maintainability (and simplicity and elegance) than in performance, unless performance is necessary. So I would start by saying everything flows from rule 4, or maybe 5. Rule 1 is a consequence of rule 4 for me.
munk-a an hour ago
As someone who believes strongly in type-based programming and the importance of good data structure choice, I'm not seeing how Rule 5 follows from Rule 1. I think it's important to reinforce how impactful good data structure choice is compared to trying to solve everything through procedural logic, since a well-structured coordination of data interactions can greatly simplify the amount of standalone logic.
rob 2 hours ago
Really need that [flag bot] button added to HN.
tracker1 2 hours ago
You tend to see it a lot in "Enterprise" software (.NET and Java shops in particular). A lot of Enterprise Software Architects will reach for their favored abstractions out of habit rather than because they fit, and custom solution providers will build a bunch of the same out of habit. This is something I tend to consider far, far worse than "AI slop" in practice.

As a specific example, I always hated Microsoft Enterprise Library's Data Access Application Block (DAAB). I've literally only ever seen one product that supported multiple database backends and actually necessitated that level of abstraction... but I've seen that library in use well over a dozen times.

IMO, abstractions should generally serve to make the rest of the codebase more reasonable... abstractions that hide complexity are useful; abstractions that add complexity much less so.