GuB-42 6 hours ago

> Premature optimization is the root of all evil.

There are few principles of software engineering that I hate more than this one, though SOLID is close.

It is important to understand that it comes from a 1974 paper. Computing was very different back then, and so was the idea of optimization. Back then, optimizing meant writing assembly code and counting cycles. That is still done today in very specific applications, but today, performance is mostly about architectural choices, and it has to be given consideration right from the start. In 1974, these architectural choices weren't choices; the hardware didn't let you do it differently.

Focusing on the "critical 3%" (which implies profiling) is still good advice, but it will mostly help you fix "performance bugs": an accidentally quadratic algorithm, work done in a loop that doesn't need to be, etc. But once you have dealt with those, that's when you notice you spend 90% of the time in abstractions, and it is too late to change them now, so you add caching, parallelism, etc., making your code more complicated and still slower than if you had thought about performance at the start.
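As an illustration of what such a "performance bug" looks like in practice, here is a sketch of the classic `strlen`-in-the-loop-condition pattern in C, which silently turns a linear pass into quadratic work (function names are mine, for illustration):

```c
#include <ctype.h>
#include <string.h>

/* Accidentally quadratic: strlen() rescans the whole string on
 * every loop iteration, so the total work is O(n^2). */
void upcase_slow(char *s) {
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* Linear: hoist the loop-invariant length out of the condition. */
void upcase_fast(char *s) {
    size_t n = strlen(s);
    for (size_t i = 0; i < n; i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}
```

Both functions produce identical results; a profiler finds this kind of bug easily, which is exactly the "critical 3%" case where the advice works.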

Today, late optimization is just as bad as premature optimization, if not more so.

austin-cheney 5 hours ago | parent | next [-]

The most misunderstood statement in all of programming by a wide margin.

I really encourage people to read the Donald Knuth essay that features this sentiment. Pro tip: You can skip to the very end of the article to get to this sentiment without losing context.

Here ya go: https://dl.acm.org/doi/10.1145/356635.356640

Basically, don't spend unnecessary effort increasing performance in an unmeasured way before it's necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment. I have seen people take this to some bizarre alternate insanity of their own creation, as a law to never measure anything, typically because the given developer cannot measure things.

iamflimflam1 5 hours ago | parent | next [-]

> I have seen people take this to some bizarre alternate insanity of their own creation as a law to never measure anything, typically because the given developer cannot measure things.

Similar to the "code should be self documenting - ergo: We don't write any comments, ever"

f1shy 4 hours ago | parent | next [-]

It is incredible to me how many "developers", even "10 years senior developers", have no idea how to use a debugger and/or profiler. I've even met some who asked "what is a profiler?" I hope I'm not insulting anybody, but to me it's like going to an "experienced mechanic" who doesn't know what a screwdriver is.

afpx 3 hours ago | parent | next [-]

The last time I interviewed (around 10 years ago), I was surprised when 9 of the 10 senior developers didn't know how many bits were in basic elementary types.

(Then, shortly afterward I also tried to find a new job, realized the entire industry had changed, and was fortunate enough to decide it wasn't worth the trouble.)

WalterBright 3 hours ago | parent | next [-]

> 9 of the 10 senior developers didn't know how many bits were in basic elementary types

That's likely thanks to C, which goes to great pains to not specify the size of the basic types. For example, on 64 bit architectures, "long" is 32 bits on Windows and 64 bits nearly everywhere else.

The net result of that is I never use C "long", instead using "int" and "long long".

This mess is why D has 32 bit ints and 64 bit longs, whether it's a 32 bit machine or a 64 bit machine. As a result, we haven't had porting problems with integer sizes.
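For anyone who wants to check their own platform, a small C sketch (the `long` line is where the portability trap lives; the `stdint.h` types are fixed-width by definition):

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Print how wide C's basic types actually are on this platform.
 * C guarantees only minimums: int is at least 16 bits,
 * long at least 32, long long at least 64. */
void print_type_widths(void) {
    printf("int:       %zu bits\n", sizeof(int) * CHAR_BIT);
    /* 32 bits on 64-bit Windows (LLP64), 64 on Linux/macOS (LP64). */
    printf("long:      %zu bits\n", sizeof(long) * CHAR_BIT);
    printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
    /* stdint.h puts the width in the name, sidestepping the mess. */
    printf("int32_t:   %zu bits\n", sizeof(int32_t) * CHAR_BIT);
    printf("int64_t:   %zu bits\n", sizeof(int64_t) * CHAR_BIT);
}
```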

switchbak 3 hours ago | parent | next [-]

It's substantially worse on the JVM. One's intuition from C just fails when you have to think about references vs primitives, and the overhead of those (with or without compressed OOPs).

I've met very few folks who understand the overheads involved, and how extreme the benefits can be from avoiding those.

Quarrelsome 2 hours ago | parent [-]

Conversely I've met many folks who come into managed environments and piss away time trying to wrangle the managed system into how they think it should work, instead of accepting that clever people wrote it and guidelines when followed result in acceptable outcomes.

The sort of insane stuff I've seen on the dotnet repo where people are trying to tear apart the entire type system just because they think they've cracked some secret performance code.

afpx 3 hours ago | parent | prev [-]

That's a reasonable answer. But, I meant they seemed to have little understanding or interest. I don't interview much, and I'm probably a poor interviewer. But, I guess I was expecting some discussion.

WalterBright 3 hours ago | parent [-]

I ran into some comp sci graduates in the early 80's who did not know what a "register" was.

To be fair, though, I come up short on a lot of things comp sci graduates know.

It's why Andrei Alexandrescu and I made a good team. I was the engineer, and he the scientist. The yin and the yang, so to speak.

SAI_Peregrinus 3 hours ago | parent | prev | next [-]

How many bits are in an `int` in C? What do you mean "at least 16", that's ridiculous, nobody would write a language that leaves the number of bits in basic elementary types partially specified‽

bluGill 2 hours ago | parent [-]

It is a good idea - most of the time you don't care, and on slower systems a large int is harmful since the system can't handle that much and it costs performance; move to the faster system with larger ints when you need larger ints.

ekidd 3 hours ago | parent | prev | next [-]

I mean, as a senior developer, the number of bits in an "int" is "who the hell knows, because it has changed a bunch of times during my career, and that's what stdint.h is for." And let's not even talk about machines with 32-bit "char" types, which I actually had to program for once.

If the number of bits isn't actually included right in the type name, then be very sure you know what you're doing.

The senior engineer answer to "How many bits are there in an int?" is "No, stop, put that down before you put your eye out!" Which, to be fair, is the senior engineer answer to a lot of things.

estimator7292 2 hours ago | parent | prev [-]

On the one hand, in today's world asking how many bits are in an int is exactly as answerable as "how long is a piece of string".

On the other, the right answer is 16 or 32. It's not the correct answer, strictly speaking, but it is the right one.

jandrewrogers 2 hours ago | parent | next [-]

An 'int' is also 64 bits on some platforms.

fragmede an hour ago | parent | prev [-]

It's the wrong question. How many bits is uint64 is a much better question, if we're at a place where that's relevant.

didgetmaster an hour ago | parent | prev [-]

I once interviewed at Microsoft. The hiring manager asked me how I would go about programming a break point if I were writing a debugger. I started to explain how I would have to swap out an instruction to put an INT 3 in the code and then replace it when the breakpoint would hit.

He stopped me and said he was just looking to see if I knew what an INT 3 was. He said few engineers he interviewed had any idea.
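The mechanism described above can be sketched against a plain byte buffer (a real debugger would patch the target process's memory via `ptrace` or `WriteProcessMemory`; `bp_set` and `bp_clear` are illustrative names, not any real API):

```c
#include <stddef.h>
#include <stdint.h>

/* 0xCC is the one-byte x86 encoding of INT 3, the breakpoint trap. */
#define INT3 0xCC

typedef struct {
    size_t  addr;   /* offset of the patched instruction */
    uint8_t saved;  /* original byte, restored on removal */
} breakpoint;

/* Install: save the original byte, overwrite it with INT 3.
 * When execution reaches addr, the CPU raises a #BP exception
 * and the debugger takes control. */
breakpoint bp_set(uint8_t *code, size_t addr) {
    breakpoint bp = { addr, code[addr] };
    code[addr] = INT3;
    return bp;
}

/* Remove: put the original instruction byte back so the
 * program can resume executing the real instruction. */
void bp_clear(uint8_t *code, breakpoint bp) {
    code[bp.addr] = bp.saved;
}
```

Because INT 3 is a single byte, it can overwrite the start of any instruction without clobbering its neighbors, which is precisely why it exists.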

alexjplant 2 hours ago | parent | prev | next [-]

> "code should be self documenting

It should be to the greatest extent possible. Strive to write literate code before writing a comment. Comments should be how and why, not what.

> - ergo: We don't write any comments, ever"

Indeed this does not logically follow. Writing fluent, idiomatic code with real names for symbols and obvious control flow beats writing brain teasers riddled with comments that are necessary because of the difficulty in parsing a 15-line statement with triply-nested closures and single-letter variable names. There's a wide middle ground where comments are leveraged, not made out of necessity.

Sharlin an hour ago | parent [-]

You misunderstood the GP - they were criticizing the way some programmers use "code should be self-documenting" as an excuse when they actually mean "I’m too lazy to write comments even when I really should". Just like "premature optimization is bad" may in fact mean something like "I never bothered to learn how to measure and reason about performance"

alexjplant an hour ago | parent | next [-]

Updated my comment to refine my rhetorical intent. Thank you for the call-out.

wombatpm an hour ago | parent | prev [-]

At a minimum they should comment their GOTOs

p0nce 4 hours ago | parent | prev | next [-]

Laziness in moral clothing.

msla 5 hours ago | parent | prev [-]

> Similar to the "code should be self documenting - ergo: We don't write any comments, ever"

My counterpoint: Code can be self-documenting, reality isn't. You can have a perfectly clear method that does something nobody will ever understand unless you have plenty of documentation about why that specific thing needs to be done, and why it can't be simpler. Like having special-casing for DST in Arizona, which no other state seems to need:

https://en.wikipedia.org/wiki/Time_in_the_United_States

pc86 3 hours ago | parent | next [-]

This isn't a counterpoint, it's just additional (and barely relevant) information.

msla 3 hours ago | parent [-]

It's a counterpoint to the maxim, not the post I'm replying to.

switchbak 3 hours ago | parent | prev [-]

Documenting it in a way that ensures it satisfies the example case would be preferred. You know, like with a test.

rkaregaran 4 hours ago | parent | prev | next [-]

(this is the correct answer, parent needs to understand this better)

sandeepkd 3 hours ago | parent | prev | next [-]

This is a crucial detail that almost everyone misses when skimming the surface of the topic. The implication is that this statement/law is referenced more often to shut down architecture designs/discussions.

dimitrios1 3 hours ago | parent | prev | next [-]

Even more so. I like the Rob Pike restatement of this principle; it makes it crystal clear:

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is."

What's more, in my personal experience, I've seen a few speed hacks cause incorrect behavior on more than one occasion.

Sammi 5 hours ago | parent | prev | next [-]

In particular I've seen way too many people use this term as an excuse to write obviously poor performing code. That's not what Knuth said. He never said it's ok to write obviously bad code.

I'm still salty about the time a colleague suggested adding a 500 kb general-purpose JS library to a webapp that was already taking 12 seconds on initial load, in order to fix a tiny corner case, when we could have written our own micro-utility in 20 lines. I had to spend so much time advocating to management for my choice to spend time writing that utility myself, because that kind of garbage opinion is way too acceptable in our industry today. The insufferable bastard kept saying I had to do measurements to make sure I wasn't prematurely optimizing. Adding 500 kb of JS when you need 1 kb of it is obviously a horrible idea, especially when you're already way over the performance budget. Asshat. I'm still salty he got so much airtime for that shitty opinion and that I had to spend so much energy defending myself.

jcgrillo 4 hours ago | parent | next [-]

Reminds me of a codebase that was littered with SQL injection opportunities because doing it right would have been "premature optimization" since it was "just" a research spike and not customer facing. Guess what happened when it got promoted to a customer facing product?

Shorel 2 hours ago | parent | next [-]

Now that's a stupid argument. I'm with you. Removing SQL injection has little if anything to do with performance, so it is not an optimization. I guess we will get more of this with the vibe coding craze.

fragmede an hour ago | parent [-]

We'll see. It's easy enough to ask Claude to red team and attack the system given the codebase and see what holes it finds to patch up. It's good enough now to find blatantly obvious shit like an SQL injection.

Quarrelsome 2 hours ago | parent | prev [-]

tbf that's not their fault, as long as they were open about the flaws. Business should not have promoted it to a customer facing product. That's just org failure.

jcgrillo 2 hours ago | parent [-]

I disagree. If you merge code to main you immediately lose all control over how it will be used later. You shouldn't ever ship something you're not comfortable with, or unprepared to stake your professional reputation on. To do so is profoundly unethical. In a functioning engineering culture individuals who behave that way would be personally legally liable for that decision. Real professions--doctors, engineers, etc.--have a coherent concept of malpractice, and the legal teeth to back it up. We need that for software too, if we're actually engineers.

Quarrelsome an hour ago | parent [-]

Profoundly unethical? Ok so wtf is this formatting in your comment. You DARE comment, online where people can see, where you start a new sentence with two dashes "--". What are you thinking? Where's the professionalism? Imagine someone took that sentence and put it on the front of the biggest magazine in the world. You'd LOOK LIKE A FOOL.

OR, perhaps it's the case that different contexts have different levels of effort. Running a spike can be an important way to promote new ideas across an org and show how things can be done differently. It can be a political tool that has positive impact, because there's a lot more to a business than simply writing good code. However, if your org is horrible then it can backfire in the way that was described. Maybe businesses are too aggressive and trample on dev, maybe dev doesn't have a spine, maybe nobody spoke up about what a fucking disaster it was going to be, maybe they did and nobody listened. Those are all organisational issues akin to an exploitable code base, but embedded into the org instead of the code.

These issues are not the direct fault of the spike; they're the fault of the org, just like the idiot who took your poorly formatted comment and put it on the front page of Vogue.

jcgrillo an hour ago | parent [-]

Grammatical errors, formatting mistakes, or bad writing in general aren't something the magazine publisher can be held liable for, it may be embarrassing but it's not illegal or unethical. Publishing outright falsehoods about someone is though--we call that defamation. Knowingly shipping a broken, insecure system isn't all that different. Of course the people who came along later and chucked it into prod without actually reviewing it were also negligent, but that doesn't render the first guy blameless.

Quarrelsome 43 minutes ago | parent [-]

If it was only supposed to be a spike then it does render the first guy somewhat blameless. Especially if the org was made aware of the issues, which I imagine they were if someone had raised the issue of the exploits in the code base.

I mean, I could take a toddler's tricycle and try to take it onto the motorway. Can we blame the toy company for that? It has wheels, it goes forward, it's basically a car, right? In the same way, a spike is basically something we can ship right now.

f1shy 4 hours ago | parent | prev [-]

That is the gist of the left-pad story, isn't it?

giancarlostoro 2 hours ago | parent | prev [-]

> except for those 10% of situations where you know in advance that crucial performance is absolutely necessary

Yeah like, NOT indexing any fields in a database, that'll become a problem very quickly. ;)

tombert 2 hours ago | parent | prev | next [-]

The biggest issue I have with premature optimization is stuff that really doesn't matter.

For example, in Java I usually use ConcurrentHashMap, even in contexts where a regular HashMap might be OK. My reasoning is simple: I might want to use it in a multithreaded context eventually, and the performance differences really aren't that much for most things; uncontended locks in Java are nearly free.

I've gotten pull requests rejected because regular HashMaps are "faster", and then the comments on the PR end up with people bickering about when to use which.

In that case, does it actually matter? Even if HashMap is technically "faster", it's not much faster, and maybe instead we should focus on the thing that's likely to actually make a noticeable difference like the forty extra separate blocking calls to PostgreSQL or web requests?

So that's the premature optimization that I think is evil. I think it's perfectly fine at the algorithm level to optimize early.

toast0 3 hours ago | parent | prev | next [-]

I only use mature optimizations, so I'm good.

Thinking about the overall design, how it's likely to be used, and what the performance and other requirements are before aggregating the frameworks of the day is mature optimization.

Then you build things in a reasonable way and see if you need to do more for performance. It's fun to do more, but most of the time, building things with a thought about performance gets you where you need to be.

The "I don't need to think about performance at all" camp has a real hard time making things better later. For most things, cycle counting upfront isn't useful, but thinking about how data will be accessed and such can easily make a huge difference. Things like bulk load versus one-at-a-time load are enormous if you're loading lots of things, but if you'll never load lots of things, either works.

Thinking about concurrency, parallelism, and distributed systems stuff before you build is also pretty mature. It's hard to change some of that after you've started.

Shorel 2 hours ago | parent [-]

That first sentence.

I want it in a t-shirt. On billboards. Everywhere :)

tananaev 5 hours ago | parent | prev | next [-]

With modern tools it should be pretty easy to build scalable solutions. I take premature optimization as going out of your way to optimize something that's already reasonable. Not that you should write really bad code as a starting point.

Sammi 5 hours ago | parent | next [-]

The problem is that this term gets misused to say the opposite of what it was intended for.

It's particularly the kind of people who like to say "hur hur don't prematurely optimize" that don't bother writing decent software to begin with and use the term as an excuse to write poor performing code.

Instead of optimizing their code, these people end up making excuses so they can pessimize it instead.

Shorel 2 hours ago | parent | next [-]

To me that's the people who write desktop software in Electron. Hate that.

pydry 3 hours ago | parent | prev [-]

In my career I've seen about 1000 instances of somebody trying to optimize something prematurely.

Usually those people also have a good old whinge about the premature optimization quote being wrong or misinterpreted and general attitudes to software efficiency.

Not once have I ever seen somebody try to derail a process of "ascertain speed is an issue that should be tackled" -> "profile" -> fix the hot path.

cstoner 4 hours ago | parent | prev [-]

Yeah, I interpret "premature optimization" as taking a request that takes 500ms and focusing on saving a couple ms by refactoring logic to avoid a SQL JOIN or something.

Your users are not going to notice. Sure, it's faster but it's not focused on the problem.

NikolaosC 4 hours ago | parent | prev | next [-]

Spent 6 months last year ripping out an abstraction layer that made every request 40ms slower. We profiled, found the hot path, and couldn't fix it without a rewrite. The "optimize later" school never tells you that "later" sometimes means "never".

tombert 2 hours ago | parent [-]

I'd say it usually means "never".

I also find it a bit annoying is that most people just make shit up about stuff that is "faster". Instead of measuring and/or looking at the compiled bytecode/assembly, people just repeat tribal knowledge about stuff that is "faster" with no justification. I find that this is common amongst senior-level people at BigCos especially.

When I was working in .NET land, someone kept telling me that "switch statements are faster" than their equivalent "if" statements, so I wrote a very straightforward test comparing both, and used dotpeek to show that they compile to the exact same thing. The person still insisted that switch is "faster", I guess because he had a professor tell him this one time (probably with more appropriate context) and took whatever the professor said as gospel.

bluGill 2 hours ago | parent [-]

I've seen a lot of requests to optimize code where we can measure that the "optimal" version saves a few nanoseconds. I just deleted some "optimal" code that took a lot of mutexes and so was only faster when there is no contention, but in real-world multi-writer situations the easy code wins. (Shared memory vs local socket for IPC.)

tombert an hour ago | parent [-]

I don't write a lot of super low level stuff, so maybe things are different there, but at least in the normal user space level I've found it pretty rare that explicit mutexes ever beat the performance of an (in my opinion) easier design using queues and/or something like ZeroMQ.

Generally I've found that the penalty, even without contention, is pretty minimal, and it almost always wins under contention.

bluGill an hour ago | parent [-]

To be fair, the code in question was written many years ago - before anyone I knew had heard of ZeroMQ (it existed but wasn't well known). It would be possible to optimize the mutexes out, I'm sure - but the big problem wasn't speed, it was the complexity of code that I now maintain. Since local sockets are easier and faster, I'm safe deleting the more complex code that should never have been written.

davedx 4 hours ago | parent | prev | next [-]

"today, performance is mostly about architectural choices, and it has to be given consideration right from the start"

This doesn't make sense. Why is performance (via architectural choices) more important today than then?

You can build a snappy app today by using boring technology and following some sensible best practices. You have to work pretty hard to need PREMATURE OPTIMIZATION on a project -- note the premature there

jandrewrogers 3 hours ago | parent | next [-]

The big thing that changed is that almost all software performance today is bandwidth-bound at the limit. Not computation-bound. This transition was first noticed in supercomputing around 25 years ago.

Optimization of bandwidth-bound code is almost purely architectural in nature. Most of our software best practices date from a time when everything was computation-bound such that architecture could be ignored with few bad effects.
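One way to see the architectural point: the two loops below do identical arithmetic on identical data, yet on a matrix much larger than cache the row-order version is typically several times faster, purely because of memory access order. A sketch, not a benchmark:

```c
#include <stddef.h>

/* Sequential traversal of a row-major matrix: the hardware
 * prefetcher streams whole cache lines, so throughput
 * approaches memory bandwidth. */
long sum_rows(const int *m, size_t rows, size_t cols) {
    long s = 0;
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            s += m[r * cols + c];
    return s;
}

/* Column-order traversal of the same row-major data: each access
 * jumps a full row ahead, touching a new cache line every time,
 * so the identical arithmetic becomes memory-bound. */
long sum_cols(const int *m, size_t rows, size_t cols) {
    long s = 0;
    for (size_t c = 0; c < cols; c++)
        for (size_t r = 0; r < rows; r++)
            s += m[r * cols + c];
    return s;
}
```

No profiler will flag either loop as "wasting cycles"; the difference is entirely in how the architecture moves data, which is the kind of decision that is cheap upfront and expensive to retrofit.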

f1shy 4 hours ago | parent | prev | next [-]

I agree. But I have to say, when defining the architecture, there are things known in advance that will be terrible bottlenecks later. They should be avoided. Just like the earlier comment about defining proper indices in a database. Optimization means making something that is already "good" and correct better. There is no excuse for making half-assed, bug-ridden, shitty software under the excuse "optimization is for later"; that is technical debt generation: two very different things.

Nevermark 3 hours ago | parent | prev | next [-]

> You can build a snappy app today by using boring technology and following some sensible best practices.

If you are building something with similar practical constraints for the Nth time this is definitely true.

You are inheriting “architecture” from your own memory and/or tools/dependencies that are already well fit to the problem area. The architectural performance/model problem already got a lot of thought.

Lots of problems are like that.

But if you are solving a problem where existing tools do a poor job, you better be thinking about performance with any new architecture.

kqr 4 hours ago | parent | prev | next [-]

In the 1970s computer systems spanned fewer orders of magnitude. Operations generally took somewhere between maybe 1 and 10^8 CPU cycles. Today, the range is closer to 10^-1 to 10^13.

paulddraper 3 hours ago | parent | prev [-]

> Why is performance (via architectural choices) more important today than then?

There were fewer available layers of abstraction.

Whether you wrote in ASM, C, or Pascal, there was a lot less variance than writing in Rust, JavaScript, Python.

ghosty141 5 hours ago | parent | prev | next [-]

What's the problem with SOLID? It's very very rare that I see a case where going against SOLID leads to better design.

GuB-42 4 hours ago | parent | next [-]

SOLID tends to encourage premature abstraction, which is a root of evil that is more relevant today than premature optimization.

SOLID isn't bad, but like the idea of premature optimization, it can easily lead you in the wrong direction. You know how people make fun of enterprise code all the time? That's what you get when you take SOLID too far.

In practice, it tends to lead to a proliferation of interfaces, which is not only bad for performance but also results in code that is hard to follow. When you see a call through an interface, you don't know what code will run unless you know how the object was initialized.

sroussey 4 hours ago | parent | prev | next [-]

In a way, SOLID is premature optimization. You are optimizing abstractions before knowing how the code is used in practice. Lots of code will be written and never changed again, but a minority will see changes quite a bit. Concentrate there. Like you don't need to optimize things that aren't in hot code (usually, omg experience will tell you that all rules have exceptions, including the exceptions).

ghosty141 3 hours ago | parent [-]

> Lots of code will be written and never changed again, but a minority will see changes quite a bit. Concentrate there

I think the most important principle above all is knowing when not to stick to them.

For example if I know a piece of code is just some "dead end" in the application that almost nothing depends on then there is little point optimizing it (in an architectural and performance sense). But if I'm writing a core part of an application that will have lots of ties to the rest, it totally does make sense keeping an eye on SOLID for example.

I think the real error is taking these at face value and not factoring in the rest of your problem domain. It's way too simple to think SOLID = good, else bad.

dzjkb 5 hours ago | parent | prev | next [-]

here's a nice critique of SOLID principles:

https://www.tedinski.com/2019/04/02/solid-critique.html

newsoftheday 4 hours ago | parent | next [-]

They start by indicating that people don't understand "A module should have only one reason to change." Reading more of the article, it's clear the author doesn't understand much about software engineering and sounds more like a researcher who just graduated from putting together 2+2.

segmondy 4 hours ago | parent [-]

The great thing about the net is also its biggest problem. Anyone can write a blog, and if it looks nice and sounds polished, they can sway a large group. I roll my eyes so hard at folks who reject SOLID principles and design patterns.

tracker1 4 hours ago | parent | next [-]

Something, something, wrong abstractions are worse than no abstractions.

SOLID approaches aren't free... Beyond that, keeping code closer together by task/area is another approach. I'm not a fan of premature abstraction, and I definitely prefer that code relating to a feature live closer together, as opposed to being organized by type of class or functional domain space.

For that matter, I think it's perfectly fine for a web endpoint handler to make and return a simple database query directly without 8 layers of interfaces/classes in between.

Beyond that, there are other approaches to software development that go beyond typical OOP practices. Something, something, everything looks like a nail.

The issues that I have with SOLID/CLEAN/ONION is that they tend to lead to inscrutable code bases that take an exponentially long amount of time for anyone to come close to learning and understanding... Let alone the decades of cruft and dead code paths that nobody bothered to clean up along the way.

The longest-lived applications I've ever experienced tend to be either the simplest, easiest-to-replace systems or the most byzantine, complex monstrosities... and I know which I'd rather work on and support. After three decades I tend to prioritize KISS/YAGNI over anything else... not that there aren't times when certain patterns are needed, so much as there are more times when they aren't.

I've worked on one, singular, application in three decades where the abstractions that tend to proliferate in SOLID/CLEAN/ONION actually made sense... it was a commercial application deployed to various govt agencies that had to support MS-SQL, Oracle, and DB2 backends. Every, other, time I've seen an excess of database and interface abstractions, the problem would have been better solved in other, less performance-impacting ways. If you only have a single concrete implementation of an interface, you probably don't need that interface... you can inherit from or override the class directly for testing.

And don't get me started on keeping unit tests in a completely separate project... .Net actually makes it painful to put your tests with your implementation code. It's one of my few actual critiques about the framework itself, not just how it's used/abused.

f1shy 3 hours ago | parent | prev [-]

Way too often, I have seen advocates of SOLID and patterns make religious arguments: I don't like that. That being said, I think there is nothing bad in SOLID, as long as it is treated as principles and not religious dogma. About patterns, I cannot say as much positive. They are not bad per se, but I've seen them do a lot of harm. In the Gang of Four book, in the preface I think, it says something like "this list is neither exhaustive, nor complete, and often inadequate". The problem is that every single person I know who was exposed to the book tries to hammer every problem into a pattern (in the sense of [1]). They also insist on using the name everywhere, like "facade_blabla". IMHO the pattern may be Façade, but putting that through the names of all classes and methods is not good design.

[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

ghosty141 3 hours ago | parent [-]

> That being said, I think there is nothing bad in SOLID, as long as treated as principles and not religious dogmas

This should be the header of the website. I think the core of all these arguments is people thinking they ARE laws that must be followed no matter what. And in that case, yeah that won't work.

gavmor 3 hours ago | parent | prev | next [-]

This doesn't seem to be a critique of the principles so much as a critique of their phrasing.

Even his "critique" of Demeter is, essentially, that it focuses on an inconsequential aspect of dysfunction (method chaining), which I consider to be just one smell that leads to the larger principle, which (and we apparently both agree on this) is interface design.

4 hours ago | parent | prev [-]
[deleted]
someguyiguess 5 hours ago | parent | prev | next [-]

It only applies to the object oriented programming paradigm

mrkeen 3 hours ago | parent | next [-]

Negative.

The only part of SOLID that is perhaps OO-only is Liskov Substitution.

L is still a good idea, but without object-inheritance, there's less chance of shooting yourself in the foot.

marcosdumay 4 hours ago | parent | prev | next [-]

That's understating the problem. It mandates OOP.

If you follow SOLID, you'll write OOP only, with always present inheritance chains, factories for everything, and no clear relation between parameters and the procedures that use them.

Exoristos 4 hours ago | parent | prev [-]

This is only superficially true. Here's a fair discussion that could serve as a counterpoint: https://medium.com/@ignatovich.dm/applying-solid-principles-...

paulddraper 3 hours ago | parent | prev [-]

It causes excessive abstraction and more verbose code.

L and I are both pretty reasonable.

But S and D can easily be taken to excess.

And O seems to suggest OO-style polymorphism instead of ADTs.

ghosty141 3 hours ago | parent [-]

This is similar to my view. All these "laws" should always be used as guidance, not as actual laws. Same with O: I think it's good advice to design software so that adding features that are orthogonal to other features doesn't require modifying much code.

That's how I view it. You should design your application such that extension involves little modification of existing code, as long as modification isn't necessary from a behavioral or architectural standpoint.
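A minimal Python sketch of that reading of O, with hypothetical names invented for the example: orthogonal features arrive by registering a new handler, and the existing dispatch code never needs editing.

```python
import json

EXPORTERS = {}

def register(fmt):
    """Register an exporter for a format without touching existing code."""
    def wrap(fn):
        EXPORTERS[fmt] = fn
        return fn
    return wrap

@register("csv")
def to_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

@register("json")  # a later, orthogonal feature: no edits to existing code
def to_json(rows):
    return json.dumps(rows)

def export(rows, fmt):
    # Dispatch is closed for modification, open for extension.
    return EXPORTERS[fmt](rows)
```

This is guidance rather than law, as the comment says: a registry is overkill when you only ever have one format.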

SAI_Peregrinus an hour ago | parent [-]

Of course you can do that & still make a mess. E.g. by deciding that all your behavior will be "configurable" by coding inside strings in a YAML file, and what YAML files you load at runtime determine which features you get. Sure, they might conflict, but that's the fault of whoever wrote that "configuration" YAML. (Replace YAML with XML for a previous era version of this bad idea).

jnpnj 5 hours ago | parent | prev | next [-]

It's a very interesting topic. Even when designing a system, how to modularize, it's healthy to wait until the whole context is in sight. It's a bit of a black art, too early or too late you pay some price.

xnx 5 hours ago | parent | prev | next [-]

> making your code more complicated and still slower than if you thought about performance at the start.

Not if your optimization for performance is some Rube Goldberg assemblage of microservices and a laundry list of AWS services.

causal 5 hours ago | parent | next [-]

Exactly. Today "premature optimization" almost always means unnecessary infra or abstractions or some other complexity- not DS&A choices.

pcblues 4 hours ago | parent | prev [-]

I still don't understand microservices for anything short of a NAG of four level architecture.

ozim 4 hours ago | parent | prev | next [-]

Sounds like we agree.

A bunch of stuff is done for us. Using Postgres and having indexes correct is not premature optimization, just basic stuff to be covered.

Having a double loop is quadratic, though. Parallelism is super fun because it actually might make everything slower instead of faster.
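A minimal sketch of the accidentally-quadratic "performance bug" class the thread keeps coming back to, with illustrative names: the same deduplication written with a hidden inner loop, then with the linear fix.

```python
def dedupe_quadratic(items):
    seen = []  # list membership is O(n) per check
    out = []
    for x in items:
        if x not in seen:  # hidden inner loop: O(n^2) overall
            seen.append(x)
            out.append(x)
    return out

def dedupe_linear(items):
    seen = set()  # set membership is O(1) on average
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both return the same result; only the data structure choice changes the complexity, which is exactly the kind of fix profiling the "critical 3%" tends to surface.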

vanguardanon 2 hours ago | parent | prev | next [-]

Is this less relevant today, when, if you do go down the wrong architecture, it is much cheaper to rewrite everything in something better?

cogman10 5 hours ago | parent | prev | next [-]

Completely agreed here [1].

And as I point out, what Knuth was talking about in terms of optimization was things like loop unrolling and function inlining, not picking the right data structure or algorithm for the problem.

I mean, FFS, his entire book was about exploring and picking the right data structures and algorithms for problems.

[1] https://news.ycombinator.com/item?id=47849194

kgwxd an hour ago | parent | prev | next [-]

All aphorisms are awful.

dec0dedab0de 3 hours ago | parent | prev | next [-]

I like it as a way to remind myself to not get caught up in the minutiae.

throwaway5752 5 hours ago | parent | prev | next [-]

"Premature optimization is the root of all evil"

Decades in, this is the worst of all of them. Misused by laziness or malice, and nowhere near specific enough.

The graveyard of companies boxed in by past poor decisions is sprawling. And the people that made those early poor decisions bounce around the field talking about their "successful track record" of globally poor and locally good architectural decisions that others have had to clean up.

It touches on a real problem, though, but it should be stricken from the record and replaced with a much better principle. "Design to the problem you have today and the problems you'll have in 6 months if you succeed. Don't design to the problems you'll have next year if it means you won't succeed in 6 months" doesn't roll off the tongue.

tracker1 4 hours ago | parent [-]

On your last bit, I definitely agree... personally I've leaned more and more into KISS above all else... simple things that are easy to replace are easily replaced only when you need to. Similarly, I also tend to push for initial implementations of many/most things in a scripted language first, mostly for flexibility/simplicity to get a process "right" before worrying about a lot of other things.

One thing that came out of the no-sql/new-sql trends in the past decade and a half is that joins are the enemy of performance at scale. It really helps to know and compromise on db normalization in ways such as leaning on JSON/XML for non-critical column data as opposed to 1:1/children/joins a lot of the time. For that matter, pure performance and vertical scale have shifted a lot of options back from the brink of micro service death by a million paper cuts processes.
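A hedged sketch of the denormalization described above, with an invented schema: non-critical attribute data lives in a JSON text column instead of a child table behind a join, using Python's stdlib sqlite3.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
# Instead of a products + product_attrs join table, the non-critical
# attributes ride along in a JSON text column.
db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, attrs TEXT)")
db.execute(
    "INSERT INTO products VALUES (1, 'widget', ?)",
    (json.dumps({"color": "red", "tags": ["sale"]}),),
)

# One row fetch, no join; the application decodes the JSON it cares about.
name, attrs_json = db.execute(
    "SELECT name, attrs FROM products WHERE id = 1"
).fetchone()
attrs = json.loads(attrs_json)
```

The trade-off is the one the comment implies: you give up relational integrity and queryability on those attributes in exchange for fewer joins on the hot path.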

dorkitude 3 hours ago | parent | prev | next [-]

Found the overbuilder!

dartharva 2 hours ago | parent | prev | next [-]

Strange, this is actually one of the most important things I learnt the hard way as an analyst who had the misfortune of being forced to do local-level data engineering, after being tasked with doing extensive BI on heavy data volumes (I don't have an engineering education) without cloud ETL. Had someone told me this simple statement ahead of time, it would have saved me a LOT of pain and effort.

m3kw9 2 hours ago | parent | prev | next [-]

If you've done enough premature optimization, you will know it's usually wasted, detrimental, and a bad trade.

tonymet 5 hours ago | parent | prev | next [-]

The wheel is a premature optimization to someone who never figured out how to build one.

jollyllama 5 hours ago | parent | prev | next [-]

You ARE going to need it.

enraged_camel 5 hours ago | parent | prev | next [-]

>> Today, late optimization is just as bad as premature optimization, if not more so.

You are right about the origin of and the circumstances surrounding the quote, but I disagree with the conclusion you've drawn.

I've seen engineers waste days, even weeks, reaching for microservices before product-market fit is even found, adding caching layers without measuring and validating bottlenecks, adding sharding pre-emptively, adding materialized views when regular tables suffice, paying for edge-rendering for a dashboard used almost entirely by users in a single state, standing up Kubernetes for an internal application used by just two departments, or building custom in-house rate limiters and job queues when Sidekiq or similar solutions would cover the next two years.

One company I consulted for designed and optimized for an order of magnitude more users than were in the total addressable market for their industry! Of that, they ultimately managed to hit only 3.5%.

All of this was driven by imagined scale rather than real measurements. And every one of those choices carried a long tail: cache invalidation bugs, distributed transactions, deployment orchestration, hydration mismatches, dependency array footguns, and a codebase that became permanently harder to change. Meanwhile the actual bottlenecks were things like N+1 queries or missing indexes that nobody looked at because attention went elsewhere.

cstoner 4 hours ago | parent | next [-]

Thank you for posting this. I disagreed with OP but couldn't _quite_ find the words to describe why. Your post covers what I was trying to say.

I was quite literally asked to implement an in-memory cache recently, to avoid a "full table scan" caused by a join to a small DB table. Our architect saw "full table scans" in our database stats and assumed that must mean a performance problem. I feel like he thought he was making a data-driven profiling decision, but he seemed to misunderstand that for a small table a full scan is faster than an index lookup. That whole table is already in RAM in the DB anyway.

So now we have a complex Redis PubSub cache invalidation strategy to save maybe a ms or two.

I would believe that we have performance problems in this chunk of code, and it's possible an in-memory cache may "fix" the issue, but if it does, then the root of the problem was more likely an N+1 query (that an in-memory cache bandaids over). But by focusing on this cache, suddenly we have a much more complex chunk of code that needs to be maintained than if we had just tracked down the N+1 query and fixed _that_
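As a sketch of the N+1 pattern being described (tables and numbers invented for the example), using Python's stdlib sqlite3: one query per parent row versus a single JOIN that does the same work in the database.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'a'), (2, 'b');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.0);
""")

def totals_n_plus_one():
    # The anti-pattern: 1 query for users, then N queries for orders.
    result = {}
    for uid, name in db.execute("SELECT id, name FROM users"):
        (total,) = db.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (uid,),
        ).fetchone()
        result[name] = total
    return result

def totals_joined():
    # One round trip: the JOIN and aggregation happen in the database.
    return dict(db.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """))
```

A cache in front of the first version hides the round trips without removing them; fixing the query shape removes them.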

Esophagus4 3 hours ago | parent | prev [-]

> All of this was driven by imagined scale rather than real measurements

Yes. When I was a young engineer, I was asked to design something for a scale we didn’t even get close to achieving. Eventual consistency this, event driven conflict resolution that… The service never even went live because by the time we designed it, everyone realized it was a waste of time.

I learned it makes no sense to waste time designing for zillions of users that might never come. It’s more important to have an architecture that can evolve as needs change rather than one that can see years into the future (that may never come).

tehjoker 5 hours ago | parent | prev | next [-]

I would venture that this statement is not true for library authors. Performance is a large factor in competitive advantage, especially in domains like image analysis or processing large corpora of text.

In these domains, algorithm selection and fine-tuning hot spots pay off significantly. You must hit minimum speeds to make your application viable.

EGreg 5 hours ago | parent | prev | next [-]

I would add this: https://magarshak.com/blog/perfection-is-the-enemy-of-the-go...

CyberDildonics 3 hours ago | parent | prev | next [-]

Unfortunately people do keep repeating it to excuse the fact that they don't know how to optimize in the first place.

Anyone who has done optimization even a little knows that it isn't very difficult, but you do need to plan and architect for it so you don't have to restructure your whole program to get it to run well.

Mostly it's just rationalization, people don't know the skill so they pretend it's not worth doing and their users suffer for it.

If software and websites were even reasonably optimized, people could just use a computer as powerful as a Raspberry Pi 5 (except for high-res video) for most of what they do day to day.

snarfy 4 hours ago | parent | prev [-]

I definitely hate SOLID more.