conartist6 8 hours ago

Remember that these "laws" contain so many internal contradictions that, when they're all listed out like this, you can just pick one that justifies whatever you want to justify. The hard part is knowing which law to break when, and why.

jimmypk 7 hours ago | parent | next [-]

Postel's Law vs. Hyrum's Law is the canonical example. Postel says be liberal in what you accept — but Hyrum's Law says every observable behavior of your API will eventually be depended on by someone. So if you're lenient about accepting malformed input and silently correcting it, you create a user base that depends on that lenient behavior. Tightening it later is a breaking change even if it was never documented. Being liberal is how you get the Hyrum surface area.

The resolution I've landed on: be strict in what you accept at boundaries you control (internal APIs, config parsing) and liberal only at external boundaries where you can't enforce client upgrades. But that heuristic requires knowing which category you're in, which is often the hard part.
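
A toy sketch of the difference (all names here are invented for illustration): the strict parser rejects anything outside the documented surface, while the lenient one quietly drops unknowns, which is exactly the behavior someone will eventually depend on.

```python
# Illustrative only: ALLOWED_KEYS and both parser names are made up.
ALLOWED_KEYS = {"host", "port", "timeout"}

def parse_config_strict(raw: dict) -> dict:
    unknown = set(raw) - ALLOWED_KEYS
    if unknown:
        # Failing loudly keeps the accepted surface small: nobody can
        # come to depend on typo'd or extra keys being tolerated.
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return dict(raw)

def parse_config_lenient(raw: dict) -> dict:
    # Silently ignoring extras "works", but per Hyrum's Law someone
    # will eventually rely on it, making strictness a breaking change.
    return {k: v for k, v in raw.items() if k in ALLOWED_KEYS}
```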

physicles 3 hours ago | parent | next [-]

I’m one of those who has thrown out Postel’s law entirely. Maybe the issue is that it never defines “strict”, “liberal”, and “accept”. But at least for public APIs, it never made sense to me.

If I accidentally accept bad input and later want to fix that, I could break long-time API users and cause a lot of human suffering. If my input parsing is too strict, someone who wants more liberal parsing will complain, and I can choose to add it before that interaction becomes load-bearing (or update my docs and convince them they are wrong).

The stark asymmetry says it all.

Of course, old clients that can’t be upgraded have veto power over any changes that could break them. But that’s just backwards compatibility, not Postel’s Law.

Source: I’m on a team that maintains a public API used by thousands of people for nearly 10 years. Small potatoes in internet land but big enough that if you cause your users pain, you feel it.

zffr 2 hours ago | parent | next [-]

One example where I think the law does make sense is for website URL paths.

Over time the paths may change, and this can break existing links. IMO websites should continue to accept old paths and redirect to the new equivalents. Eventually the redirects can be removed when their usage drops low enough.
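
As a rough sketch (the paths and the helper are invented for the example), the redirect-table approach might look like:

```python
# Hypothetical mapping of retired paths to their new equivalents.
REDIRECTS = {
    "/blog/old-post": "/articles/old-post",
    "/docs/v1/setup": "/docs/setup",
}

def resolve(path: str):
    """Return a (status, location) pair for a request path."""
    if path in REDIRECTS:
        # 301 tells crawlers and browsers to update their links; once
        # traffic to the old path drops, the entry can be removed.
        return 301, REDIRECTS[path]
    return 200, path
```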

zaphar 3 hours ago | parent | prev [-]

I probably use a different interpretation of Postel's law. I try not "break" for anything I might receive, where break means "crash, silently corrupt data, so on". But that just means that I return an error to the sender usually. Is this what Postel meant? I have no idea.

nothrabannosir 5 hours ago | parent | prev | next [-]

I used to see far more references to Postel’s law in the 00s and early 10s. In the last decade, that has noticeably shifted towards Hyrum’s law. I think it’s a shift in zeitgeist.

zahlman 7 hours ago | parent | prev | next [-]

I've always thought of Hyrum's Law more as a Murphy-style warning than as actionable advice.

dmoy 2 hours ago | parent [-]

Yea, Hyrum's law is like an observation of what happens when you personally try to make 200,000+ distinct commits to google3 to fix things that are broken (as Hyrum did, and mind you, this was before LLMs), and people come climbing out of the woodwork to yell at you with fistfuls of xkcd/1172.

throwaway173738 7 hours ago | parent | prev | next [-]

I look at Postel’s law more as advice on how to parse input. At some point you’re going to have to upgrade a client or a server to add a new field. If you’ve been strict, then you’ve created a big coordination problem, because the new field is a breaking change. But if you’re liberal, then your systems ignore components of the input that they don’t recognize. And that lets you avoid a fully coordinated update.
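
A minimal sketch of that liberal-reader idea, with made-up field names: decode only the fields you know and let unknown ones pass through unread, so a newer sender can add fields without a coordinated upgrade.

```python
import json
from dataclasses import dataclass

# The User type and its fields are illustrative, not from any real API.
@dataclass
class User:
    name: str
    email: str

def decode_user(payload: str) -> User:
    data = json.loads(payload)
    # Unknown keys (say, a "nickname" added by a newer sender) are
    # simply never read, so older receivers keep working unchanged.
    return User(name=data["name"], email=data["email"])
```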

astrobe_ 3 hours ago | parent | prev | next [-]

This reminds me of a comment I read here a long time ago; it was about XML and how DTDs were supposed to permit one to be strict. However, in reality, the person said, if the other end sending you broken XML is a big corp that refuses to fix it, then you have no choice but to accept it.

Bottom line: it's all a matter of balance of powers. If you're the smaller guy in the equation, you'll be "Postel'ed" anyway.

Yet Postel's law is still in the "the road to hell is paved with good intentions" category, for the reason you explain very well (AKA XKCD #1172, "Workflow"). Wikipedia even lists a couple of major criticisms of it [1].

[1] https://en.wikipedia.org/wiki/Robustness_principle

jimbokun an hour ago | parent [-]

Would be tempted to stick a proxy in there that checks if the data is malformed in the expected way, and if so converts it to the valid form before forwarding to the real service.
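
Something like this hypothetical shim, assuming for the sake of illustration that the sender's one known malformation is a missing XML declaration:

```python
def normalize(raw: bytes) -> bytes:
    """Repair the one malformation the big-corp sender is known to
    produce, then forward; anything else passes through untouched."""
    text = raw.decode("utf-8", errors="replace")
    # Invented example repair: the sender omits the XML declaration.
    if not text.lstrip().startswith("<?xml"):
        text = '<?xml version="1.0"?>\n' + text
    return text.encode("utf-8")
```

The point of the proxy is containment: the real service stays strict, and the known deviation is handled in exactly one place.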

someguyiguess 5 hours ago | parent | prev [-]

I propose we add your law: Jimmy’s Law

AussieWog93 7 hours ago | parent | prev | next [-]

DRY is my pet example of this.

I've seen CompSci guys especially (I'm EEE background, we have our own problems but this ain't one of them) launch conceptual complexity into the stratosphere just so that they could avoid writing two separate functions that do similar things.

busfahrer 7 hours ago | parent | next [-]

I think I remember a Carmack tweet where he mentioned in most cases he only considers it once he reaches three duplicates

michaelcampbell 6 hours ago | parent | next [-]

The "Rule of 3" is a pretty well known rule of thumb; I suspect Carmack would admit it predates him by a fair bit.

mcv 7 hours ago | parent | prev | next [-]

I once heard of a counter-principle called WET: Write Everything Twice.

jimbokun an hour ago | parent | prev | next [-]

Another law for the list!

whattheheckheck 7 hours ago | parent | prev [-]

Why 3? What is this, baseball?

Take the 5 Rings approach.

The purpose of the blade is to cut down your opponent.

The purpose of software is to provide value to the customer.

It's the only thing that matters.

You can also philosophize about why people with blades needed to cut down their opponents, along with why we have to provide value to the customer, but that's beyond the scope of this comment.

marcosdumay 4 hours ago | parent | next [-]

Why baseball? You don't use the number 3 in any other context?

If you write a lot of code, the odds of something repeating in another place just by coincidence are quite large. But the odds of the specific code that repeated once repeating again are almost none.

That's a basic rule from probability that appears in all kinds of contexts.

Anyway, both DRY and WET assume the developers are some kind of ignorant automaton that can't ever know the goal of their code. You should know whether things are repeating by coincidence or not.

jimbokun an hour ago | parent | prev | next [-]

Rule of 3 will greatly benefit the customer over time as it lessens the probability of bugs and makes adding new features faster.

ta20240528 7 hours ago | parent | prev | next [-]

"The purpose of software is to provide value to the customer."

Partially correct. To its owners, the purpose of your software is also to keep providing value to customers, competitively, into the future.

What we have learnt is that software needs to be engineered: designed and structured.

nradov 5 hours ago | parent [-]

And yet some of the software most valuable to customers was thrown together haphazardly with nothing resembling real engineering.

shermantanktop 5 hours ago | parent | next [-]

If you get lucky doing that you might regret it. Especially with non-technical management.

Making software is a back-of-house function, in restaurant terms. Nobody out there sees it happen, nobody knows what good looks like, but when a kitchen goes badly wrong, the restaurant eventually closes.

galbar 5 hours ago | parent | prev | next [-]

These projects quickly reach a point where evolving them further is too costly and risky. To the point that the org owning it will choose to stop development and do a re-implementation which, despite being a very costly and risky endeavor, ends up being the better choice.

This is a very costly way of developing software.

nradov 5 hours ago | parent [-]

It's easy to say that organizations should do it right the first time, in terms of applying proper engineering practices. But they often didn't have the time, capital, and skillset to do that. Not ideal, but that's often how things work in the real world and it will never change.

datadrivenangel 3 hours ago | parent [-]

Organizations should at least avoid doing it catastrophically wrong, especially once a core design/concept has mostly solidified. Putting a little time into reliability and guardrails prevents a huge amount of downside.

I've been at organizations that don't think engineers should write tests because it takes too much time and slows them down...

lamasery 5 hours ago | parent | prev [-]

Plenty of businesses or products within businesses stagnate and fail because their software got too expensive to maintain and extend. Not infrequently, this happens before it even sees a public release. Any business that can't draw startup-type levels of investment to throw effectively infinite amounts of Other People's Money at those kinds of problems, risks that end if they allow their software to get too messed-up.

The "who gives a shit, we'll just rewrite it at 100x the cost" approach to quality is very particular to the software startup business model, and doesn't work elsewhere.

TheGRS 2 hours ago | parent | prev [-]

One is one. Two is a coincidence. And three is a trend. That's my personal head canon.

ericmcer 3 hours ago | parent | prev | next [-]

DRY and KISS were right next to each other which I thought was funny.

aworks 6 hours ago | parent | prev | next [-]

I worked for a company that also had hardware engineers writing RTL. Our software architect spent years helping that team reuse/automate/modularize their code. At a minimum, it's still just text files with syntax, despite rather different semantics.

zahlman 7 hours ago | parent | prev | next [-]

I've heard that story a few times (ironically enough) but can't say I've seen a good example. When was over-architecture motivated by an attempt to reduce duplication? Why was it effective in that goal, let alone necessary?

mosburger 6 hours ago | parent | next [-]

I think there is often tension between DRY and "a thing should do only one thing." E.g., I've found myself guilty of DRYing up a function, but the use is slightly different in a couple of places, so... I know, I'll just add a flag/additional function argument. And you keep doing that, and soon you have a messed-up function with lots of conditional logic.

The key is to avoid the temptation to DRY when things are only slightly different and find a balance between reuse and "one function/class should only do one thing."
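
A contrived illustration of that failure mode (all names invented): one shared function accreting flags until its callers share almost no real logic, versus two small functions that merely look similar today.

```python
# The "DRY" version after a few rounds of flag accretion: each flag
# exists for exactly one caller, and almost no line runs for all of them.
def render_report(data, summary_only=False, include_totals=False,
                  legacy_format=False):
    lines = []
    if not summary_only:
        lines.extend(str(row) for row in data)
    if include_totals:
        lines.append(f"total: {len(data)}")
    if legacy_format:
        lines = [line.upper() for line in lines]
    return "\n".join(lines)

# Versus two functions that merely looked similar when they were new:
def render_full(data):
    return "\n".join(str(row) for row in data)

def render_summary(data):
    return f"total: {len(data)}"
```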

physicles 3 hours ago | parent [-]

For sure. I feel I need all of my experience to discern the difference between “slightly different, and should be combined” and “slightly different, and you’ll regret it if you combine them.”

One of my favorite things as a software engineer is when you see the third example of a thing, it shows you the problem from a different angle, and you can finally see the perfect abstraction that was hiding there the whole time.

dasil003 6 hours ago | parent | prev | next [-]

Buy me a beer and I can tell you some very poignant stories. The best ones are where there is a legitimate abstraction that could be great, assuming A) everyone who had to interact with the abstraction had the expertise to use it, B) the details of the product requirements conformed to the high level technical vision, now and forever, and C) migrating from the current state to the new system could be done in a bounded amount of time.

My view is over-engineering comes from the innate desire of engineers to understand and master complexity. But all software is a liability, every decision a tradeoff that prunes future possibilities. So really you want to make things as simple as possible to solve the problem at hand as that will give you more optionality on how to evolve later.

onionisafruit 5 hours ago | parent | prev | next [-]

I’ll give a simplified example of something I have at work right now. The program moves data from the old system to the new system. It started out moving a couple of simple data types that were basically the same thing under different names. It was a great candidate for reusing a method. Then a third type was introduced that required a little extra processing in the middle. We updated the method with a flag to do that extra processing. One at a time, we added 20 more data types that each had slightly different needs. Now the formerly simple method is a beast with several arguments that change the flow so much that there are probably just a few lines that run for all the types. If we didn’t happen to start with two similar types, we probably wouldn’t have built this spaghetti monster.

caminante 6 hours ago | parent | prev | next [-]

IMHO, it comes down to awareness/probability about the need to future proof or add defensive behavior.

The spectrum is [YAGNI ---- DRY]

A little less abstract: designing a UX comes to mind. It's one thing to make something workable for you, but to make it for others is way harder.

markburns 5 hours ago | parent | prev [-]

I saw a fancy HTML table generator that had so many parameters and flags and bells and whistles that it took IIRC hundreds of lines of code to save writing a similar amount of HTML in a handful of different places.

Yes the initial HTML looked similar in these few places, and the resultant usage of the abstraction did not look similar.

But it took a very long time reading each place a table existed and quite a bit longer working out how to get it to generate the small amount of HTML you wanted to generate for a new case.

Definitely would have opted for repetition in this particular scenario.

pydry 7 hours ago | parent | prev | next [-]

DRY is misunderstood. It's definitely a fundamental aspect of code quality; it's just one of about four, and maximizing it to the exclusion of the others is where things go wrong. Usually it comes at the expense of loose coupling (which is equally fundamental).

The goal ought to be to aim for a local optimum across all of these qualities.

Some people just want to toss DRY away entirely, though, or be uselessly vague about when to apply it ("use it when it makes sense"), and that's not really much better than being a DRY fundamentalist.

layer8 7 hours ago | parent [-]

DRY is misnamed. I prefer stating it as SPOT — Single Point Of Truth. Another way to state it is this: If, when one instance changes in the future, the other instance should change identically, then make it a single instance. That’s really the only DRY criterion.
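
A tiny illustration of that criterion (the constant and names are made up): the size check and the error message must always change together, so they should read from a single definition.

```python
MAX_UPLOAD_MB = 25  # the single point of truth

def is_valid_size(size_mb: float) -> bool:
    return size_mb <= MAX_UPLOAD_MB

def size_error_message() -> str:
    # Reads the same constant, so the enforced limit and the limit the
    # user is told about can never drift apart.
    return f"File exceeds the {MAX_UPLOAD_MB} MB limit"
```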

xnorswap 7 hours ago | parent | next [-]

I like this a lot more, because it captures whether two things are necessarily the same or just happen to be currently the same.

A common "failure" of DRY is coupling together two things that only happened to bear similarity while they were both new, and then being unable to pick them apart properly later.

CodesInChaos 6 hours ago | parent [-]

> then being unable to pick them apart properly later.

Which is often caused by the "midlayer mistake" https://lwn.net/Articles/336262/

mosburger 6 hours ago | parent | prev | next [-]

I said this elsewhere in the comments, but I think there's sort of a fundamental tension that shows up sometimes between DRY and "a function/class should only do one thing." E.g., there might be two places in your code that do almost identical things, so there's a temptation to say "I know! I'll make a common function, I'll just need to add a flag/extra argument..." and if you keep doing that you end up with messy "DRY" functions with tons of conditional logic that tries to do too much.

Yeah there are ways to avoid this and you need to strike balances, but sometimes you have to be careful and resist the temptation to DRY everything up 'cuz you might just make it brittler (pun intended).

gavmor 4 hours ago | parent | prev | next [-]

Yes, and there are many different kinds of truth, so when two arise together—we can call this connascence—we can categorize how these instances overlap: Connascence of Name, Connascence of Algorithm, etc.

Silamoth 6 hours ago | parent | prev | next [-]

That’s how I understand it as well. It’s not about an abstract ideal of duplication but about making your life easier and your software less buggy. If you have to manually change something in 5 different places, there’s a good chance you’ll forget one of those places at some point and introduce a bug.

mcv 7 hours ago | parent | prev | next [-]

That's how I understood it. If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.

mjr00 6 hours ago | parent [-]

> If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.

The tricky part is that sometimes "a new thing" is really "four new things" disguised as one. A database table is a great example because it's a failure mode I've seen many times. A developer does it once and has to add what they perceive as the same thing in four places: the database table itself, the internal DB->code translation (e.g. ORM mapping), the API definition, and maybe a CRUD UI widget. The developer thinks, "oh, this isn't DRY," and looks to tools like Alembic and PostgREST or Postgraphile to handle this end-to-end; now you only need to write to one place when adding a database table, great!

It works great at first, then more complex requirements come down: the database gets some virtual generated columns which shouldn't be exposed in code, the API shouldn't return certain fields, the UI needs to work off denormalized views. Suddenly what appeared to be the same thing four times is now four different things, except there's a framework in place which treats these four things as one, and the challenge is now decoupling them.

Thankfully most good modern frameworks have escape valves for when your requirements get more complicated, but a lot of older ones[0] really locked you in and it became a nightmare to deal with.

[0] really old versions of Entity Framework being the best/worst example.

mcv 6 hours ago | parent [-]

I believe that was the point of Ruby on Rails: that you really had to just create the class, and the framework would create the table and handle the ORM. Or maybe you still had to write the migration; it's been a while. That was pretty spectacular in its dedication to DRY, but also pretty extreme.

But the code I'm talking about is really adding the same thing in 4 different places: the constant itself, adding it to a type, adding it to a list, and there was something else. It made it very easy to forget one step.

pydry 7 hours ago | parent | prev [-]

Renaming it doesn't change the nature of the problem.

There should often be two points of truth because having one would increase the coupling cost more than the benefits that would be derived from deduplication.

iwontberude 7 hours ago | parent | prev [-]

Why do they have to be so smart but so annoying at the same time?

blandflakes 7 hours ago | parent | prev | next [-]

This was also true of Amazon's Leadership Principles. They are pretty reasonable guidelines, but in a debate, it really came down to which one you could most reasonably weaponize in favor of your argument, even to the detriment of several others.

Which maybe is also fine, I dunno :)

rustyhancock 6 hours ago | parent [-]

It's because they are heuristics intended to be applied by knowledgeable and experienced humans.

It can be quite hard to explain when a student asks why you did something a particular way. The truthful answer is that it felt like the right way to go about it.

With some thought you can explain it partly, and really justify the decision that was made subconsciously.

If they're asking about a conscious decision, it's rarely much more helpful than having to say that's what the regulations or guidelines say.

Where they really learn is seeing those edge cases and gray areas.

davedx 4 hours ago | parent | prev | next [-]

As a very senior SWE with a decent amount of eng decision making responsibility these days I still find I get so much mileage out of KISS and YAGNI that I never really think about any other laws.

So much SWE is overengineering. Just like this website, to be honest. You don't get away with all that bullshit in other eng professions, where your BoM and labour costs are material.

rapnie 6 hours ago | parent | prev | next [-]

I like alternatives to formal IT lawfare, like the CUPID [0] properties for joyful coding by Dan North, as an alternative to the SOLID principles.

[0] https://cupid.dev/

jolt42 4 hours ago | parent | prev | next [-]

I'll propose this as the only unbreakable law: "everything in moderation", which I feel implies any law is breakable, at which point this starts sounding like the barber's paradox. What else does anyone propose as unbreakable?

alok-g an hour ago | parent | next [-]

>> everything in moderation

Saying this is like saying 'pick the optimum point' without saying anything about how to find the optimum point. This cannot be a law; it is the definition of optimum.

Note that the optimum point need not be somewhere in the middle or 'inside', like a local maximum. The optimum could very well be on an extreme of the domain (the input variable space).

zdc1 3 hours ago | parent | prev [-]

Counterpoint: "everything in moderation, including moderation"

ghm2180 8 hours ago | parent | prev | next [-]

This is doubly true in machine learning engineering. Knowing which methods to avoid is just as important as knowing what might work well and why. Importantly, a bunch of data science techniques — and I use data science in the sense of making critical team/org decisions — matter just as much, and for those you should understand a bit of statistics, not only data-driven ML.

Silamoth 6 hours ago | parent [-]

Statistics is absolutely fundamental to data science. But I’m not sure this relates to the above idea of “laws” being internally contradictory?

diehunde 6 hours ago | parent | prev | next [-]

I guess that's why confirmation bias is also listed?

ericmcer 3 hours ago | parent | prev | next [-]

Most of them felt contradictory and kind of antiquated.

Reading through the list mostly made me feel sad. You can't help but interpret these through the modern lens of AI assisted coding. Then you wonder if learning and following (some) of these for the last 20 years is going to make you a janitor for a bunch of AI slop, or force you into a coding style where these rules are meaningless, or make you entirely irrelevant.

ChrisMarshallNY 5 hours ago | parent | prev [-]

Great point.

Sort of like a real code of law.