neilv 5 days ago

I used to work heavily in this kind of system modeling (developing tools for it, and dogfooding), and still use it when high-value for figuring out or communicating an aspect of a system.

Here's a challenge, to help appreciate the nature of these: try to find an error in the diagrams.

It's usually harder than you might think, since, even when you know the notation and metamodel semantics, the diagram is an information-dense statement about a domain.

You usually have to know or learn a lot about the domain, and/or have a domain expert you can walk through it exhaustively with, before you can find errors.

And an error can be a whopper: a single graph edge that is missing, or between the wrong vertices, or with the wrong adornments, can have huge implications.

For example, large amounts of work that have to be redone, or a project abandoned, or a mess that takes 10x longer than it should to write, and 10x the tech debt going forward with a bad architecture, or a fundamental security flaw.

One of the mistakes many people make is treating formalized diagrams as "marketecture" visuals, as if they're only needed for handwaving sales presentation slides, where there's some kind of visual for every concept they want to be able to literally point to.

Nope, if you have software engineers and domain experts communicating and reasoning about your system in only the fuzzy terms of sales/exec presentations, you're really stabbing yourself in the face.

tinco 4 days ago | parent | next [-]

One of my more painful design mistakes happened in this sort of way, when designing a system for recording inspections. I interviewed multiple inspectors and came up with a representation that was a little more elaborate than I would have hoped, but that, I believed, at least captured all the information.

Then the company progressed and eventually reached market fit, and for two years the team and I were dealing with this increasingly burdensome complexity while reaping no rewards from it. Then one day we had enough, and a colleague redesigned the system to ditch the extra complexity, resulting in a much more elegant design that was easier to reason about.

That bliss continued for less than a year, until some customers asked for a particular report that we needed to generate based on a structure of the information that was now no longer present. We had to redesign it again; migrating was super painful and required a temporary workaround: an extra branch on literally every piece of code that touched the inspection system.
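A rough sketch of what that migration-era branching looks like in practice (the `Inspection` type, schema versions, and field names here are all invented for illustration, not the actual system):

```python
from dataclasses import dataclass

LEGACY_SCHEMA = 1  # the simplified model that dropped the structure
NEW_SCHEMA = 2     # the re-introduced structured model

@dataclass
class Inspection:
    schema_version: int
    data: dict

def render_report(inspection: Inspection) -> str:
    # During the migration, every code path that touches inspections
    # needs a branch like this until all records are backfilled.
    if inspection.schema_version == LEGACY_SCHEMA:
        # Legacy records lack the structure the report needs, so
        # reconstruct what we can from the flat data.
        findings = [v for k, v in inspection.data.items()
                    if k.startswith("finding_")]
    else:
        findings = [f["text"] for f in inspection.data["findings"]]
    return "\n".join(findings)
```

Multiply that `if` across every function in the subsystem and you get the "extra branch on literally every piece of code" situation.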

In retrospect, I still don't know how I could have convinced the team that the complexity was something we needed when no customer required it for 3 years. Especially when the colleagues who took over that system from me had gained much more experience and expertise in the domain than I had since I had designed the original.

It would probably have been better if I had recorded the requirement that prompted the complexity, but had not actually implemented it as no customer had actual need for it at the time. Then we would not have had to deal with the complexity the first three years, and then evolved the product when the need arose.

molf 4 days ago | parent | next [-]

This seems like a business problem more than a design issue. Systems need to evolve alongside the business they support. Starting out with a simple design and evolving it over time to something more nuanced is a feature. Your colleague was right, and you were also right; except for the part where all nuances of the ideal solution need to be present on day 1.

The clients you have on day one are often very different from the ones you’ll have a few years in. Even if they’re the same organisations, their business, expectations, and tolerance for complexity likely have changed. And the volume of historical data can also be a factor.

A pattern I’ve seen repeatedly in practice:

1. A new system that addresses an urgent need feels refreshing, especially if it’s simple.

2. Over time (1, 3, 10 years? depending on industry), edge cases and gaps start appearing. Workarounds begin to pile up for scenarios the original system wasn’t built to handle.

3. Existing customers start expecting these workarounds to be replaced with proper solutions. Meanwhile, new customers (no longer the early adopter type) have less patience for rough edges.

The result is increasing complexity. If that complexity is handled well, the business scales and can support growing product demands.

If not… I'm sure many around here have experience with where that leads (to borrow Tolstoy: “All happy families are alike; each unhappy family is unhappy in its own way.”).

At the same time a market niche may open for a competitor that uses a simpler approach; goto step 1.

The flip side, and this is key: capturing all nuances on day 1 will cause complexity issues that most businesses at that stage are not yet equipped to handle. And this is why I believe it is mostly a business problem.

neilv 4 days ago | parent | prev [-]

Thinking aloud here...

I like your idea of capturing some requirement(s) that motivated the extra complexity, and retaining those requirements in a place they'll be seen and consulted when any new release planning and architecture happens.

This seems related to something I do when scoping an MVP or release: work through the requirements breakdown, and mark the things we considered but decided not to do in that release (the "NON-Reqs"). Keeping them in the document gets everyone who looks at it up to speed, so we don't keep rehashing, and it also makes very clear to people that this thing someone told them the system would do is definitely not happening (a very-very-very common miscommunication in some companies).

But if a NON-Req suggests some future growth affordances that I think are important to try to include in the new architecture work now, to avoid or reduce very expensive refactoring/rewrites/rearchitecture in the near future, maybe some of those NON-Reqs should be moved to bullet points in a requirements section like "Architecture Growth Affordances", and become medium-priority requirements not to preclude in the architecture. There they can be triaged, prioritized, and traced like any other requirement.
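One way to picture the distinction between committed requirements, NON-Reqs, and growth affordances is as a status field on each requirement record; this is just a sketch of the idea, with invented IDs and names:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    COMMITTED = "committed"           # will ship in this release
    NON_REQ = "non-req"               # considered and explicitly excluded
    GROWTH_AFFORDANCE = "affordance"  # not built now, but don't preclude it

@dataclass
class Requirement:
    req_id: str
    text: str
    status: Status

reqs = [
    Requirement("R-101", "Record inspection findings", Status.COMMITTED),
    Requirement("R-102", "Customer-defined report layouts", Status.NON_REQ),
    Requirement("R-103", "Structured findings for future reporting",
                Status.GROWTH_AFFORDANCE),
]

def affordances(requirements):
    """Requirements the architecture must not preclude, for traceability."""
    return [r.req_id for r in requirements
            if r.status is Status.GROWTH_AFFORDANCE]
```

The point is that an affordance is a first-class, traceable record, not a footnote in someone's head.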

I like that idea a bit, but a few problems:

* Someone might get confused that you're promising to deliver that in a future release.

* Someone might assign blame if, when the future release planning comes, you say that feature will take a lot of work, when they thought you already did most of the work for it in the previous release.

* You'll need everyone involved to find the right balance between how much to think about this release, and how much to keep in mind the ability to do future releases. By default, people will have a lot of trouble with both kinds of thinking.

* A lot of these architectural decisions with growth in mind will happen after the requirements are locked in, and making frequent changes to that set of requirements is a very confusing precedent to set. (For example, most people won't take the commitment to the requirements seriously, if they think things can just be tossed into it later as needed, or if they think you're just doing incompetent theatre. You want them to get the idea "anything that isn't committed to in the requirements, isn't happening in the release; so everyone think and agree hard together now, so that we'll all be working effectively and efficiently towards the same viable release". Only after everyone gets that, do you introduce the flexibility.)

Maybe those requirements should go to a place for "anticipated future requirements" (but one that isn't only a backlog of "candidate requirements"), so that it can be referenced for traceability when you make an architectural decision with future options in mind?

Or, the fallback (which I've used countless times) is to document with the architecture (embedded in the code, or in other canonical arch docs) the rationale for the decisions. Then, when someone goes to change the architecture, that information is hopefully somewhere they'll see. (This assumes that the person changing the architecture is a smart professional.)
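For the fallback of embedding rationale with the code, one lightweight sketch is to attach the "why" to the artifact it constrains, so a future maintainer sees it before "simplifying". Everything here is hypothetical (the decorator, the class, and the referenced NON-Req ID are invented for illustration):

```python
def design_rationale(note: str):
    """Attach a design-rationale note to a class or function,
    so the 'why' travels with the code it constrains."""
    def attach(obj):
        obj.__design_rationale__ = note
        return obj
    return attach

@design_rationale(
    "Findings are stored as structured records, not a flat text blob, "
    "because an anticipated reporting requirement (NON-Req R-102, "
    "hypothetical) needs per-finding grouping. Revisit that requirement "
    "before flattening this."
)
class FindingStore:
    """Stores inspection findings as structured records."""
```

A plain comment block or a canonical architecture doc serves the same purpose; what matters is that the rationale is somewhere the person changing the architecture will actually look.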

motorest 4 days ago | parent | prev | next [-]

> Here's a challenge, to help appreciate the nature of these: try to find an error in the diagrams.

The point of diagrams is to communicate concepts and ideas. They are a design tool, not a design goal. They are used to help reason about your design. One of the most basic uses is to step through each of your requirements and verify that the design meets every single one of them.
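That walk-through can even be made mechanical: map each requirement to the design elements claimed to satisfy it, and flag the gaps. A minimal sketch, with invented requirement IDs and element names:

```python
coverage = {
    # requirement id -> design elements claimed to satisfy it
    "R-1": ["InspectionForm", "FindingStore"],
    "R-2": ["ReportGenerator"],
    "R-3": [],  # not yet addressed by any part of the design
}

def uncovered(cov):
    """Requirements with no design element covering them."""
    return [req for req, elements in cov.items() if not elements]
```

Any requirement that comes back uncovered is exactly the kind of error the parent comment's challenge is about finding.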

> For example, large amounts of work that have to be redone, or a project abandoned (...)

No, this is not the diagram's fault. That's a problem caused by poor or even nonexistent design. You only need to redo a project if you fail to meet critical requirements and can't figure out a way to make them work with what you have. The diagrams only convey the output of your design process; if that process is faulty, it's not the diagram's fault.

pydry 4 days ago | parent | prev | next [-]

They're an awful substitute for code. About the best use for them is to give people an overview of the architecture of an existing system.

Even then they're not great, because they tend to go out of date quite quickly and they're quite expensive to build.

As a means of software design they're BDUF crack: an incitement to bad decisions made in advance of writing software, decisions that would almost always be better made retrospectively via refactoring.

mdaniel 4 days ago | parent | prev [-]

> try to find an error in the diagrams.

Easy peasy, it's the 3rd blue line above the purple line /s

https://github.com/takaakit/uml-diagram-for-ddd-example-in-e...

I can't believe someone took the time to generate such a thing, as if it were useful to anyone.

rapnie 4 days ago | parent | next [-]

I found this video extremely funny, as I've sort of been in the same position: "draw seven strictly perpendicular lines", asked of the dev by the sales team.

https://www.youtube.com/watch?v=BKorP55Aqvg

serial_dev 4 days ago | parent | prev | next [-]

It’s a great demonstration of how unrealistic it is to use.

If it is this complicated for a demo project written for the purposes of the book, then, in my opinion, it's completely inadequate for handling anything remotely real.

A project ongoing for five years, 10 devs on the team, each year two resign and two join, some good, some mediocre: can you imagine the mess this will be? And that's not even a large project; it's just an average, but real, project size.

KISS / YAGNI goes a long way.

takaakit 4 days ago | parent [-]

I also think it's bad to create this many diagrams at this level of detail, or try to keep the model updated, in an active software project. I think it's important to be selective—sometimes that even means choosing not to choose.

RealityVoid 4 days ago | parent | prev | next [-]

> I can't believe someone took time to generate such a thing, as if it is useful to anyone

Thanks for saying this. UML is beyond useless, almost nobody understands it, almost nobody does it right and almost nobody uses it when they need to (because it makes no sense to them).

motorest 4 days ago | parent [-]

> UML is beyond useless, almost nobody understands it, almost nobody does it right and almost nobody uses it when they need to (because it makes no sense to them).

I completely disagree. UML is one of the best tools to communicate system designs and behavior in a high-level language that is easy to understand. Like everything in software engineering, you can make things very hard to understand or you can focus on making things easy to understand, and if you do not take the time to familiarize yourself with the basics, then everything sounds very complex and useless to you.

takaakit 4 days ago | parent | prev [-]

This directed graph is automatically generated by the plugin based on the UML structural elements (classifiers and relationships). Hope this helps.
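For anyone curious how such a graph falls out of the model: each classifier becomes a vertex and each relationship a directed edge, so the derivation is mechanical. A minimal sketch of the idea, with invented classifier names (this is not the plugin's actual code):

```python
relationships = [
    # (source classifier, target classifier, relationship kind)
    ("Order", "OrderLine", "composition"),
    ("Order", "Customer", "association"),
    ("OrderService", "Order", "dependency"),
]

def to_adjacency(rels):
    """Build a directed adjacency list from UML-style relationships."""
    graph = {}
    for src, dst, _kind in rels:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])  # ensure sink vertices appear too
    return graph
```

From the adjacency list, any standard graph layout tool can render the picture.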