anonymousiam 5 hours ago

The main point that I did not see mentioned in this piece is that Deming should only be applied to MANUFACTURING environments. Things like engineering are too chaotic to identify processes or trends in the engineering itself, and trying to control those engineering processes with SPC doesn't really improve the quality of the engineering; it just adds stress, makes things take longer, and probably lowers the quality of the thing being engineered.

Obviously, if a quality issue is detected in manufacturing, there may be some steps that engineering could take to improve the manufacturing process and make things stable enough to obtain meaningful statistics. This is part of the Deming feedback process, and part of the System Engineering Life Cycle.

kqr 2 hours ago | parent | next [-]

I think you're confusing Deming with statistical process control.

It is true that SPC works best for the non-chaotic parts of product development and manufacturing alike. There are parts of product development that are non-chaotic, and SPC works just fine there, too.

In addition to SPC, Deming had strong opinions on how organisations ought to work and these are relevant also for product development. These are things like

- Understand the underlying customer need.

- The leaders shape the output of the organisation by shaping its processes.

- It is cheaper and faster to build quality and security into the product from the start instead of trying to put it in at the end.

- Close collaboration with suppliers can benefit both parties.

- Have leaders skilled in whatever their direct reports are doing. Use them as coaches normally and as spare workers in times of high demand.

- Collaborate across parts of the organisation instead of throwing things over walls.

- Don't just tell people to do better. Show them how they can do better. Give them the knowledge, tools, and authority they need to do better.

These are just as relevant for product development as for manufacturing. If anything, even more so, thanks to the chaotic nature of product development.

ako 4 hours ago | parent | prev | next [-]

I think Donald G. Reinertsen did a good job in his books applying Deming to the design process.

kqr an hour ago | parent | next [-]

Reinertsen has borrowed more from queueing theory than from Deming. This is not unexpected -- Deming worked mainly with thin-tailed statistics, whereas Reinertsen applied his knowledge to the power laws that show up more in design and development work.

(The two approaches meet in the middle: Deming inspired lean manufacturing, which also applies queueing theory, and the latter has convenient results for both thin- and thick-tailed processes.)
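A toy illustration of why the tail matters (all numbers here are made up for the sketch; the Lindley recursion itself is standard queueing theory): run the same single-server FIFO queue at the same utilisation, once with thin-tailed exponential service times and once with heavy-tailed Pareto ones, and the heavy-tailed run's mean wait is dominated by a handful of enormous jobs.

```python
import random

def mean_wait(service_sampler, n_jobs=100_000, arrival_rate=0.8, seed=1):
    """Mean wait in a single-server FIFO queue, via Lindley's recursion."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_jobs):
        interarrival = rng.expovariate(arrival_rate)  # Poisson arrivals
        service = service_sampler(rng)
        # Lindley: each job waits for whatever backlog the previous one left.
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n_jobs

# Both samplers have mean service time 1, so utilisation is ~0.8 in both runs.
thin = lambda rng: rng.expovariate(1.0)           # exponential: thin tail
heavy = lambda rng: rng.paretovariate(1.5) / 3.0  # Pareto, alpha=1.5: infinite variance

print(f"thin tail:  {mean_wait(thin):.1f}")   # near the M/M/1 prediction of 4
print(f"heavy tail: {mean_wait(heavy):.1f}")  # much larger, and far less stable
```

Note how the thin-tailed queue is predictable enough that classical formulas describe it well, while the heavy-tailed one is exactly the regime where Reinertsen's batch-size and queue-length arguments bite.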

regularfry 2 hours ago | parent | prev [-]

The chief problem I have with Reinertsen (and it's not his fault, at all) is how difficult it is to get people to buy in to the idea that cost of delay exists, let alone buy in to measuring it.

sinnsro 3 hours ago | parent | prev | next [-]

The core issue with the article is that the author mixes up bad management and "fog of management" with the fact that financial results have a disproportionate amount of influence on how things are organised. Every team and employee should do their part to contribute to the financial targets every quarter and within the fiscal year. Which clashes with Deming's points 11b and 12b [1].

_________

1. https://deming.org/explore/fourteen-points/

estearum an hour ago | parent | next [-]

The problem is that "every team and employee doing their part to contribute to financial targets", as stated, is liable to produce suboptimization.

A person on the assembly line can "contribute to financial targets" by taking a shortcut that reduces their local spend but emerges as a much more expensive problem down the road.

So it's true that every employee should do their part to contribute to financial targets, but defining "their part" is the hard part. That is something only management can do, and something MBO obscures by trying to reduce it to waterfalling the goal down from above.

ffsm8 3 hours ago | parent | prev [-]

> Every team and employee should do their part to contribute to the financial targets every quarter and within the fiscal year

The inevitable result of this, however, is the devaluation of the future. E.g. if the statement were true, it would be the R&D workers' responsibility to hand in their resignations (or their managers' responsibility to lay them off) whenever their product won't get paying customers within the same fiscal year. And the same applies to any other long-term expenditure or investment the company might be considering, e.g. building a new fab or production line.

So no, that statement of yours is not actually true. It should not be entirely ignored, but it should not become a guiding principle unless you want to run the company into the ground.

sinnsro an hour ago | parent [-]

The statement holds true for a broad set of companies and management styles. I speak from personal experience: the wrong incentives are always there, and they run counter to many of the things Deming listed. The obsession with "financial impact" is there to varying degrees, even in functions where it is hard to quantify said impact.

It might not apply to R&D-heavy companies, but we do see engineering companies pivoting to more finance-oriented management. Boeing is one such case; look at the damage.

ignoramous 4 hours ago | parent | prev | next [-]

> trying to control those engineering processes with SPC doesn't really improve the quality of the engineering, it just adds stress, makes things take longer, and probably lowers the quality of the thing that is being engineered

Totally depends on the scale. For pizza-sized teams with a neighbourhood-pizza-shop-sized impact, sure. Large-scale projects without controls & feedback loops in place will fall apart; see "Scaling teams": https://archive.is/FQKJH

If you follow some medium-to-large-scale projects (like Go / Chromium), the value of processes & quality control becomes clear, even if it may seem to come at the expense of velocity.

  The great insight of Deming's methods is that you can (mostly) identify the difference between common and special causes mathematically, and that you should not attempt to fix common causes directly - it's a waste of time, because all real-life processes have random variation.

  Instead, what you want to do is identify your mean and standard deviation, plot the distribution, and try to clean it up. Rather than poking around at the weird edges of the distribution, can we adjust the mean left or right to get more output closer to what we want? Can we reduce the overall standard deviation - not any one outlier - by changing something fundamental about the process?

  As part of that, you might find out that your process is actually not in control at all, and most of your problems are "special" causes. This means you're overdriving your process. For example, software developers working super long hours to meet a deadline might produce bursts of high productivity followed by indeterminate periods of collapse (or they quit and the whole thing shuts down, or whatever). Running them at a reasonable rate might give worse short-term results, but more predictable results over time, and predictable results are where quality comes from.
https://apenwarr.ca/log/20161226
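The mechanical core of that quote is tiny. A rough sketch of the control-chart idea (the defect counts are invented for illustration, and real Shewhart charts estimate sigma from moving ranges rather than the plain sample stdev used here):

```python
from statistics import mean, stdev

def control_limits(samples, sigmas=3.0):
    """Control limits: points outside mean +/- 3 sigma suggest special causes."""
    m, s = mean(samples), stdev(samples)
    return m - sigmas * s, m + sigmas * s

def special_causes(samples):
    """Indices of points outside the control limits."""
    lo, hi = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lo or x > hi]

# Daily defect counts: mostly routine variation, plus one special-cause spike.
counts = [7, 9, 8, 10, 9, 8, 7, 11, 9, 8, 30, 9, 8, 10, 9]
print(special_causes(counts))  # flags only the spike at index 10
```

Everything inside the limits is common-cause variation: per Deming, you don't chase those points individually, you change the process to shift the mean or shrink the spread.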

Distributed systems are also a way to be thoroughly humbled by complexity: https://fly.io/blog/corrosion/

mobilejdral an hour ago | parent | prev [-]

Having worked on software that runs manufacturing plants, I read your comment as echoing the idea too many engineers have that they are "better" than manufacturing and its lessons don't apply to them.

Go back to your desk and work on a PR that will go through a 20-step, constantly changing process before a hopefully semi-regular release goes out to customers, and then tell me how ignoring all the accumulated knowledge on how to do this well is good for your career.

For a long time I assumed folks like you were simply uneducated, but now I see it for what it is: elitism.