davnicwil 2 hours ago

With respect, I think this approach is actually harmful to everyone in the org because you're trying to twist reality to fit a premise that is just impossible to make true: that estimates of how long it takes to build software are reliable.

The reluctance to accept that it cannot be made true achieves nothing positive for anybody. Rather, it results in energy being lost as heat that could otherwise be used for productive work.

This isn't about respect between functions, nor about what ought to be professionally acceptable in the hypothetical. It's about accepting, and working downstream of, a situation grounded in objective truth.

Believe me, I wish it were true that software estimates could be made reliable. Everyone does. It would make everything involved in making and selling software easier. But, unfortunately, it's not easy. That's why so few organisations succeed at it.

I don't present easy answers to the tensions that arise from working downstream of this reality. Yes, it's easier to make deals contingent on firm delivery dates when selling. Yes, it's easier to plan marketing around concrete launch dates. Yes, it's easier to plan ahead when you have reliable timeframes for how long things take.

But, again, unfortunately that is simply not the reality we live in. It is not easy. Flexibility, forward planning, working to where the puck is going to be, and accepting redundancy or lost work if it never arrives there: all of that is part of it.

That, I think, is what people in different functions are best served rallying and collaborating around: one team that builds, markets and sells software with the understanding that reliable estimates are not possible. There simply is no other way.

RaftPeople 2 hours ago | parent | next

> you're trying to twist reality to fit a premise that is just impossible to make true: that estimates of how long it takes to build software are reliable.

It's not binary, it's a continuum.

With experience, it's possible to identify whether a new project or set of tasks is very similar to work done previously (possibly many times) or whether it covers substantial new territory with many unknowns.

The more similar the work is to past work, the higher the chance that reasonably accurate estimates can be created; more tasks in new territory mean more unknowns and lower estimate accuracy. Some people work in areas where new projects are frequently similar to previous ones; some work in areas where that is not the case. I've worked in both.

Paying close attention to the patterns over the years and decades helps to improve the mapping of situation to estimate.

davnicwil an hour ago | parent

Yes, but where reliability is concerned, a continuum is a problem. You can't say with any certainty where any given thing is on the continuum, or even define its bounds.

This is exactly what makes estimates categorically unreliable. The ones that aren't accurate will surprise you and mess things up.

In that sense, it does compress to being binary. To have a whole organisation work on the premise that estimates are reliable, they all have to be, at least within some pretty tight error bound (a small number of inaccuracies can be absorbed, but at some point the premise becomes de facto negated by inaccuracies).
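To make the "at some point the premise is negated" point concrete, here's a toy simulation in Python. Every number in it is invented purely for illustration, not a claim about real projects: most tasks land exactly on estimate, and only a small fraction blow up.

    import random

    random.seed(1)

    # Toy model, invented numbers: 50 tasks, each estimated at 10 days.
    # 90% land exactly on estimate; 10% blow up by a random 2-5x factor.
    def simulate_plan(n_tasks=50, estimate=10, blowup_rate=0.1):
        total_estimated = n_tasks * estimate
        total_actual = 0.0
        for _ in range(n_tasks):
            if random.random() < blowup_rate:
                total_actual += estimate * random.uniform(2, 5)  # the surprises
            else:
                total_actual += estimate                         # on-estimate tasks
        return total_estimated, total_actual

    est, act = simulate_plan()
    print(f"estimated {est} days, actual {act:.0f} days ({(act - est) / est:.0%} over)")

A handful of 2-5x misses among fifty otherwise-perfect estimates is enough to sink the aggregate commitment, which is the sense in which it compresses to binary.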

nradov 2 hours ago | parent | prev | next

Software estimates for projects that don't involve significant technical risk can be made reliable, with sufficient discipline. Not all teams have that level of discipline but I've seen existence proofs of it working well and consistently.

If you can't make firm delivery commitments to customers then they'll find someone who can. Losing customers, or not signing them in the first place, is the most harmful thing to everyone in the organization. Some engineers are oddly reluctant to accept that reality.

threatofrain an hour ago | parent

That assumes you’re working in some kind of agency or consulting environment where you repeatedly produce deliverables, whether similar or entirely distinct. As opposed to a product company that has already shipped and is humming along, which is when most people get hired.

Estimating the delivery of a product whose absence means zero product for the customer is very different. A company that’s already humming along can be slow on a feature and customers wouldn’t even know. A company that’s not yet humming is still trying to persuade customers that it deserves not to die.

nradov an hour ago | parent

Not at all. This can work fine in product development, as long as you limit the level of technical risk. On the other hand, if you're doing something really novel and aren't certain that it can work at all then making estimates is pointless. You have to treat it like a research program with periodic checkpoints to decide whether to continue / stop / pivot.

lucketone an hour ago | parent | prev | next

There is an enterprise methodology that increases the precision of project estimation:

1. Guess the order of magnitude of the task (hours vs days/months/years)

2. Add known planning overhead that is almost an order of magnitude more.

Example: if we guess that a task will take 30 min but it actually takes 60 min, that’s a 100% error (30 min error / 30 min estimate).

But if the methodology is used correctly and we spend 2 h in a planning meeting, the same estimate and the same actual completion time result in only a 20% error, because we’ve increased the known and reliable part of the estimate (30 min error / 2 h 30 min estimate).
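For what it’s worth, here is that arithmetic as a quick Python sketch (same numbers as in the example above):

    # All durations in minutes.
    def error_pct(estimate, actual):
        """Relative estimation error as a percentage of the estimate."""
        return abs(actual - estimate) / estimate * 100

    guess, actual = 30, 60
    print(error_pct(guess, actual))  # 100.0: the bare estimate is 100% off

    overhead = 120  # 2 h planning meeting, added to both estimate and actual
    print(error_pct(guess + overhead, actual + overhead))  # 20.0: "only" 20% error

The absolute error is unchanged at 30 minutes; the gain in "precision" comes entirely from inflating the denominator.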

mixmastamyk 2 hours ago | parent | prev

There’s no binary switch between estimable and not; it depends a lot on the industry and the novelty of the work. Estimates can then be given as ranges, padded as informed by previous work. This brings a project into regularity.
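One sketch, in Python, of what "ranges padded by previous work" could look like in practice; the history numbers and the 30-day base guess are made up for illustration. The idea is to record the ratio of actual to estimated time on past, similar projects, then quote new estimates as a range derived from those ratios.

    # Hypothetical history: (estimated_days, actual_days) from past, similar projects.
    history = [(10, 12), (20, 19), (15, 24), (8, 10)]

    ratios = sorted(actual / est for est, actual in history)
    low, high = ratios[0], ratios[-1]  # crude min/max bounds; percentiles would be less noisy

    def range_estimate(base_days):
        """Turn a single-point guess into a range padded by past performance."""
        return base_days * low, base_days * high

    lo_days, hi_days = range_estimate(30)
    print(f"quote: {lo_days:.0f}-{hi_days:.0f} days")  # 28-48 days with this history

The more the history resembles the new work, the tighter and more trustworthy the range, which is the industry-and-novelty dependence mentioned above.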