bpt3 3 days ago

Yes, people keep linking to the agile manifesto as if it's some sort of amulet protecting software developers from any sort of accountability or responsibility for their work product in a professional setting.

It seems like you acknowledge some amount of estimating is needed and I agree that there is an overemphasis on estimation in many places, but I'll ask you the same thing I asked others, which is:

How do you do either of the following without spending any time at all on estimates?

"Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."

"At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly."

wpietri 3 days ago | parent | next

I addressed the rest elsewhere, but done well, a lack of estimates makes people more accountable. If I am shipping something every week (or as is common for my teams, more often), stakeholders can directly evaluate progress. There's no snowing them with paperwork and claims of progress against semi-fictional documents. They see what they see, they try it out, they watch people use it.

The reality of use is what we in software are ultimately accountable to, and that's what I am suggesting people optimize for. Closing that feedback loop early and often builds stakeholder trust such that they stop asking for elaborate fantasy plans and estimates.

bpt3 2 days ago | parent

You replied to me in like 10 different places, nearly all of which are responses to posts that weren't directed at you, so I'm trying not to fragment this discussion too much.

I will ask this here: If you are shipping code to production on a weekly basis, is that not a schedule, also known as a deadline for delivery?

If you expect to ship code to production every week, how do you know whether there will be something to ship without doing any estimation of the effort and time required?

wpietri 2 days ago | parent | next

It is not a schedule, it's a standard. One I normally try to exceed. We ship when things are ready, which for my current team is ~2-3x/week, but in the past I've had teams that were faster.

We know that there will be things to ship because we try to break the work down into small units of deliverable value, focusing on the highest-value things to do. Large requests are typically composed of a bunch of things of varying value, so we split them until we find pieces that advance the needs of the user, the customer, or the business. One need that's often not intuitive to people is the business need to learn what's truly valuable to some part of the audience. So we'll often ship a small thing for a particular audience and see how they react and what actually gets used. (You can save an amazing amount of time by not building the things nobody would end up using.)
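To make "split until we find pieces of value" concrete, here's a toy sketch. The feature names and value numbers are made up, and in reality the splitting is a conversation, not code; this just shows the shape of the decision:

    from dataclasses import dataclass

    @dataclass
    class Slice:
        name: str
        value: int  # rough user/customer/business value, higher is better

    # A large request ("add user accounts"), pre-split into shippable slices.
    backlog = [
        Slice("login with email/password", value=8),
        Slice("login with OAuth2 (one provider only)", value=5),
        Slice("remember-me cookie", value=2),
    ]

    # Ship the most valuable slice first, then watch how it actually gets used.
    next_up = max(backlog, key=lambda s: s.value)
    print("next to ship:", next_up.name)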

Sometimes we can't figure out how to break something down small enough that we have something to release right away. Or sometimes a chunk of work surprises us and it drags out. We avoid that, because compared to working on smaller things, it's much less comfortable for both developers and business stakeholders. But if it happens, it happens. We try to learn from it for the next time.

Regarding deadlines, we sometimes have them. Broadly, efforts break down into two categories: driven by date or driven by need. For the former, releasing early and often means we adjust scope to hit the date. For the latter, scope is primary and they get stuff when they get it. Either way, because the business side sees steady improvement and has fine-grained control over what gets shipped, they feel in control.

This can be a learning experience for business stakeholders used to waterfall-ish, plan-driven approaches. But I have never once had somebody successfully work this way and want to go back. I have, however, had some product managers get thrown back into document-driven development and tell me how much they missed working like we did.

kragen 2 days ago | parent | prev

No, shipping code to production on a weekly basis is not a deadline. A deadline is a time by which a task must be completed. A task is something like "fix bug 3831" or "allow users to log in with OAuth2". "Ship code" is not, in any useful sense, a task.

Such "timeboxed iterations" can indeed result in "shipping" a null update. Unless you have a time-consuming QA gate to pass, that's not very likely, especially on a team containing several people, but it can happen. You don't know that you will have "something" to ship.

Typically we try to break changes down into shippable tasks that can be done in under a day, so the expected number of tasks completed by a four-programmer team in a week is on the order of 30, or 15 if you're pairing. For this to fall all the way to 0, everybody has to be spending their time on things that could not be thus broken down. It's pretty unlikely to happen by chance. But sometimes a difficult bug or two really is the thing to focus on.
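To put rough numbers on "pretty unlikely to happen by chance" (a toy model I'm making up here, not anything from XP): suppose each of the 20 programmer-days in the week independently has only a 50% chance of finishing at least one sub-day task. The chance of the whole week coming up empty is then about one in a million:

    def p_zero_tasks(programmers=4, days=5, p_complete=0.5):
        # Probability that every single programmer-day finishes nothing,
        # assuming (unrealistically) independent days.
        return (1 - p_complete) ** (programmers * days)

    print(p_zero_tasks())                  # ~9.5e-07
    print(p_zero_tasks(p_complete=0.8))    # ~1.0e-14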

In XP, estimates are used for prioritizing which tasks to work on and which tasks to break down into smaller tasks. The "product owner" is supposed to choose the tasks that have the most user value for the estimated cost. But those estimates aren't commitments in any sense; they're guesses. Sometimes tasks take more time than estimated; other times, they take less. This is the reason for the shift to estimating in "story points": to prevent the estimates from being interpreted as referring to a period of time.
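Concretely, the kind of ordering I mean looks something like this sketch; the stories, values, and point estimates are invented, and the points are relative guesses, not time commitments:

    stories = [
        {"name": "fix bug 3831",         "value": 3, "points": 1},
        {"name": "OAuth2 login",         "value": 8, "points": 5},
        {"name": "export report as CSV", "value": 5, "points": 2},
    ]

    # The product owner picks the best value-for-estimated-cost items first.
    for s in sorted(stories, key=lambda s: s["value"] / s["points"], reverse=True):
        print(s["name"], round(s["value"] / s["points"], 2))
    # -> fix bug 3831 3.0, export report as CSV 2.5, OAuth2 login 1.6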

If someone in your organization is interpreting estimates as commitments, this can have a corrosive effect on the accuracy of those estimates, because estimators respond by padding their estimates, in inconsistent ways. Often this destroys the usefulness of the whole iteration planning process, because the product owner no longer knows which tasks are the easiest ones and thus worth doing even if the benefit is marginal. Organizations can recover from this pathology in a variety of ways, but often they don't. Eliminating estimation is one response that sometimes works.

wpietri 2 days ago | parent | next

Yes, this sounds very familiar to me. I started with estimating in dates, then moved to story points that we used to project dates. Then, as we turned up the release cadence, we eventually dropped estimating altogether, even in points, because it wasn't really helping anything.

That doesn't mean we refuse to solve the problems that estimates are used to solve. E.g., "Can we have something by the big conference?" or "Can we keep our customers happy?" We just solve them in other ways.

And totally agreed about the corrosive effect of treating estimates as commitments. It's such a negative-sum game, but people play it all the time.

bpt3 2 days ago | parent | prev

I already said this to your fellow interlocutor who is also responding to nearly every comment of mine with the same thought process, but I'll say it here as well in different terms:

The product owners, customers, salespeople, supervisors, peers, etc. you interact with as part of the software development process on any project outside of a personal hobby don't care about your semantic games.

If functionality is needed in an application, and they ask you to implement it, and you agree, there is no real-world scenario where they just say "Cool, I'll sit idly by while you work at this until you declare it ready, and then and only then will I let anyone else know about it or take any action on its supposed existence," and repeat that for every piece of functionality you implement in perpetuity.

And if you keep failing to deliver required functionality over time, no one is going to accept arguments like "Oh, sorry, our weekly deliveries to production aren't a deadline, they're a timeboxed iteration" or "Oh, that estimate wasn't a commitment to do anything; we work on our own schedule," and so on.

Yes, the relationship between developers and "other stakeholders" can turn toxic, but in most organizations the developers don't have much power, probably due to repeated attempts to play the games you've laid out above. The way to combat that is to be reliable and professional so your team has the authority to stand their ground on the difficulty of a given task, not effectively refuse to participate in what is a completely reasonable conversation about the relationship between your work and the objectives of the organization.

mpyne 2 days ago | parent | prev

> How do you do either of the following without spending any time at all on estimates?

> "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."

This is 'just' bog-standard continuous delivery (which is where most organizations should be heading). You pull the next to-do from the backlog and start working on it. If it takes more than a day to commit something, you split the task into something smaller.

You don't need to estimate ahead of time at all as long as the task is small enough; all you need is to be able to put the near-term backlog of work into a good priority order by business value.

If the high-value task turned out to be small, that doesn't prevent you from doing more work, because the next unit of work is the same either way (the next item on the backlog).

If the high-value task was too big, that can cause you to pause and reflect on whether you scoped the task properly and whether it is still high-value, but an estimate wouldn't have saved you from that: if you'd truly understood the work ahead of time, you wouldn't be pausing to reflect. An estimate, had you performed it, would not have changed the priority.
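As a toy sketch of that loop (the backlog items, the fake "work" function, and the split are all invented; it's just to show the shape of it):

    from collections import deque

    # Near-term backlog, already in priority order of business value.
    backlog = deque([
        "show order history",
        "email receipt on purchase",
        "bulk-delete old drafts",
    ])

    def days_until_commit(item):
        # Stand-in for actually doing the work; pretend one item drags out.
        return 2.5 if item == "email receipt on purchase" else 0.5

    def split(item):
        # In reality a quick team conversation; hard-coded pieces here.
        return [item + " (happy path only)", item + " (retries and edge cases)"]

    while backlog:
        item = backlog.popleft()            # pull the next most valuable thing
        if days_until_commit(item) > 1.0:   # more than a day without a commit?
            for piece in reversed(split(item)):
                backlog.appendleft(piece)   # re-scope into smaller pieces
        else:
            print("shipped:", item)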

But this Kanban-style process can be performed without estimates at all, and organizations that work to set up an appropriate context for it will find that they get faster delivery of value than trying to shoehorn delivery into prior estimates. But there are people who work faster with the fire of a deadline under their tail, so I can't say it's universally better.

> "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly."

If it's hard to do the work as a team, you should be able to tell it was hard retrospectively, with or without having done an estimate ahead of time.

You might say that failing to hit your prior schedule estimates would be a good topic to discuss at a retrospective session, but I would tell you that this is a self-licking ice cream cone. If your customers are happy despite missing internal schedule estimates, you're in a good spot; if your customers are unhappy even though you're "hitting schedule projections," you're in a bad spot.

There are plenty of more productive discussions to be had when the team reflects on how things are going, and they typically relate to identifying and addressing obstacles to the continuous flow of value from the product team to the end users.