| ▲ | notjustanymike 5 hours ago |
| After owning a product, I've developed a lot of sympathy for the people outside of engineering who have to put up with us. Engineers love to push back on estimates, believing that "when it's done" is somehow acceptable for the rest of the business to function. In a functioning org, there are a lot of professionals depending on correct estimation to do their job. For us, an accurate delivery date on a 6 month project was mandatory. CX needed it so they could start onboarding high priority customers. Marketing needed it so they could plan advertising collateral and make promises at conventions. Product needed it to understand what the Q3 roadmap should contain. Sales needed it to close deals. I was fortunate to work in a business where I respected the heads of these departments, which, believe it or not, should be the norm. The challenge wasn't estimation - it's quite doable to break a large project down into a series of sprints (basically a sprint / waterfall hybrid). Delays usually came from unexpected sources, like reacting to a must-have interruption or critical bugs. Those you cannot estimate for, but you can collaborate on a solution. Trim features, push the date, bring in extra help, or crunch. Whatever the decision, making sure to work with the other departments as collaborators was always beneficial. |
|
| ▲ | davnicwil 2 hours ago | parent | next [-] |
| With respect, I think this approach is actually harmful to everyone in the org because you're trying to twist reality to fit a premise that is just impossible to make true: that estimates of how long it takes to build software are reliable. The reluctance to accept the reality that it cannot be made true achieves nothing positive for anybody. Rather, it results in energy being lost to heat that could otherwise be used for productive work. This isn't about respect between functions, and it isn't about what ought to be professionally acceptable in the hypothetical. It's about accepting and working downstream of a situation based in objective truth. Believe me, I wish it were true that software estimates could be made reliable. Everyone does. It would make everything involved in making and selling software easier. But, unfortunately, it's not easy. That's why so few organisations succeed at it. I don't present easy answers to the tensions that arise from working downstream of this reality. Yes, it's easier to make deals contingent on firm delivery dates when selling. Yes, it's easier to plan marketing around concrete launch dates. Yes, it's easier to plan ahead when you have reliable timeframes for how long things take. But, again, unfortunately, that is simply not the reality we live in. It is not easy. Flexibility, forward planning, working to where the puck is going to be, and accepting redundancy, lost work, or whatever if it never arrives there are all part of it. That, I think, is what people in different functions are best served rallying and collaborating around: one team, who build, market and sell software with the understanding that reliable estimates are not possible. There simply is no other way. |
| |
| ▲ | RaftPeople 2 hours ago | parent | next [-] | | > you're trying to twist reality to fit a premise that is just impossible to make true: that estimates of how long it takes to build software are reliable. It's not binary, it's a continuum. With experience, it's possible to identify whether the new project or set of tasks is very similar to work done previously (possibly many times) or if it has substantial new territory with many unknowns. The more similarity to past work, the higher the chance that reasonably accurate estimates can be created. More tasks in new territory increases unknowns and decreases estimate accuracy. Some people work in areas where new projects frequently are similar to previous projects, some people work in areas where that is not the case. I've worked in both. Paying close attention to the patterns over the years and decades helps to improve the mapping of situation to estimate. | | |
| ▲ | davnicwil an hour ago | parent [-] | | Yes, but where reliability is concerned, a continuum is a problem. You can't say with any certainty where any given thing is on the continuum, or even define its bounds. This is exactly what makes estimates categorically unreliable. The ones that aren't accurate will surprise you and mess things up. In that sense, it does compress to being binary. To have a whole organisation work on the premise that estimates are reliable, they all have to be, at least within some pretty tight error bound (a small number of inaccuracies can be absorbed, but at some point the premise becomes de facto negated by inaccuracies). |
| |
| ▲ | nradov 2 hours ago | parent | prev | next [-] | | Software estimates for projects that don't involve significant technical risk can be made reliable, with sufficient discipline. Not all teams have that level of discipline but I've seen existence proofs of it working well and consistently. If you can't make firm delivery commitments to customers then they'll find someone who can. Losing customers, or not signing them in the first place, is the most harmful thing to everyone in the organization. Some engineers are oddly reluctant to accept that reality. | | |
| ▲ | threatofrain an hour ago | parent [-] | | That assumes you’re working in some kind of agency or consulting environment where you repeatedly produce similar or even distinct things. As opposed to a product company that has already produced and is humming along, which is when most people get hired. Estimating the delivery of a product whose absence means zero product for the customer is very different. A company that’s already humming along can be slow on a feature and customers wouldn’t even know. A company that’s not already humming is still trying to persuade customers that they deserve to not die. | | |
| ▲ | nradov an hour ago | parent [-] | | Not at all. This can work fine in product development, as long as you limit the level of technical risk. On the other hand, if you're doing something really novel and aren't certain that it can work at all then making estimates is pointless. You have to treat it like a research program with periodic checkpoints to decide whether to continue / stop / pivot. |
|
| |
| ▲ | lucketone an hour ago | parent | prev | next [-] | | There is an enterprise methodology that increases the precision of project estimation. 1. Guess the order of magnitude of the task (hours vs days/months/years). 2. Add known planning overhead that is almost an order of magnitude more. Example: if we guess that a task will take 30min but it actually took 60min, that's a 100% error (30min error / 30min estimate). But if the methodology is used correctly and we spend 2h in a planning meeting, the same estimate and same actual completion time result in only a 20% error, because we increased the known and reliable part of the estimate (30min error / 2h30min estimate). | |
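The relative-error arithmetic in that (tongue-in-cheek) methodology can be checked directly; a minimal sketch using only the minute values from the comment:

```python
# Check the relative-error arithmetic from the comment above.

def relative_error(estimate_min, actual_min):
    """Relative error of an estimate, as a fraction of the estimate."""
    return abs(actual_min - estimate_min) / estimate_min

# Bare task estimate: 30 min estimated, 60 min actual -> 100% error.
bare = relative_error(30, 60)

# Add a 120-minute planning meeting to both estimate and actual:
# 150 min estimated, 180 min actual -> 20% error.
padded = relative_error(30 + 120, 60 + 120)

print(bare, padded)  # 1.0 0.2
```

The joke lands because the absolute error (30 minutes) is unchanged; only the denominator grew.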
| ▲ | mixmastamyk 2 hours ago | parent | prev [-] | | There’s no binary switch between estimable and not. Depends a lot on industry and novelty of work. Then estimates will be given in ranges and padded as needed by previous work. This gets a project into regularity. |
|
|
| ▲ | doix 5 hours ago | parent | prev | next [-] |
| I used to work in the semiconductor industry writing internal tools for the company. Hardware very rarely missed a deadline and software was run the same way. Things rarely went to plan, but as soon as any blip occurred, there'd be plans to trim scope, crunch more, or push the date with many months of notice. Then I joined my first web SaaS startup and I think we didn't hit a single deadline in the entire time I worked there. Everyone thought that was fine and normal. Interestingly enough, I'm not convinced that's why we failed, but it was a huge culture shock. |
| |
| ▲ | thebruce87m 4 hours ago | parent | next [-] | | > I used to work in the semiconductor industry writing internal tools for the company. Hardware very rarely missed a deadline and software was run the same way. Former Test Engineer here. It was always fun when everyone else’s deadline slipped but ours stayed the same. Had to still ship on the same date even if I didn’t have silicon until much later than originally planned. | | | |
| ▲ | ozim 3 hours ago | parent | prev | next [-] | | What was the thing you were estimating? R&D? I think you were estimating time to build things that were out of R&D, and you had specifications that were actual specifications you were building up to. In SaaS my experience is: someone makes up an idea without any clue how the existing software works or is laid out, and has no specifications besides a vague, unorganized bunch of sentences. The software development team basically starts R&D to find out the specifications and what is possible - but is expected to deliver a final product. | | |
| ▲ | ferguess_k 2 hours ago | parent [-] | | I had the same experience when doing an exercise implementing `mmap` for `xv6` -- that was the last lab. There was no specification except for a test file. Passing that test file is relatively easy and I could game it. I consulted the manpage of `mmap` but it is pretty far from a specification, so eventually I had to write a lot of tests in Linux to figure out what it can do and what it can't do (what happens when I over-mmap? what happens when I write back past EOF? etc.), and write the same tests for `xv6` so that I could test my implementation. Not sure about hardware, but it is really hard to get a clear specification for software. |
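The "discover the spec by testing" approach can be sketched like this; a minimal probe of the over-mmap case using Python's `mmap` module on Linux (the file size and map length are made up, and the rejection shown is what CPython's `mmap` enforces, not necessarily what raw `mmap(2)` does):

```python
# A sketch of "discover the spec by testing": probe one mmap behavior
# empirically rather than trusting the manpage.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 100)  # 100-byte backing file

    # Over-mmap: asking for more bytes than the file holds is rejected.
    try:
        mmap.mmap(fd, 200)
        over_map_allowed = True
    except ValueError:
        over_map_allowed = False

    # A map within the file size works, and reads see the file contents.
    m = mmap.mmap(fd, 100)
    head = m[:5]
    m.close()
finally:
    os.close(fd)
    os.unlink(path)

print(over_map_allowed, head)  # False b'xxxxx'
```

Each such probe then becomes a test case to port to the `xv6` side, which is exactly the workflow the comment describes.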
| |
| ▲ | xtreme 2 hours ago | parent | prev [-] | | This aligns with my experience in the semi industry. SWEs tend to see trimming scope as moving the goalposts and do not consider it an option. Providing advance notice is mostly about client management, and clients are often surprisingly receptive to partial solutions. |
|
|
| ▲ | CuriouslyC 4 hours ago | parent | prev | next [-] |
| This is true, but the problem is that engineers are being asked to over-extrapolate given the evidence, and expected to own that extrapolation despite the paucity of evidence to make a good estimate. I *HATE* estimating roadmaps, because it feels unfair. I'm happy to estimate a sprint. |
| |
| ▲ | fallinditch 3 hours ago | parent | next [-] | | Yes. I took over the project management of a job where the previous project manager had spent a year planning it out, but development had not yet started. The client was furious, understandably. I abandoned the plans from the previous PM and discussed the job with the developer, who ballpark-estimated that the work would take 2 months. After a quick analysis I adjusted this to 14 weeks. But the account manager thought this sounded too long and insisted that we plug everything into a Gantt chart, define the shit out of everything, map the dependencies, etc, which showed that the development would only take 6 weeks. The project ended up taking 14 weeks. | |
| ▲ | state_less 2 hours ago | parent | prev | next [-] | | In another life, I would do things like measure the cost in developer time of bugs making it into developer repos vs. the cost in time of running tests in CI to catch such bugs, i.e. evidence-based decision making. It was mostly ignored, and at first I was surprised: a multi-million-dollar organization of people making negative EV plays, which I chalked up to the political pressures being more important than the wastage. More on that later. As far as estimates go, I've also struggled with the industry's cult(ural) rituals. I tried to put forward a Gaussian-based approach that took into account not only the estimate of time but the expected uncertainty, which is still probably off the mark, but at least attempts to measure some of the variance. But again, the politics and the rigidity of the clergy that has built up around software development blocked it. On the bright side, all this has helped me in my own development and when I think about software development and estimating projects. I know that outcomes become more chaotic as the number of pieces and steps in a project compounds (i.e. the project's normal curve widens). You may not even get the project at all as defined at the outset, so my normal-distribution approach is still not quite the right tool. I think this kind of thinking can be helpful when working solo or in a small group exposed to market forces. But for solo and small groups, the challenge isn't so much the estimates; it's how you're going to fight a battalion of mercenaries hired by big VC money and Big Tech. They can often afford to be inefficient and dump in the market, because their strategy is built around market control. These aren't practices small players can afford, so you need to get creative and try to avoid these market-participant kill boxes. And this is why, coming back to my earlier point, inefficient practices and politics often play a big role. They're trying to marshal a large number of troops into position and can afford to lose a few battles in order to win the war. The big money plays by a different set of rules, so don't worry if they're doing it wrong. Just recognize you're in the army, soldier! | | |
| ▲ | nradov an hour ago | parent [-] | | It's sad how software organizations refuse to learn from history. The US Navy was using PERT to manage huge, risky projects back in the 1950s with pretty good results. It can give you a Gaussian distribution of project completion dates based on best / middle / worst case estimates for individual tasks with dependencies. https://en.wikipedia.org/wiki/Program_evaluation_and_review_... |
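For reference, the classic PERT three-point approximation mentioned above uses a per-task mean of (O + 4M + P) / 6 and standard deviation of (P − O) / 6, with means and variances summing along a chain of independent tasks. A sketch with made-up task numbers:

```python
# PERT three-point estimate: per-task mean (O + 4M + P) / 6 and
# variance ((P - O) / 6) ** 2; means and variances add along a chain
# of independent tasks. Task numbers below are hypothetical (in days).
import math

def pert(optimistic, likely, pessimistic):
    mean = (optimistic + 4 * likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return mean, variance

tasks = [(2, 4, 10), (1, 2, 5), (5, 8, 20)]  # (O, M, P) per task

total_mean = sum(pert(*t)[0] for t in tasks)
total_sd = math.sqrt(sum(pert(*t)[1] for t in tasks))

# Quoting roughly one standard deviation past the mean gives an ~84%
# confidence completion date under the normal approximation.
print(round(total_mean, 2), round(total_sd, 2))  # 16.5 2.91
```

The point of the approach is that the output is a distribution of dates rather than a single number, which is exactly the uncertainty-aware estimate the parent comment was pushing for.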
| |
| ▲ | SpicyLemonZest 3 hours ago | parent | prev | next [-] | | It's definitely unfair in a sense. But companies that make over-extrapolated roadmap estimates from not enough evidence systematically outcompete those who don't, because their customers greatly prefer companies who give a date and then try their best to hit it over companies who say they don't know when the product will be ready for X and you'll just have to wait and see. | | |
| ▲ | CuriouslyC 2 hours ago | parent [-] | | I get that, and I don't mind giving guidance on roadmaps, it's just the ownership when stuff outside my control goes wrong that bothers me. I shouldn't be responsible for product going in circles on little details with the customer causing req churn, yet I have been held accountable for missing estimates under that exact circumstance. |
| |
| ▲ | plagiarist 4 hours ago | parent | prev [-] | | You estimate your best and then during the project the people who keep changing the spec every two weeks ask why the deadline is slipping. |
|
|
| ▲ | bluGill 4 hours ago | parent | prev | next [-] |
| > Trim features, push date, bring in extra help, or crunch. There are problems with all of these. The company knows they can sell X units of the product for $Y (often X is a bad guess, but sometimes it has a statistical range - I'll ignore this for space reasons but it is important!). X times Y equals gross revenue. If the total cost to build the feature is too high, the whole thing shouldn't be done. If you trim features, that affects either the number you can sell or the price you can sell at (sometimes both). If you push the date, that also affects things - some will buy from a competitor (if possible - and the later date makes it more likely a competitor releases with that feature). Bringing in extra help means the total cost goes up. And worse, if you bring them in too late, that will slow down the delivery. Crunch is easiest - but that burns out your people and so is often a bad answer long term. This is why COMPANIES NEED ACCURATE ESTIMATES. They are not optional to running a company. That they are impossible does not change the need. We pretend they are possible because you cannot run a company without them - and mostly we get by. However they are a fundamental requirement. |
| |
| ▲ | moregrist 3 hours ago | parent | next [-] | | > This is why COMPANIES NEED ACCURATE ESTIMATES. They are not optional to running a company. Sure, but even accurate estimates are only accurate as long as the assumptions hold. Market conditions change, emergency requests happen, people leave, vendor promises turn out to be less than accurate. And most estimates for non-routine work involve some amount of risk (R&D risk, customer risk, etc.). So pounding the table and insisting on ACCURATE ESTIMATES without a realistic backup plan isn’t good business, it’s just pushing the blame onto the SWE team when (not if) something goes south. | |
| ▲ | praptak 3 hours ago | parent | prev | next [-] | | If your business model needs the impossible then it's a bad business model. If your margins are too thin to absorb the schedule uncertainty then don't produce software. Alternatively treat it like a bet and accept it may not pay off, just like any other business where uncertainty is the norm (movies, books, music). | |
| ▲ | Phlebsy 3 hours ago | parent | prev | next [-] | | I would settle for accurate estimates being a requirement if sticking to the estimate and allocations is as well. Every project I've been a part of that has run over on timeline or budget had somebody needling away at resources or scope in some way. If you need accuracy to be viable, then the organization cannot undermine the things that make it possible to stay on track. | | |
| ▲ | kakacik 2 hours ago | parent [-] | | Also, if you need accuracy, stay away from questionable vendors of 3rd-party products as much as possible, since they are chaos generators on any project involved. In my work we have our core banking system, designed in the 80s on top of an Oracle DB, so everything is just boxes around it, with corresponding flexibility towards modern development methodologies. The complexity of just making a trimmed copy of the production servers for, say, a user acceptance test phase is quite something, connecting and syncing to hundreds of internal systems. Needless to say, estimates vs reality have been swinging wildly in all directions since forever. The processes, red tape, regulations and politics are consistently extreme, so from a software dev perspective it's a very lengthy process, while the actual code changes take an absolutely tiny fraction of the time in the whole project. |
| |
| ▲ | nightski 4 hours ago | parent | prev | next [-] | | Companies need accurate estimates like I need accurate stock market forecasts. | |
| ▲ | jungturk 3 hours ago | parent | prev [-] | | They don't NEED them, but better project estimates can reduce the error bars on other dependent estimates (e.g. estimated sales, estimated ship dates, estimated staffing requirements, etc...), and that might be useful to a business (or not). |
|
|
| ▲ | nine_k 3 hours ago | parent | prev | next [-] |
| Yes, the key part of estimation is not saying how large the (time) box must be to contain the project, but rather how much of the project we can pack into a box no larger than what the business can bear. Hence the separation into must-haves, highly desirable, and nice-to-haves. Hence the need for modularity and extensibility: if you don't get to build everything in one go, and can't always even predict which parts will be left outside the scope, you want more of a lego-like structure. BTW maybe if we finally shook off the polite lie of planning how much work a project could be, and instead started to think in terms of possible deliverables within different time frames, the conversation would become saner. |
|
| ▲ | lubujackson 2 hours ago | parent | prev | next [-] |
| I agree whole-heartedly with the source article as well as this comment. The point is that the work of estimation is most of the work. We can have better estimates if we break things down to bite-sized chunks, but "when will this be done" is largely impossible to answer up front and comes down to many external factors. Laypeople understand this implicitly in other contexts. My favorite metaphor is building something like a new shopping mall. If you ask for an estimate you first need to architect the entire thing. This is equivalent to breaking down the task into sprints. In most companies the entire architecture phase is given very little value, which is insane to me. Once we have our blueprints, we have other stakeholders, which is where things really go off the rails. For the mall, maybe there is an issue with a falcon that lives on the land and now we need to move the building site, or the fixtures we ordered will take 3 extra months to be delivered. This is the political part of estimating software and depends a lot on the org itself. Then, finally, building. This is the easy part if we cleared the precursor work. Things can still go wrong: oops we hit bedrock, oops a fire broke out, oops the design wasn't quite right, oops we actually want to change the plan. But yes, estimates are important to businesses, and businesses have a responsibility to compartmentalize the difference. Get me to a fully ticketed and approved epic and most engineers can give you a pretty good estimate. That is what businesses want, but they rarely consider the necessary precursor work when they Slack you "how long to build a mall?" |
| |
| ▲ | mikepurvis an hour ago | parent [-] | | I've also seen it argued that real-world estimates for things like construction projects are so good because 99% of it is do-overs from similar projects in the past; everyone knows what it takes to pour a column or frame a floor or hang a beam. Whereas with software, most of what was done previously is now an import statement, so up to 80-100% of the project is the novel stuff. Skilled leaders/teams know to direct upfront effort toward exploring the least understood parts of the plan to help reduce downstream risk, but to really benefit from that instinct the project plan has to regularly incorporate its findings. | | |
| ▲ | nradov 42 minutes ago | parent [-] | | Real world estimates for construction projects are often way off. Especially for remodeling or renovation of older buildings, where the most serious problems can remain hidden until you get into the demolition phase. |
|
|
|
| ▲ | chrisfosterelli 4 hours ago | parent | prev | next [-] |
| I agree. Software engineering is basically the only industry that pretends this is professionally acceptable. Imagine if government staff asked when a bridge would be done or how much it would cost and the lead engineer just said "it's impossible to estimate accurately, so we won't. It's a big project tho". Estimating in software is very hard, but that's not a good reason to give up on getting better at it |
| |
| ▲ | raincole 4 hours ago | parent | next [-] | | Government contractor's estimation is based on what number is politically acceptable, not how much the project would realistically take. 90% of public projects were overbudget [0]. But you're pretty spot on, as 'professionally acceptable' indeed means politically acceptable most of the time. Being honest and admitting one's limit is often unacceptable. [0]: https://www.strategy-business.com/article/Why-do-large-proje... | | |
| ▲ | chrisfosterelli 4 hours ago | parent [-] | | Yes, my claim is absolutely not that they're good at it haha. Estimation is a real problem in a lot of industries, including ours, and I think that's probably common ground here -- I suppose my differing position is that I think the solution is to get better at it, not to refuse to do it. I've been on projects where I've seen the budget explode and projects where I've seen the budget kept tight and on track. The latter is very hard and requires effort from ALL sides to work, but it's almost always achievable. I actually empathize a little bit more with megaprojects because generally the larger the budget the harder it will be to keep on track in my experience. Most estimates we're asked to give in our day jobs are not even multi-million dollar estimates. Also I'm using budget and estimate interchangeably but these are of course different things -- that's one of my nitpicks is that we often treat these as the same thing when we talk about estimating being hard. A lot of individual estimates can be very wrong without affecting the ultimate budget. |
| |
| ▲ | AlotOfReading 4 hours ago | parent | prev | next [-] | | Contractor estimates are just as prone to schedule slippage and cost overruns as anything estimated by software engineers. I doubt anyone's ever argued that giving wrong estimates is hard or impossible. Only that approximately correct ones are, and other industries seem to struggle with that just as much as software. Authors don't finish books by deadlines, so fans are left in the cold. Tunnels take twice as long and cost twice as much. Renovations take a year instead of 3 months and empty your bank account. Saying "I don't know" is arguably more honest, even if it's not useful for budgets or planning. | | |
| ▲ | chrisfosterelli 4 hours ago | parent [-] | | > Contractor estimates are just as prone to schedule slippage and cost overruns as anything estimated by software engineers I completely agree. That's why I chose that example: They're also awful at it, especially these days in North America in particular. But any contractor that tried to put in a bid claiming "it'll be done when it's done and cost what it costs" would not be considered professionally competent enough to award a multi-million dollar budget. |
| |
| ▲ | numitus 32 minutes ago | parent | prev | next [-] | | Incorrect analogy. Bridge construction is a clearly algorithmic process. All bridges resemble each other, and from an engineering perspective, designing one is not rocket science. Construction itself is a set of well-studied steps that can be easily calculated. If I were to write my operating system 100 times, I could give an estimate accurate to within 10%, but every task I’ve ever done in life is unique, and I have nothing to compare it to except intuitive judgments. Returning to bridges: there is 1% of projects that are unique, and their design can take decades, while construction might not even begin | |
| ▲ | adrianN 4 hours ago | parent | prev | next [-] | | When the government asks how much project X costs they find ten companies that promise the moon and then deliver a wheel of cheese for five times the estimated cost. | |
| ▲ | masterj 2 hours ago | parent | prev | next [-] | | They miss estimates all the time though? It’s an observable fact There is a bridge in my town that is finally nearing completion, hopefully, this year. It was estimated to be completed 2 years ago. This changes when it’s a project that has fewer unknowns, where they’ve built the same thing several times before. The same is true in software. | |
| ▲ | piyuv 4 hours ago | parent | prev | next [-] | | Not a good analogy. Once you build a bridge, it’s done. Software nowadays is never “done”, and requirements constantly change. It’s more akin to building a rope bridge and trying to upgrade it to accommodate cars while it’s in active use. | | |
| ▲ | PxldLtd 4 hours ago | parent | next [-] | | Sounds like you don't have a good process for handling scope changes. I should know, the place I'm at now it's lacklustre and it makes the job a lot harder. Usually management backs off if they have a good understanding of the impact a change will make. I can only give a good estimate of impact if I have a solid grip on the current scope of work and deadlines. I've found management to be super reasonable when they actually understand the cost of a feature change. When there's clear communication and management decides a change is important to the product then great, we have a clear timeline of scope drift and we can review if our team's ever pulled up on delays. | | |
| ▲ | FINDarkside 3 hours ago | parent [-] | | I feel like some people in this thread are talking about estimates and some are talking about deadlines. Of course we should be able to give estimates. No, they're probably not very accurate. In many industries it makes sense to do whatever is necessary to meet the estimate which has become a deadline. While we could do that in software, there often aren't any ramifications for going a bit over time and producing much more value. Obviously this doesn't apply to all software. Like gamedev, especially before digital distribution. I think it's obvious that all software teams do some kind of estimates, because it's needed for prioritization. Giving out exact dates as estimates/deadlines is often completely unnecessary. | |
| ▲ | icedchai an hour ago | parent [-] | | The real problem is software teams being given deadlines without being consulted about any sort of estimates. "This needs to be done in 60 days." Then we begin trading features for time and the customer winds up getting a barely functioning MVP, just so we can say we made the deadline and fix all the problems in phase 2. | | |
| ▲ | nradov 39 minutes ago | parent [-] | | OK, so that sounds fine. Software delivers value to customers when it's better than nothing some of the time. Even if it barely functions then they're probably happier with having it than not, and may be willing to fund improvements. |
|
|
| |
| ▲ | chrisfosterelli 4 hours ago | parent | prev [-] | | When customers ask when feature X will be ready, they sure have an idea of done in their mind. | | |
| ▲ | nradov 36 minutes ago | parent [-] | | Sure, so extract the customer's definition of done as part of requirements analysis process and write it down. Get them to agree in writing, including the explicit exclusion of other things that aren't part of their idea of done. |
|
| |
| ▲ | bsoles 3 hours ago | parent | prev [-] | | Ever heard of Big Dig in Boston, for example? Or the Joint Strike Fighter? Estimations in government contracts are as ridiculous as in software. They just pretend to be able to estimate when things will be done, when, in fact, the contractors are as clueless. Not being able to say "it is impossible to estimate", does not mean your estimate will be correct. That estimation is usually a lie. |
|
|
| ▲ | subprotocol 2 hours ago | parent | prev | next [-] |
| I think the hardest part of estimation often gets glossed over: genuine technical unknowns. Not "we didn’t think hard enough," but cases where the work itself is exploratory. |
|
| ▲ | analog31 4 hours ago | parent | prev | next [-] |
| >>>> In a functioning org, there are a lot of professionals depending on correct estimation to do their job. A side effect is, no there aren't. Allow me to explain that catty remark. The experienced pros have figured out how to arrange their affairs so that delivery of software doesn't matter, i.e., is someone else's problem. The software either arrives or it doesn't. For instance, my job is in technology development for "hardware" that depends on elaborate support software. I make sure that the hardware I'm working on has an API that I can code against to run the tests that I need. My department has gone all-in on vibe coding. Customers aren't waiting because the mantra of all users is: "Never change anything," and they can demand continued support of the old software. New hardware with old software counts as "revenue" so the managers are happy. |
|
| ▲ | nradov an hour ago | parent | prev | next [-] |
| The most effective approach that I've found to prevent delays in large scale software projects is to carve out a dedicated team to deal with critical bugs, L3 support tickets, and urgent minor enhancements. Don't count them in capacity planning. They serve to insulate the feature teams from distractions. Rotate those assignments for each project so that everyone takes a turn. |
|
| ▲ | kubb 5 hours ago | parent | prev | next [-] |
| You're saying it would be convenient for you to know the future. It would also be convenient for me. That said, if you haven't done very similar work in the past, it's very unlikely you'll know exactly how much time it will take. In practice developers have to "handle" the people requesting hard deadlines. Introduce padding into the estimate to account for the unexpected. Be very specific about milestones to avoid expectation of the impossible. Communicate missed milestones proactively, and there will be missed milestones. You're given a date to feel safe. And sometimes you'll cause unnecessary crunch in order for a deadline you fought for to be met. Other times, you'll need to negotiate what to drop. But an accurate breakdown of a project amounts to executing that project. Everything else is approximation and prone to error. |
|
| ▲ | dorn64 4 hours ago | parent | prev [-] |
| It all starts with sales and marketing cramming every possible feature and half-rumour they heard about competitors' features into a 6 month project deadline. That's a long time, 6 months, no? How hard can it be? Respectfully, it'll be done when it's done. |
| |
| ▲ | replygirl 4 hours ago | parent [-] | | we are the ones qualified to say what needs to be cut to provide reasonable certainty for the deadline. it is not the job of non-technical stakeholders to mitigate risk in technical projects |
|