khannn 6 hours ago
I had a job that required estimation on bug tickets. It's honestly amazing how they didn't realize that I'd take my actual estimate, multiply it by 4, then use the extra time to work on my other bug tickets that the 4x multiplier wasn't good enough for.
mewpmewp2 6 hours ago | parent
That's just you hedging, and they don't really need to know that. As long as you are hedging accurately in the big picture, that's all that matters. They need estimates to decide what should be done and what shouldn't.

To be precise, you could tell them there's a 25% chance it takes 2 hours or less, a 50% chance it takes 4 hours or less, a 75% chance it takes 8 hours or less, and a 99% chance it takes 16 hours or less. But communication-wise you'll win out if you intuitively round an item like that to something like 10 hours. Intuitively, 10 hours feels safe given those probabilities (which are themselves intuitive, experience-based estimates). So you'd probably say 10 hours, unless something really unexpected (the 1% case) happens. In reality, with the probabilities above, the actual average would be around 5-6 hours, with 1% of tasks potentially failing entirely, but your intuitive probability estimates could also be off, so you likely still want to say 10 hours.

That's also why story points are mostly used instead: if you say hours, people will naturally treat it as a fixed commitment. Hours would be fine if everyone understood that they imply a statistical average of the time, plus a reasonable buffer, over a large number of similar tasks.
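The 5-6 hour average can be sanity-checked with a back-of-the-envelope calculation. This sketch assumes time is uniformly distributed within each percentile bracket, which is a simplification of my own, not how the numbers above were derived:

```python
# Rough expected-time calculation from the percentile estimates above:
# 25% chance <= 2h, 50% <= 4h, 75% <= 8h, 99% <= 16h.
# Assumption: within each bracket, the task time is uniformly
# distributed, so each bracket contributes (probability mass * midpoint).

def expected_hours():
    # (probability mass, bracket midpoint in hours)
    brackets = [
        (0.25, 1.0),   # 0-2h:  first 25% of outcomes
        (0.25, 3.0),   # 2-4h:  next 25%
        (0.25, 6.0),   # 4-8h:  next 25%
        (0.24, 12.0),  # 8-16h: next 24%
    ]
    # The remaining 1% tail beyond 16h (the "really unexpected" cases)
    # is ignored here.
    return sum(mass * midpoint for mass, midpoint in brackets)

print(round(expected_hours(), 2))  # 5.38 -- inside the quoted 5-6h range
```

Note how the expected value (~5.4h) sits well below the "safe" 10-hour number you'd actually communicate; the gap is exactly the hedge.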
georgemcbay 2 hours ago | parent
Are you sure they didn't realize it? Virtually everywhere I've ever worked has had an unwritten but widely understood policy of placing a multiplier on predicted effort, for both new code/features and bug fixing, to account for Hofstadter's law.