ekidd · 16 hours ago
In my professional life, somewhere over 99% of the time, the code hitting the error has been one of:

1. Production code running somewhere on a cluster.
2. Released code running on an end-user's machine.
3. Released production code running on an end-user's cluster.

And errors happen at weird times, like 3am on a Sunday morning on someone else's cluster. So I'd just as soon not have to wake up, figure out the paperwork to get access to some other company's cluster, and then figure out how to attach a debugger. Especially when the error is some non-reproducible corner case in a distributed algorithm that happens once every few months, and the failing process is long gone. Just no.

It is so much easier to ask the user to turn up logging and send me the logs. Nine times out of ten, that's enough to fix the problem. The tenth time, I add more logging and ask the user to keep an eye open.
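(A minimal sketch of the "turn up logging" workflow in Rust, using the real log and env_logger crates: the user sets RUST_LOG=debug and reruns, and the extra detail shows up with no debugger or cluster access. The crate choice and the reconcile_shard function are illustrative assumptions, not anything from the comment above.)

    // Cargo.toml (assumed): log = "0.4", env_logger = "0.11"
    use log::{debug, info};

    // Hypothetical piece of the distributed algorithm being diagnosed.
    fn reconcile_shard(shard_id: u64) {
        info!("reconciling shard {}", shard_id);
        // Silent by default; appears once the user raises the level to debug.
        debug!("shard {}: picked leader, starting catch-up", shard_id);
    }

    fn main() {
        // Log level is read from the RUST_LOG environment variable,
        // e.g. RUST_LOG=debug ./myservice
        env_logger::init();
        reconcile_shard(42);
    }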
TZubiri · 12 hours ago
I think I get the idea: gdb is too powerful. In contexts where the operator is distinct from the manufacturer, the debug/logging tool needs to be weaker and not ad hoc, so that it can be audited and doesn't exfiltrate user data.
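(A minimal sketch of that constraint in Rust; DiagEvent and its fields are hypothetical names. The idea is that diagnostics pass through a fixed, reviewable schema rather than arbitrary process memory, so an auditor can see exactly which fields can ever leave the machine.)

    // Fixed, auditable diagnostic schema: no user payloads, paths, or request
    // bodies can be emitted because the struct simply has no field for them.
    struct DiagEvent<'a> {
        component: &'a str,
        error_code: u32,
        retry_count: u32,
    }

    fn emit(event: &DiagEvent) {
        // In a real system this would feed the operator's log pipeline;
        // stderr keeps the sketch self-contained.
        eprintln!(
            "diag component={} error_code={} retry_count={}",
            event.component, event.error_code, event.retry_count
        );
    }

    fn main() {
        emit(&DiagEvent { component: "replicator", error_code: 17, retry_count: 3 });
    }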