throwaway85825 19 hours ago:
A lot of what you saw removed was just test sensors. The same happens in every engineering program, but no one else pretends it's somehow innovation. It's like removing test code when you ship a binary.
jongjong 17 hours ago | parent:
I don't agree that it's not innovation. With hindsight, removing unnecessary complexity always looks stupidly simple, and yet it's extremely rare to see a team that actually gets it right on the first go. Getting the design right the first time requires vision, foresight, and a deep understanding of all the relevant parts and priorities. Very few people can do it without hindsight.

I'm an experienced software engineer and team lead who has worked on a range of big, complex projects over almost two decades, and my experience with every single project (for which I wasn't the team lead) was that it was way over-engineered. At least 95% of the time was spent fixing unnecessary intermediate technical issues which the team had created for itself.

Even the sensor argument... Do you need so many sensors, monitoring systems, and fallback mechanisms if every part of the system was designed to work within the simplest necessary constraints to begin with? In my experience, the answer is almost always no. Once you accept that your design is flawed and needs runtime monitoring and fallbacks, any patch you add on top to correct the flaws provides tiny diminishing returns, if any. Often, the additional complexity actually makes it more likely that your core mechanisms will fail. The safety mechanisms only end up making themselves useful by increasing the likelihood of failure in the first place.

My view on fallback mechanisms is that, in the event of failure of the main system, they shouldn't be so complex as to try to keep the system running as if nothing had happened; they should just provide graceful failure, and sometimes they aren't needed at all. Just an error log is enough.
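To make the last point concrete, here is a minimal sketch (the client and function names are hypothetical, invented for illustration): instead of a fallback that tries to keep the system running as if nothing happened, the failure path just logs the error and lets the caller degrade gracefully.

```python
import logging

logger = logging.getLogger("rates")

def fetch_exchange_rate(client, pair):
    """Return the live rate for `pair`, or None on failure.

    Instead of a complex fallback (cached rates, secondary
    providers, retry queues) that masks the failure and adds
    its own failure modes, we log the error and return None
    so the caller can fail gracefully.
    """
    try:
        return client.get_rate(pair)
    except Exception:
        logger.exception("rate lookup failed for %s", pair)
        return None  # caller shows "rate unavailable" instead
```

The caller then decides what "graceful" means in context (show a placeholder, skip the feature), rather than the fallback layer pretending nothing went wrong.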