addaon a day ago
> All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.

No, it means more than that. There's a cross-product here. On one axis you have "resources needed," which is higher for code gen. On the other you have "available hardware safety features." If the extra resources code gen needs push you out of the performance bucket where safety processors are available, you're stuck with a more complex safety concept, which pushes overall system complexity up.

The choice isn't "code gen, with hopefully better tool safety, at higher hardware cost" vs. "hand-written code, with human-written bugs mitigated by test processes, at lower hardware cost." It's "code gen, better tool safety, more system complexity, and a much, much larger fault-injection test matrix" vs. "human-written code, human-written bugs, but an overall much simpler system." There are certainly systems simple enough that safety processors work either way, and systems complex enough that non-safety processors are required either way -- but in my experience, there have been real, interesting, and relevant systems over the past decade that sit right on that edge.

It's also worth saying that for high-criticality avionics built to DAL B or DAL A under DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable: defects in real-world avionics systems are overwhelmingly defects in the requirement specifications, not in the implementation, hand-written or not.
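The cross-product argument can be made concrete with a toy model. All numbers here are hypothetical, chosen only to illustrate the "right on the edge" case the comment describes: hand-written code barely fits the safety MCU's performance bucket, while the heavier code-gen path only fits the faster non-safety processor.

```python
from itertools import product

# Hypothetical relative figures, not real measurements.
CPU_COST = {"codegen": 2.0, "handwritten": 1.0}      # CPU cost per unit of work
THROUGHPUT = {"safety_mcu": 1.5, "non_safety": 4.0}  # sustained capacity
BASE_WORK = 1.4                                      # workload the system must run

# A pairing is viable when the hardware can sustain the workload.
results = {}
for code, hw in product(CPU_COST, THROUGHPUT):
    results[(code, hw)] = BASE_WORK * CPU_COST[code] <= THROUGHPUT[hw]

for (code, hw), fits in results.items():
    # Without hardware safety features, faults must be detected and handled
    # in software, inflating the safety concept and the fault-injection matrix.
    concept = "simple" if hw == "safety_mcu" else "complex safety concept"
    print(f"{code:11s} on {hw:10s}: fits={fits!s:5s} -> {concept}")
```

In this toy setup, `("handwritten", "safety_mcu")` is viable but `("codegen", "safety_mcu")` is not, so choosing code gen forces the non-safety processor and, with it, the larger test burden.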
stackghost a day ago | parent
HN is a very poor platform for good conversations, so we'll have to agree to disagree; I'm not willing to go further in this format.