| ▲ | almostgotcaught a day ago | |
> The time it needs to run is irrelevant for its correctness. And so they can stack and stack and stack

This is a very naive take. The direct translation of what you're saying breaks down in analysis all the time: there are many inequalities that can be "stacked" to prove a bound on something, but their constant factors are too large, so you cannot just stack them if you need a fixed bound for your proof to go through. Unsurprisingly, this is exactly how actual runtime analysis works too (unsurprising because both are literally math).
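A toy example of the blow-up (the numbers are made up, purely to illustrate): suppose each step is the individually correct inequality a_i <= 2*a_{i+1}. Stacking n of them is still correct, but the constant compounds:

    a_0 <= 2*a_1 <= 4*a_2 <= ... <= 2^n * a_n

So if the surrounding proof needs a_0 <= 10*a_n, the stacked chain only delivers that for n <= 3. Each inequality is fine on its own; the composition fails the quantitative requirement, not the logical one.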
| ▲ | zelphirkalt 17 hours ago | |
I think you are taking that a bit out of context here. Obviously I assumed in that phrase that mathematicians stack suitable methods and proofs. If some factors are too large to fit within the bound you need, then that is obviously not something you would stack. But once you have suitable, proven-correct methods and proofs, you can stack them, and correctness does not go out the window; as long as correctness is preserved, you can reach a provably correct result. Of course, proving things in mathematics is usually a lot harder than computer programming, and it is probably still easy to make mistakes.
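To make that concrete with the same toy setup as above (again purely illustrative): if each proven step comes with its own explicit constant c_i, i.e. a_{i-1} <= c_i * a_i, then stacking them gives

    a_0 <= (c_1 * c_2 * ... * c_n) * a_n

which is still a correct statement no matter how many steps you stack. Whether it is a useful statement depends on whether the product of the constants fits the bound the rest of the proof needs, which is exactly the check the parent comment is pointing at.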