▲ | moron4hire 2 days ago |
Did I not say that?
▲ | haskellshill 2 days ago | parent | next [-]
> we end up performing two loops instead of one, all because sticking three operations in one statement is "yucky"

You seem to believe that "O(2n)" is slower than "O(n)" simply because the latter has one "for loop" less. Am I misunderstanding you, or if not, why would this matter for speed?
▲ | codebje 2 days ago | parent | prev | next [-]
You did, but mentioning asymptotic complexity may not be an effective way to support your argument that one linear implementation is faster than another. Whether one loop or two is a win in Python isn't clear-cut, because a lot is hidden behind complex opcodes and opaque iterator implementations. Empirical testing might help, but a new interpreter version could change your results. In any case, if we want to nitpick over performance we should be insisting on a parallel implementation to take advantage of the gobs of cores CPUs now have, but at that point we're on a micro-optimisation crusade and are ignoring the whole point of the article.
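To illustrate the "empirical testing" point: here is a minimal sketch (the function names and the min/max task are hypothetical, not from the article) comparing a "two loop" version against a fused single loop. In CPython the two-pass version is often faster despite doing two scans, because the builtins iterate in C while the fused loop executes bytecode per element.

```python
import timeit

data = list(range(100_000))

def two_passes(xs):
    # Two linear scans: "O(2n)", which is still O(n).
    return min(xs), max(xs)

def one_pass(xs):
    # One fused scan: O(n), but with interpreter overhead per element.
    lo = hi = xs[0]
    for x in xs:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return lo, hi

# Both compute the same result; only the constant factors differ.
assert two_passes(data) == one_pass(data)

print("two passes:", timeit.timeit(lambda: two_passes(data), number=50))
print("one pass:  ", timeit.timeit(lambda: one_pass(data), number=50))
```

Which one wins depends on the interpreter and the workload, which is exactly why asymptotic notation settles nothing here.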
▲ | iainmerrick a day ago | parent | prev [-]
You said the code from the article is O(2n) when it could be O(n), but those are the same thing.
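For anyone unconvinced, "the same thing" follows directly from the definition of big-O, which absorbs constant factors:

```latex
f \in O(g) \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0.
% Take f(n) = 2n and g(n) = n with c = 2, n_0 = 1:
% 2n \le 2 \cdot n, hence 2n \in O(n), i.e. O(2n) = O(n).
```

So writing O(2n) conveys no more information than O(n); the factor of two is a constant that the notation deliberately discards.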