cycomanic | 5 hours ago:

But these performance numbers are meaningless without some sort of standard comparison case. If you measure that, say, some string operation takes 100 ns, how do you compare that against the numbers given here? Any difference could be due to your PC, your Python version, or your implementation, so you have to do proper benchmarking anyway.
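A minimal sketch of what that local calibration could look like, assuming Python's timeit and an illustrative string operation (the snippet and run count are placeholders, not any benchmark from the article):

    import timeit

    # Time a simple string operation on this machine to get a local baseline.
    # The operation and run count are illustrative only.
    n_runs = 1_000_000
    total = timeit.timeit("'-'.join(['a', 'b', 'c'])", number=n_runs)
    print(f"join: {total / n_runs * 1e9:.1f} ns per call")

    # Rerun the same snippet on the reference machine (or against its published
    # numbers) so any difference reflects the code, not hardware or Python version.
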
willseth | 5 hours ago:

You gauge with metrics and profiles, if necessary, and address issues as needed. You don't scrutinize every line of code in advance over whether it's "reasonable" instead of doing things that actually move the needle.
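A minimal sketch of that workflow, assuming Python's built-in cProfile and a hypothetical hot_path() function standing in for whatever the metrics flagged:

    import cProfile
    import pstats

    def hot_path():
        # Hypothetical workload flagged by metrics; replace with the real suspect.
        return sum(i * i for i in range(1_000_000))

    # Profile the suspect code path, then list the top functions by cumulative time.
    profiler = cProfile.Profile()
    profiler.enable()
    hot_path()
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
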
oivey | 5 hours ago:

These are the metrics underneath it all. Profiles only tell you which parts are slow relative to others, and they only time your specific implementation. How long should it take to sum a million integers?
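For concreteness, a minimal way to put a number on that question with Python's timeit (the result will of course depend on hardware and interpreter):

    import timeit

    # How long does summing a million integers take on this machine?
    setup = "nums = list(range(1_000_000))"
    t = timeit.timeit("sum(nums)", setup=setup, number=100)
    print(f"sum of 1M ints: {t / 100 * 1e3:.2f} ms per run")
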
willseth | 4 hours ago:

It literally doesn't matter unless it impacts users. I don't know why you would waste time on non-problems.

oivey | 3 hours ago:

No one is suggesting "wasting time on non-problems." You're tilting at windmills.