threeducks 17 hours ago
Without looking at the code, O(N * k) with N = 9000 points and k = 50 dimensions should take on the order of milliseconds, not seconds. Did you profile your code to see whether something is taking an unexpected amount of time?
romanfll 14 hours ago
The '2 seconds' figure is the end-to-end time on a standard laptop. I quoted 2s to set realistic expectations for the user experience, not to count CPU cycles. You are right that the core linear algebra (Ax = b) takes milliseconds; the bottleneck is DOM/rendering overhead. Strictly speaking, the math itself is blazing fast.
jdhwosnhw 10 hours ago
That's not how big-O notation works. You don't know what proportionality constants are being hidden by the notation, so you can't make any assertions about absolute runtimes.
| ||||||||
donkeybeer 17 hours ago
If he wrote the for loop in pure Python instead of using NumPy (or C, or whatever), that could be a plausible runtime.
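For illustration, here is a sketch comparing a pure-Python loop against the vectorized NumPy equivalent on the same O(N * k) workload at the sizes from the thread (N = 9000, k = 50). The dot-product workload is hypothetical, not OP's actual code; the point is only the constant-factor gap between the two.

```python
import time
import numpy as np

N, k = 9000, 50
rng = np.random.default_rng(0)
X = rng.standard_normal((N, k))
w = rng.standard_normal(k)

# Pure-Python loop: same O(N * k) operation count,
# but each scalar access and multiply carries interpreter overhead.
t0 = time.perf_counter()
out_loop = [sum(X[i, j] * w[j] for j in range(k)) for i in range(N)]
t_loop = time.perf_counter() - t0

# Same computation, vectorized: one matrix-vector product in compiled code.
t0 = time.perf_counter()
out_vec = X @ w
t_vec = time.perf_counter() - t0

print(f"python loop: {t_loop:.3f}s, numpy: {t_vec:.5f}s")
```

On a typical laptop the loop version is orders of magnitude slower than the vectorized one, which is how a milliseconds-scale computation can stretch toward seconds without any change in asymptotic complexity.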
yorwba 16 hours ago
Each of the N data points is processed through several expensive linear algebra operations. O(N * k) just expresses that if you double N, the runtime at most doubles as well. It doesn't mean the computation has to be fast in an absolute sense for any particular value of N and k.