mackeye 2 hours ago
> For small n we can directly implement the definition. For large n, the direct approach would be slow and would accumulate floating point error.

Is there a reason the direct definition would be slow, if we cache the prior harmonic number to calculate the next?
coherentpony an hour ago | parent
It’s a natural observation, but it doesn’t address the floating point problem. I think the author should have said “slow or would accumulate floating point error” instead of “slow and would accumulate floating point error”. You could compute in the reverse direction, starting from 1/n instead of starting from 1/1. This produces a more stable floating point sum, because the small terms combine before the running total grows large, but it is slow: the reverse order can’t reuse a cached previous harmonic number, so each H_n costs O(n) from scratch. Edit: Of course, for very large n, 1/n eventually falls below one ulp of the running sum, so adding it no longer changes the result at all.
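
A minimal sketch of the two summation orders being discussed (function names are mine, purely for illustration). Forward summation supports the incremental caching from the parent comment but adds ever-smaller terms to a large accumulator; reverse summation is more accurate but can’t be extended from a cached previous value. `math.fsum` serves as a correctly rounded reference:

```python
import math

def harmonic_forward(n):
    # 1/1 + 1/2 + ... + 1/n: this order supports caching
    # (H_n = H_{n-1} + 1/n), but late small terms are added
    # to an already-large running total, losing low bits.
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k
    return total

def harmonic_reverse(n):
    # 1/n + ... + 1/1: small terms combine first, so rounding
    # error is lower, but each H_n must be summed from scratch.
    total = 0.0
    for k in range(n, 0, -1):
        total += 1.0 / k
    return total

n = 1_000_000
ref = math.fsum(1.0 / k for k in range(1, n + 1))  # correctly rounded reference
fwd = harmonic_forward(n)
rev = harmonic_reverse(n)
print(abs(fwd - ref), abs(rev - ref))
```

In double precision both errors are tiny at this n; the ordering effect becomes pronounced in single precision or at much larger n.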