sobellian 7 hours ago
Machine eps bounds the relative rounding error for a single op. Let's say I write:

    x = sqrt(2)
Now is `x` actually the square root of 2? Of course not - because the digit expansion of sqrt(2) doesn't terminate, the only way to represent it precisely is with symbolics. So what do we actually have? `x` was rounded up or down to a number that does have an exact FP representation. So `x` / sqrt(2) is in `[1 - eps, 1 + eps]`. The eps tells you, on a relative scale, the maximum distance from any real number to an adjacent FP number. (Full disclosure, IDK how this interacts with weird stuff like denormals.)

Note that in general we can only guarantee hitting this relative error for single ops. More elaborate computations may accumulate worse error as the rounding compounds.

But it gets even worse. This error says nothing about errors that don't occur in the machine. For example, say I have a test that takes some experimental data, runs my whiz-bang algorithm, and checks if the result is close to the elementary charge of the electron. Now I have to worry not just about machine error but also about a zillion different kinds of experimental error.

There are also cases where we want to enforce a contract on a number so we stay within an acceptable domain. The author alluded to this. For example: if I compute some `x` that I'm later going to pass to `acos(x)`, then `x` had better be in `[-1, 1]`. `x >= -1 - EPS && x <= 1 + EPS` wouldn't be right, because it would admit two numbers, `-1 - EPS` and `1 + EPS`, that are outside the acceptable domain.

- "I want to relax exact equality because my computation has errors" -> write `assert_rel_tol` and `assert_abs_tol`.
- "I want to enforce determinism" -> exact equality.
- "I want to enforce a domain" -> exact comparison.

Your code here is using eps to control absolute error, which is already not great since eps is about relative error. Unfortunately your assertion degenerates to `a == b` for large numbers but is extremely loose for small numbers.
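A minimal sketch of the two helpers named above (the names `assert_rel_tol` / `assert_abs_tol` come from this comment, not any real library; Python's `math.isclose` covers the same ground), assuming IEEE binary64:

```python
import math
import sys

EPS = sys.float_info.epsilon  # ~2.22e-16 for IEEE binary64

def assert_rel_tol(a, b, rel_tol):
    # Relative tolerance: scale-invariant, matching how rounding error grows.
    # (Relative error is undefined when both values are 0; this comparison
    # then degenerates to exact equality, which is usually what you want.)
    assert abs(a - b) <= rel_tol * max(abs(a), abs(b)), \
        f"{a!r} not within relative tolerance {rel_tol} of {b!r}"

def assert_abs_tol(a, b, abs_tol):
    # Absolute tolerance: only meaningful when you know the scale of the data.
    assert abs(a - b) <= abs_tol, \
        f"{a!r} not within absolute tolerance {abs_tol} of {b!r}"

# sqrt(2)**2 picks up a couple of ulps of rounding error;
# a small multiple of EPS absorbs it on a relative scale.
assert_rel_tol(math.sqrt(2) ** 2, 2.0, 4 * EPS)
```

The tolerance itself (`4 * EPS` here) is a judgment call per computation: each rounding op can contribute up to eps of relative error, so a handful of ops warrants a small multiple of eps.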
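The failure mode in that last paragraph is easy to demonstrate (a sketch, again assuming binary64 doubles):

```python
import math
import sys

EPS = sys.float_info.epsilon  # ~2.22e-16

# Large numbers: adjacent doubles near 1e6 are ~2.3e-10 apart, far more than
# EPS, so `abs(a - b) <= EPS` can only hold when a == b exactly.
a = 1e6
b = math.nextafter(a, math.inf)  # the closest double above a
print(abs(a - b) <= EPS)  # False: the check has degenerated to a == b

# Small numbers: near 1e-20, EPS is a huge tolerance -- a value thousands of
# times larger than the expected one still "passes".
print(abs(1e-20 - 5e-17) <= EPS)  # True: extremely loose
```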