tripplyons 2 hours ago:
There are many ways to compute the same matrix multiplication, each applying the sum reduction in a different order, and these can produce different answers with floating-point values. This is because floating-point addition is not truly associative: each addition rounds its result, so the grouping of the terms affects the final value.
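A quick way to see this in plain Python (double precision here; GPU kernels typically use FP32/FP16/BF16, where the effect is larger). The sketch below compares a left-to-right accumulation of a dot product against a pairwise tree reduction, the kind of grouping a tiled or parallel kernel might use; the data is just random, for illustration.

```python
import random

# Guaranteed example of non-associativity in IEEE-754 doubles:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

# A dot product (one entry of a matmul), summed two different ways.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
ws = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
products = [x * w for x, w in zip(xs, ws)]

# Order 1: plain sequential accumulation, left to right.
sequential = 0.0
for p in products:
    sequential += p

# Order 2: pairwise (tree) reduction, as a parallel kernel might do.
def pairwise(vals):
    if len(vals) == 1:
        return vals[0]
    mid = len(vals) // 2
    return pairwise(vals[:mid]) + pairwise(vals[mid:])

tree = pairwise(products)

# Same terms, different rounding along the way; the results typically
# disagree in the last bits, though both are close to the true sum.
print(sequential, tree, abs(sequential - tree))
```

The discrepancy here is tiny in FP64, but in FP16/BF16 accumulations over long reduction axes it can reach the visible digits, which is why the reduction order matters for reproducible inference.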
spwa4 2 hours ago (parent):
Is that really going to matter in FP32, FP16, or BF16? I would think models would be written so they're at least somewhat numerically stable. Also, if the inference provider guarantees specific hardware, this shouldn't happen.