jcarreiro 3 hours ago

The paper says that:

> In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution, acceptable for many AI applications.

i.e., the claim is that this method reproduces the results of conventional attention up to Float16 numerical precision.
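For context, "Float16 resolution" here plausibly means machine epsilon; a quick sketch (assuming NumPy) shows the value being compared against:

```python
import numpy as np

# Float16 has a 10-bit mantissa, so its machine epsilon is 2**-10.
eps = np.finfo(np.float16).eps
print(float(eps))  # 0.0009765625
```

So "approximately the same magnitude as Float16 resolution" means elementwise errors on the order of 1e-3.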

kristjansson 36 minutes ago | parent | next [-]

> approximately the same magnitude

and they really do mean "approximately": their results show ±1 on log10 plots, i.e., errors within an order of magnitude either way.

fheinsen 2 hours ago | parent | prev | next [-]

The method is more general. The GitHub repository's first example uses eight Taylor terms (P = 8).

energy123 2 hours ago | parent | prev [-]

It converges to conventional attention as P increases.
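This convergence is easy to see in a toy sketch (not the paper's actual algorithm, and all names and scales here are made up): replace the exp() inside softmax attention with a truncated Taylor series and watch the error shrink as the number of terms P grows.

```python
import math
import numpy as np

def taylor_exp(x, P):
    """exp(x) truncated after the x**P term."""
    return sum(x ** p / math.factorial(p) for p in range(P + 1))

def attention(Q, K, V, exp_fn):
    """Softmax attention, with the exponential function pluggable."""
    scores = exp_fn(Q @ K.T / math.sqrt(Q.shape[-1]))
    return (scores / scores.sum(-1, keepdims=True)) @ V

rng = np.random.default_rng(0)
Q, K, V = (0.5 * rng.standard_normal((16, 8)) for _ in range(3))
exact = attention(Q, K, V, np.exp)

# Max elementwise error of the Taylor-approximated attention vs exact.
errs = {}
for P in (2, 4, 8):
    approx = attention(Q, K, V, lambda x: taylor_exp(x, P))
    errs[P] = float(np.abs(approx - exact).max())
    print(f"P = {P}: max error {errs[P]:.2e}")
```

With logits this small the error drops by orders of magnitude at each step; the paper's contribution is making this kind of expansion work at practical scale, not the expansion itself.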