| ▲ | thomasahle 4 hours ago |
| There's a graveyard of 100s of papers with "approximate near linear time attention." They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem. Indeed, there are lower bounds showing that sub n^2 algorithms can't work: https://arxiv.org/pdf/2302.13214 |
|
| ▲ | jcarreiro 3 hours ago | parent | next [-] |
| The paper says: > In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution, acceptable for many AI applications. I.e., the claim is that this method reproduces the results of conventional attention, up to float16 numerical precision. |
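For a sense of where the P = 4 claim comes from, here is a minimal NumPy sketch (an illustration only, with names of my choosing, not the paper's algorithm, which rearranges the expansion so the per-token cost stays constant): truncating exp at higher Taylor orders drives the elementwise error of the attention output toward zero.

    import math
    import numpy as np

    # Toy check: replace exp(s) with its truncated Taylor series sum_{p<=P} s^p / p!
    # inside softmax attention and watch the elementwise error shrink as P grows.
    rng = np.random.default_rng(0)
    n, d = 64, 16
    Q, K, V = rng.normal(size=(3, n, d))
    S = (Q @ K.T) / math.sqrt(d)                    # scaled dot-product scores

    exact_w = np.exp(S)
    exact = (exact_w / exact_w.sum(axis=1, keepdims=True)) @ V

    for P in (2, 4, 8):                             # even P keeps the truncation positive
        approx_w = sum(S**p / math.factorial(p) for p in range(P + 1))
        approx = (approx_w / approx_w.sum(axis=1, keepdims=True)) @ V
        print(P, np.max(np.abs(approx - exact)))    # error drops as P increases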
| |
| ▲ | kristjansson 35 minutes ago | parent | next [-] | | > approximately the same magnitude and they really do mean that: their results show +/- 1 on log10 plots. | |
| ▲ | fheinsen 2 hours ago | parent | prev | next [-] | | The method is more general. The github repository's first example is with eight Taylor terms (P = 8). | |
| ▲ | energy123 2 hours ago | parent | prev [-] | | It converges on conventional attention as P goes up |
|
|
| ▲ | kristjansson 3 hours ago | parent | prev | next [-] |
| > self-attention is efficiently computable to arbitrary precision with constant cost per token This paper at least aspires to reproduce 'true' attention, which distinguishes it from many of the others. TBD if it's successful in that. |
| |
| ▲ | logicchains 3 hours ago | parent | next [-] | | It can't be successful at that any more than 1+1 can equal 3. Fundamentally, if every token wants to be able to look at every previous token without loss of information, it must be O(n^2); N tokens looking at N tokens is quadratic. Any sub-quadratic attention must hence necessarily lose some information and be unable to support perfect recall on longer sequences. | | |
| ▲ | orlp 42 minutes ago | parent | next [-] | | > N tokens looking at N tokens is quadratic
Convolving two arrays can be done perfectly accurately in O(n log n), despite every element being combined with every other element. Or consider the even more basic sum of products a[i] * b[j] for all possible i, j:
    total = 0
    for i in range(len(a)):
        for j in range(len(b)):
            total += a[i] * b[j]
This can be computed in linear time as sum(a) * sum(b). Your logic that 'the result contains terms of all pairs, therefore the algorithm must be quadratic' simply doesn't hold. | |
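As a quick sanity check of the convolution claim (a sketch; np.convolve is the O(n^2) pairwise route, the rfft/irfft route is the O(n log n) one):

    import numpy as np

    # Direct convolution touches every (i, j) pair; FFT-based convolution
    # returns the same result in O(n log n).
    rng = np.random.default_rng(1)
    a, b = rng.normal(size=(2, 500))
    m = len(a) + len(b) - 1
    direct = np.convolve(a, b)                                        # O(n^2)
    via_fft = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)  # O(n log n)
    print(np.max(np.abs(direct - via_fft)))                           # agreement to ~1e-12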
| ▲ | hellohello2 2 hours ago | parent | prev | next [-] | | I'm not saying whether the paper is correct or not (since I can't tell), but I don't think your argument really holds. Consider applying it to multiplication: fundamentally, multiplication needs to look at every pair of digits from the two input numbers. It must be O(n^2); N digits looking at N other digits is quadratic. Any sub-quadratic multiplication must hence necessarily lose some information. | | |
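The implicit counterexample here is real: exact integer multiplication is sub-quadratic. A minimal Karatsuba sketch (roughly O(n^1.585) in the digit count; FFT-based methods do even better), purely for illustration:

    def karatsuba(x: int, y: int) -> int:
        # Exact multiplication in ~O(n^1.585) digit operations, no information lost.
        if x < 10 or y < 10:
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10**m)
        high_y, low_y = divmod(y, 10**m)
        z0 = karatsuba(low_x, low_y)
        z2 = karatsuba(high_x, high_y)
        z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
        return z2 * 10**(2 * m) + z1 * 10**m + z0

    assert karatsuba(123456789, 987654321) == 123456789 * 987654321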
| ▲ | actionfromafar 2 hours ago | parent [-] | | Doesn't that have to do with how many bits you allow in the actual calculation in physical reality? | | |
| ▲ | hellohello2 38 minutes ago | parent [-] | | Well, for multiplication, complexity is defined directly in terms of the number of digits/bits. For attention, complexity is defined in terms of the number of input vectors, which are all at fixed precision. I don't understand what happens to the method proposed in the paper at higher precision (since I don't understand the paper), but in practice it doesn't matter, since there is no value in anything over float16 for machine learning. |
|
| |
| ▲ | oasisaimlessly 2 hours ago | parent | prev | next [-] | | That argument could also be used to say that the FFT's time complexity of O(n log n) should be impossible. | |
| ▲ | naasking an hour ago | parent | prev [-] | | Your argument just assumes there is no latent structure that can be exploited. That's a big assumption. |
| |
| ▲ | energy123 3 hours ago | parent | prev [-] | | It's like claims of room-temperature superconductors or Millennium Prize solutions. Earth-shattering if true. It'd be such a black swan. Terrible for Nvidia. | | |
| ▲ | SeanAnderson 3 hours ago | parent [-] | | Well, we solved one of the Millennium Prize problems (honestly kinda quickly) so maybe there's hope :) |
|
|
|
| ▲ | fheinsen 4 hours ago | parent | prev | next [-] |
| As the error of the linear approximation approaches the same magnitude as the numerical error of the quadratic computation, don't the two become comparable in practice? I ask because in practice, for inference, attention is typically computed with low-precision (4-bit, 8-bit, 16-bit) floats. Numerical error, in fact, may be a key factor in why quadratic attention exhibits context rot as context gets longer, analogous to an RNN: https://www.anthropic.com/engineering/effective-context-engi... |
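For context on the "Float16 resolution" being compared against (a quick check, independent of the paper): float16 has a machine epsilon of roughly 1e-3, so approximation errors at that scale are already the size of the rounding error that low-precision inference tolerates.

    import numpy as np

    # float16 spacing near 1.0 is 2**-10 ~= 0.001, so errors of that magnitude
    # are indistinguishable from half-precision rounding.
    print(np.finfo(np.float16).eps)                  # 0.000977
    print(np.float16(1.0) + np.float16(0.0004))      # rounds back to 1.0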
|
| ▲ | WhitneyLand 2 hours ago | parent | prev | next [-] |
| The 2023 paper, even if true, doesn't preclude the 2026 paper from being true; it just sets constraints on how a faster attention solution would have to work. |
|
| ▲ | cobolexpert 4 hours ago | parent | prev | next [-] |
| Dumb question: is the quadratic time complexity for training, inference, or both? |
| |
| ▲ | dave_universetf 2 hours ago | parent | next [-] | | Both, with caveats. The attention computation is fundamentally quadratic: for every token in the sequence, you're doing a computation over every other token in the sequence. So it's O(N) per token, O(N^2) for the whole sequence. The big mitigation for this is that in causal transformers (i.e. all the chatbot-type applications, where each token is only allowed to see tokens before it), you're running inference repeatedly on the same prefix in order to grow it by one token at a time. So if you cache the computations for tokens 0..N-1, on each inference pass you only have to do O(N) work for the newly added token at the end of the sequence. That's why caching (and caching charges) appear so prominently everywhere in the pricing of inference. In practice, caching is most beneficial at inference time, because you typically have relatively long conversations that start with the same cacheable prefix (the system prompt). At training time the same optimization can apply, but you're typically not pushing the same prefixes through the model repeatedly, so you end up paying the quadratic cost more often. The quadratic cost of attention is the fundamental compute bottleneck for transformer architectures, which is why there's research like this trying to find shortcuts in computing attention, as well as research into completely new primitives to replace attention (e.g. SSM, which is O(N) on a cold cache and O(1) on a warm cache). | |
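A rough sketch of the caching point, with made-up names rather than any particular framework's API: each decoding step appends one key/value pair to a cache and scores only the new query against it, so per-step work grows linearly with context length.

    import numpy as np

    d = 8
    k_cache, v_cache = [], []               # grows by one entry per generated token

    def attend_next(q_new, k_new, v_new):
        # O(t) work for the newest token: one dot product per cached position.
        k_cache.append(k_new)
        v_cache.append(v_new)
        K = np.stack(k_cache)               # (t, d)
        V = np.stack(v_cache)               # (t, d)
        scores = K @ q_new / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V                        # attention output for the newest position

    rng = np.random.default_rng(0)
    for _ in range(5):
        q, k, v = rng.normal(size=(3, d))
        out = attend_next(q, k, v)
    print(out.shape)                        # (8,) -- no O(t^2) recomputation per step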
| ▲ | omneity 4 hours ago | parent | prev [-] | | Attention is calculated during the forward pass of the model, which happens in both inference (forward only) and training (forward & backward). | | |
| ▲ | SubiculumCode 3 hours ago | parent [-] | | Dumb question: Can inference be done in a reverse pass? Outputs predicting inputs? | | |
| ▲ | dave_universetf an hour ago | parent | next [-] | | Strictly speaking: no. The "forward pass" terminology does not imply that there exists a "reverse pass" that does the same kind of computation. Rather, it's describing two different kinds of computation, and the direction they occur in. The forward pass is propagating from inputs to outputs, computing the thing the model was trained for. The reverse/backwards pass is propagating from outputs back to inputs, but it's calculating the gradients of parameters for training (roughly: how much changing each parameter in isolation affects the output, and whether it makes the output closer to the desired training output). The result of the "reverse pass" isn't a set of inputs, but a set of annotations on the model's parameters that guide their adjustment. The computations of the forward pass are not trivially reversible (e.g. they include additions, which destroy information about the operand values). As a sibling thread points out, you can still probabilistically explore what inputs _could_ produce a given output, and get some information back that way, but it's a lossy process. And of course, you could train a "reverse" model, one that predicts the prefix of a sequence given a suffix (trivially: it's the same suffix prediction problem, but you train it on reversed sequences). But that would be a separate model trained from scratch on that task, and in that model the prefix prediction would be its forward pass. | |
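A tiny PyTorch illustration of that distinction (illustrative only): the backward pass yields parameter gradients; it does not reconstruct the inputs.

    import torch

    w = torch.tensor(2.0, requires_grad=True)   # a parameter
    x = torch.tensor(3.0)                       # an input (no gradient tracked)
    y = w * x + 1.0                             # forward pass: inputs -> output
    y.backward()                                # backward pass: output -> gradients
    print(w.grad)                               # tensor(3.), i.e. dy/dw; x is not recovered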
| ▲ | gpm 3 hours ago | parent | prev | next [-] | | Not as trivially as in the forwards direction (unsurprisingly, information is lost), but better than you might expect. See for example https://arxiv.org/pdf/2405.15012 | |
| ▲ | root_axis 3 hours ago | parent | prev [-] | | Sounds like a great premise for a sci-fi short story. | | |
|
|
|
|
| ▲ | naasking 3 hours ago | parent | prev | next [-] |
| I think any kind of innovation here will have to take advantage of some structure inherent to the problem, like eliminating attention in favour of geometric structures like Grassman flows [1]. [1] Attention Is Not What You Need, https://arxiv.org/abs/2512.19428 |
| |
| ▲ | findalex 3 hours ago | parent [-] | | Right - e.g., if you're modeling a physical system it makes sense to bake in some physics - like symmetry. | | |
| ▲ | naasking 2 hours ago | parent [-] | | Indeed, and I think natural language and reasoning will have some kind of geometric properties as well. Attention is just a sledgehammer that lets us brute force our way around not understanding that structure well. I think the next step change in AI/LLM abilities will be exploiting this geometry somehow [1,2]. [1] GrokAlign: Geometric Characterisation and Acceleration of Grokking, https://arxiv.org/abs/2510.09782 [2] The Geometry of Reasoning: Flowing Logics in Representation Space, https://arxiv.org/abs/2506.12284 |
|
|
|
| ▲ | 4 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | cubefox 4 hours ago | parent | prev | next [-] |
| I think DeepSeek V3.2 is sub n^2, but it clearly performs quite well, refuting the alleged lower bounds in the paper. |
| |
| ▲ | andy12_ 3 hours ago | parent | next [-] | | It really isn't sub N^2. The main attention is only O(Nk), but only thanks to a lightning indexer that still has complexity O(N^2). So overall it still has the same complexity; just with a smaller constant factor [1] > DSA reduces the core attention complexity of the main model from O(L^2) to O(Lk), where k (<< L) is the number of selected tokens. Although the lightning indexer still has a complexity of O(L^2), it requires much less computation compared with MLA in DeepSeek-V3.1-Terminus [1] https://arxiv.org/pdf/2512.02556 | | |
| ▲ | cubefox 2 hours ago | parent [-] | | Okay, then let's see whether we are going to see real linear architectures, like Gated DeltaNet or Mamba-3, in some larger models. I don't believe there is a "lower bound" which states that those can never get to (or exceed) the real-world performance of quadratic attention. (Perfect recall in unrealistic needle-in-haystack tests doesn't count.) |
| |
| ▲ | 3 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | clarity_hacker 4 hours ago | parent | prev [-] |
| [dead] |