albertzeyer | a day ago
Why do you say FlexAttention is too buggy? I have heard of many successful uses of it, and never heard of such problems. Also note that, depending on your model dimensions and sequence lengths, the attention computation often plays only a minor role (maybe 10% overall or so), and the MLP computation dominates.
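As a rough back-of-the-envelope illustration (the layer sizes below are made-up examples, and only the large matmuls are counted):

    # FLOPs for one transformer layer, counting only the big matmuls.
    # The "scores" term (QK^T and attention @ V) is the part a fused
    # attention kernel like FlexAttention actually accelerates.
    def layer_flops(seq_len, d_model, d_ff=None):
        d_ff = d_ff if d_ff is not None else 4 * d_model  # common default
        proj = 8 * seq_len * d_model ** 2        # Q/K/V/output projections
        scores = 4 * seq_len ** 2 * d_model      # QK^T and attention @ V
        mlp = 4 * seq_len * d_model * d_ff       # up- and down-projection
        return proj, scores, mlp

    proj, scores, mlp = layer_flops(seq_len=2048, d_model=4096)
    print(scores / (proj + scores + mlp))  # ~0.08 with these example sizes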
kouteiheika | a day ago | parent
Last time I tried it I ran into both showstopper bugs (it was obviously, completely broken) and subtle correctness bugs: it looked like it was working, but since I'm paranoid I have unit tests for everything, and the numerical errors were too large compared to what you'd get with eager attention or Flash Attention. It was also too slow for my taste compared to Flash Attention, so I just dropped it. And I wasn't doing anything super exotic with it. Maybe it's better now, but I'd still consider using FlexAttention without a corresponding unit test checking its accuracy against an equivalent eager implementation to be completely irresponsible.
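A minimal sketch of such a check (assuming a recent PyTorch with torch.nn.attention.flex_attention and a GPU; the shapes and tolerances are illustrative, not the ones I actually used):

    import torch
    from torch.nn.attention.flex_attention import flex_attention

    def eager_attention(q, k, v):
        # Plain softmax attention as the reference implementation.
        scores = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
        return scores.softmax(dim=-1) @ v

    def test_flex_matches_eager():
        torch.manual_seed(0)
        # (batch, heads, seq, head_dim) -- sizes here are just examples.
        q, k, v = (torch.randn(2, 8, 256, 64, device="cuda") for _ in range(3))
        out_flex = flex_attention(q, k, v)
        out_eager = eager_attention(q, k, v)
        # Tolerances are a judgment call; tighten or loosen for your dtype.
        torch.testing.assert_close(out_flex, out_eager, rtol=2e-3, atol=2e-3)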