| ▲ | ziofill 3 days ago |
| https://github.com/BICLab/SpikingBrain-7B/blob/main/assets/t...
| Shouldn’t one bold the better numbers?
|
| ▲ | rpunkfu 3 days ago | parent | next [-] |
| Inspired by GPT-5 presentation :) |

| ▲ | doph 3 days ago | parent [-]
| Did they ever address that? I have not been able to stop thinking about it, it was so bizarre.

| ▲ | rpunkfu 2 days ago | parent | next [-]
| No idea, didn't follow the news on it afterwards.

| ▲ | andy_ppp 2 days ago | parent | prev [-]
| I honestly think they just styled it out as "more efficient" and their investors lapped it up. We've really not seen improvement since GPT-4 for most things I do with it, and it feels like a downgrade from o3-mini-high to me.
|
| ▲ | daveguy 3 days ago | parent | prev [-] |
| Well, then none of their model's numbers would be bold, and that's not what they (or the AIs) usually see in publications!

| ▲ | cubefox 3 days ago | parent [-]
| They do look pretty good compared to the two other linear (non-Transformer) models. Conventional attention is hard to beat on benchmarks, but it is quadratic in time and memory with respect to sequence length.
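
| For intuition, here is a minimal numpy sketch of why that matters. It is not SpikingBrain's actual mechanism, and the feature map phi below is a placeholder assumption; the point is only that reassociating the attention product avoids ever materializing the (n, n) score matrix:

    import numpy as np

    def softmax_attention(Q, K, V):
        # Standard attention: the (n, n) score matrix makes this
        # O(n^2) in time and memory for sequence length n.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        return (w / w.sum(axis=-1, keepdims=True)) @ V

    def linear_attention(Q, K, V):
        # Reassociate the product: phi(Q) @ (phi(K).T @ V).
        # The inner term is only (d, d), so the total cost is
        # O(n * d^2) -- linear in sequence length n.
        # phi is a placeholder feature map, not taken from the paper.
        phi = lambda x: np.maximum(x, 0.0) + 1e-6
        Qp, Kp = phi(Q), phi(K)
        num = Qp @ (Kp.T @ V)                # (n, d)
        den = Qp @ Kp.sum(axis=0)[:, None]   # (n, 1) normalizer
        return num / den

    n, d = 2048, 64
    rng = np.random.default_rng(0)
    Q, K, V = rng.standard_normal((3, n, d))
    out = linear_attention(Q, K, V)  # never builds an (n, n) matrix

| The trade-off the benchmarks reflect: the (d, d) summary is a lossy compression of the full pairwise score matrix, which is why linear models tend to trail full attention on quality while winning on long-context cost.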
|