FranckDernoncou 12 hours ago
Paper: https://arxiv.org/abs/2605.12825 ; Code+models: https://github.com/chiennv2000/orthrus ; Disclosure: co-author.

Idea: Inject a trainable diffusion attention module into each layer of a frozen AR Transformer. Both heads share one KV cache. The diffusion head projects K=32 tokens in parallel; the AR head verifies them in a second pass and accepts the longest matching prefix (toy sketch at the end of this comment). The output distribution is provably identical to the base model's.

Results:

- Up to 7.8x TPF, ~6x wall-clock on MATH-500.

- 16% of params trained, <1B tokens, 24h on 8xH200.

- vs. diffusion LMs (Dream, Fast-dLLM-v2, SDAR, Mercury, Gemini Diffusion): they modify the base weights and lose accuracy (Fast-dLLM-v2: -11 pts on MATH-500). Orthrus freezes the backbone; accuracy matches Qwen3-8B exactly.

- vs. speculative decoding (EAGLE-3, DFlash): no external drafter, no separate cache, zero TTFT penalty (no drafter to init/sync). KV overhead is O(1) (~4.5 MiB flat). Acceptance length on MATH-500: 11.7 vs. 7.9 (DFlash) vs. 3.5 (EAGLE-3).

- Single-step denoising beats multi-step (6.35 vs. 3.53 TPF). KL distillation beats CE on acceptance rate.

Limitations: strictly bounded by the frozen base model (inherits its biases, hallucinations, knowledge gaps); Qwen3-only evaluation; greedy + rejection sampling only.
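For anyone who wants the flavor of the draft/verify/accept step, here is a toy greedy-decoding sketch. This is not the repo code: ar_next_token and diffusion_draft are made-up stand-ins, and in the real system both heads read one shared KV cache and the verification is a single batched forward over the drafted block rather than a token-by-token loop.

    # Toy sketch of one Orthrus-style decode step (greedy decoding only).
    # ar_next_token() and diffusion_draft() are hypothetical stand-ins, not the paper's code.

    def ar_next_token(ctx):
        """Toy frozen AR model: deterministically emits the next token."""
        return (sum(ctx) * 31 + len(ctx)) % 50

    def diffusion_draft(ctx, k):
        """Toy diffusion head: drafts k tokens in one parallel pass.
        Here it happens to agree with the AR model for 5 positions, then diverges."""
        draft, c = [], list(ctx)
        for i in range(k):
            tok = ar_next_token(c) if i < 5 else 0  # pretend it guesses right 5 times
            draft.append(tok)
            c.append(tok)
        return draft

    def decode_step(ctx, k=32):
        """One step: draft k tokens, verify with the AR head (in practice one
        parallel pass over the shared KV cache), accept the longest matching
        prefix, then emit the AR token at the first mismatch. The output is
        exactly what plain greedy AR decoding would have produced."""
        draft = diffusion_draft(ctx, k)
        accepted, c = [], list(ctx)
        for tok in draft:
            ar_tok = ar_next_token(c)
            if tok != ar_tok:
                accepted.append(ar_tok)   # first mismatch: keep the AR token instead
                return accepted
            accepted.append(tok)
            c.append(tok)
        accepted.append(ar_next_token(c)) # all k accepted: still one bonus AR token
        return accepted

    new_tokens = decode_step([1, 2, 3])
    print(new_tokens, "-> tokens gained this step:", len(new_tokens))

In this toy run 5 drafted tokens match, so one verification pass yields 6 tokens instead of 1; that per-step gain is what the TPF / acceptance-length numbers above measure.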
ilaksh 6 hours ago
Amazing. Is it possible to do this with Qwen 3.6 27B? Will it work with quants (I assume so)?
dot_treo 2 hours ago
Do you plan on releasing the training code?
littlestymaar 2 hours ago
So it's D-Flash, but at each transformer layer and sharing the KV cache of the original model? Very smart!