sigmar | 6 days ago
> AlphaQubit, a recurrent-transformer-based neural-network architecture that learns to predict errors in the logical observable based on the syndrome inputs (Methods and Fig. 2a). This network, after two-stage training—pretraining with simulated samples and finetuning with a limited quantity of experimental samples (Fig. 2b)—decodes the Sycamore surface code experiments more accurately than any previous decoder (machine learning or otherwise)

> One error-correction round in the surface code. The X and Z stabilizer information updates the decoder's internal state, encoded by a vector for each stabilizer. The internal state is then modified by multiple layers of a syndrome transformer neural network containing attention and convolutions.

I can't seem to find a detailed description of the architecture beyond this bit in the paper and the figure it references. Gone are the days when Google handed out ML methodologies like candy... (note: not criticizing them for being protective of their IP, just pointing out how much things have changed since 2017)
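For what it's worth, the quoted description is enough to guess at the rough shape. Below is a minimal, heavily hedged sketch of what a recurrent "syndrome transformer" step could look like: per-stabilizer state vectors on a square grid, standard self-attention for global mixing, and a 3x3 convolution for local mixing. Every name, dimension, and layer ordering here is an assumption made purely for illustration, not AlphaQubit's actual architecture; PyTorch is used only as convenient notation.

```python
import torch
import torch.nn as nn


class SyndromeTransformerBlock(nn.Module):
    """One mixing layer: all-to-all attention plus a local 2D convolution over the stabilizer grid."""

    def __init__(self, dim: int, grid: int, heads: int = 4):
        super().__init__()
        self.grid = grid
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):  # x: (batch, grid*grid, dim), one state vector per stabilizer
        h = self.n1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]        # global mixing via attention
        b, n, d = x.shape
        img = self.n2(x).transpose(1, 2).reshape(b, d, self.grid, self.grid)
        x = x + self.conv(img).reshape(b, d, n).transpose(1, 2)  # local mixing via convolution
        return x + self.mlp(self.n3(x))


class RecurrentSyndromeDecoder(nn.Module):
    """Recurrent core (hypothetical): each round's detection events update the
    per-stabilizer state, which then passes through several syndrome-transformer layers."""

    def __init__(self, dim: int = 64, grid: int = 4, layers: int = 3):
        super().__init__()
        self.embed = nn.Linear(1, dim)    # embed each 0/1 detection event
        self.blocks = nn.ModuleList([SyndromeTransformerBlock(dim, grid) for _ in range(layers)])
        self.readout = nn.Linear(dim, 1)  # logit: did the logical observable flip?

    def forward(self, syndromes):  # syndromes: (batch, rounds, grid*grid) of 0/1 floats
        b, r, n = syndromes.shape
        state = syndromes.new_zeros(b, n, self.embed.out_features)
        for t in range(r):                                    # recurrence over error-correction rounds
            state = state + self.embed(syndromes[:, t, :, None])
            for block in self.blocks:
                state = block(state)
        return self.readout(state.mean(dim=1)).squeeze(-1)


decoder = RecurrentSyndromeDecoder(dim=64, grid=4, layers=3)
syndromes = torch.randint(0, 2, (8, 25, 16)).float()  # 8 shots, 25 rounds, 16 stabilizers (toy numbers)
logits = decoder(syndromes)                            # shape (8,): one logical-flip logit per shot
```

A real distance-d surface code has d²−1 stabilizers rather than a neat square grid, and the paper's figure implies a richer recurrent state update than the simple additive one here, so treat this only as a reading aid for the quoted description.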
jncfhnb | 6 days ago | parent
Eh. It was always sort of muddy. We never actually had an implementation of doc2vec as described in the paper.