s1dev (6 days ago):
When maintaining a quantum memory, you measure parity checks of the quantum error correcting code. These parity checks don't contain any information about the logical state, just (partial) information about the error, so the logical quantum information remains coherent through the process (i.e. the logical part of the state is not collapsed). These measurements are classical data, and a computation is required to infer the most likely error that led to the measured syndrome. This process is known as decoding. This work presents a model that acts as a decoding algorithm for a very common quantum code, the surface code. The surface code is, loosely speaking, the quantum analog of a classical repetition code.
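To make the decoding step concrete, here is a minimal sketch (my own illustration, not anything from the linked work) using the classical 3-bit repetition code: parities are measured, the syndrome is looked up to find the most likely error, and the correction is applied. A surface-code decoder does the same thing at much larger scale, with a far less trivial inference step.

    # Toy syndrome decoder for a 3-bit repetition code (illustration only).
    import numpy as np

    # Parity checks: bit0 XOR bit1, and bit1 XOR bit2.
    H = np.array([[1, 1, 0],
                  [0, 1, 1]])

    # Syndrome -> most likely (single-bit) error for this tiny code.
    SYNDROME_TO_ERROR = {
        (0, 0): np.array([0, 0, 0]),  # no error detected
        (1, 0): np.array([1, 0, 0]),  # flip on bit 0
        (1, 1): np.array([0, 1, 0]),  # flip on bit 1
        (0, 1): np.array([0, 0, 1]),  # flip on bit 2
    }

    def decode(received):
        """Infer the most likely error from the parity syndrome and undo it."""
        syndrome = tuple(int(s) for s in H @ received % 2)
        return (received + SYNDROME_TO_ERROR[syndrome]) % 2

    # Encode a logical 1 as 111, then hit it with a bit flip on position 1.
    print(decode(np.array([1, 0, 1])))  # -> [1 1 1]

Note that the parity checks never reveal whether the encoded bit was 0 or 1, only where the flip happened; that is the classical analogue of the point above about the logical state staying coherent.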
abdullahkhalids (6 days ago), in reply:
I would instead give the example of the Hamming code. As you probably know, you can construct a quantum code, the Steane code, which is analogous to the Hamming code. The Steane code is the simplest triangular color code, i.e. you can arrange all the qubits on a 2D triangular lattice and only do nearest-neighbor interactions [1]. The surface code is a similar quantum code in which the qubits can also be placed on a 2D lattice, except that lattice is made up of squares. Why do we care about 2D surfaces and nearest-neighbor interactions? Because they make building quantum hardware easier.

EDIT: [1] The Steane code's layout is pictured here: https://errorcorrectionzoo.org/c/steane. Seven data qubits sit on the vertices of the triangles, and two syndrome qubits sit on each of the faces.
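To connect the two comments: the classical [7,4] Hamming code's parity-check matrix is the matrix whose rows the Steane code reuses as its X-type and Z-type stabilizers. A small sketch (my own illustration, in Python) of how a Hamming syndrome pinpoints a single bit flip:

    # Classical [7,4] Hamming code: the columns of H are the binary numbers
    # 1..7, so the syndrome of a single bit flip spells out the flipped
    # position in binary.
    import numpy as np

    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    received = np.zeros(7, dtype=int)   # start from the all-zeros codeword
    received[5] = 1                     # flip the bit at 1-indexed position 6

    syndrome = H @ received % 2                               # -> [1, 1, 0]
    position = int("".join(str(int(s)) for s in syndrome), 2)
    print(position)                                           # -> 6

The Steane code measures these same seven-bit parities twice, once as X-type checks and once as Z-type checks, which is what the pairs of syndrome qubits on each face of the triangular layout are there to do.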