kittikitti 3 days ago

This is another hype piece from Google's research and development arm. It's a theoretical application to increase the number of logical qubits in a system by decreasing the error caused by quantum circuits. They just didn't do the last part yet, so the application is yet to be seen.

https://arxiv.org/abs/2408.13687

"Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."

Google forgot to test if it scales, I guess?

wasabi991011 3 days ago | parent | next [-]

It's the opposite of a theoretical application, and it's not a hype piece. It's more like an experimental confirmation of a theoretical result mixed with an engineering progress report.

They show that a certain milestone was achieved (error rate below the threshold), show experimentally that this milestone implies what theorists predicted, talk about how this milestone was achieved, and characterize the sources of error that could hinder further scaling.
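
For concreteness, the theoretical prediction being confirmed is exponential suppression of the logical error rate once the physical error rate drops below the code's threshold. A minimal sketch of that textbook scaling law (the constants A and p_th here are made-up placeholders, not the paper's fitted values):

    # Below-threshold surface code scaling, roughly:
    # logical error per cycle ~ A * (p / p_th)^((d+1)/2) for code distance d.
    def logical_error_rate(p, d, p_th=0.01, A=0.1):
        # p_th and A are illustrative placeholders, not measured values
        return A * (p / p_th) ** ((d + 1) / 2)

    for d in (3, 5, 7):
        below = logical_error_rate(p=0.005, d=d)  # p < p_th: bigger code helps
        above = logical_error_rate(p=0.02, d=d)   # p > p_th: bigger code hurts
        print(f"d={d}: below-threshold {below:.1e}, above-threshold {above:.1e}")

Running it shows why crossing the threshold is the milestone: below p_th the logical error rate shrinks as the code grows, while above it the same growth makes things worse.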

They certainly tested how it scales, up to the largest devices they can currently build. A major part of the paper is about how it scales.

>> "Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."

> Google forgot to test if it scales I guess?

Remember that quantum computers are still being built. The paper is the equivalent of

> We tested the scaling by comparing how our algorithm runs on a chromebook, a server rack, and Google's largest supercomputing cluster and found it scales well.

The sentence you tried to interpret was, continuing this analogy, the equivalent of

> Google's largest supercomputing cluster is not large enough for us; we are currently building an even bigger one, and when we finish, our algorithm should (to the best of our knowledge) continue along this good scaling law.

Strilanc 3 days ago | parent | prev | next [-]

The experiment is literally all about scaling. It tests scaling from distance 3 to 5 to 7. It shows the logical qubit lifetime doubles each time the distance is increased. The sentence you quoted is describing an expectation that this doubling will continue to larger distances, when larger chips are built.
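
To put numbers on it: assuming that factor-of-2 suppression per distance step (call it Lambda ~ 2) continues to hold at larger sizes, which is exactly the "if scaled" part, the logical error rate at distance d would follow eps_3 / Lambda^((d-3)/2). A back-of-the-envelope sketch, where eps_3 is a placeholder starting value rather than a measured number:

    # Extrapolate the observed doubling (Lambda ~ 2 per d -> d+2 step).
    # eps_3 is an illustrative starting logical error rate, not from the paper.
    LAMBDA = 2.0
    eps_3 = 1e-2
    for d in (3, 5, 7, 15, 25):
        eps_d = eps_3 / LAMBDA ** ((d - 3) / 2)
        print(f"distance {d}: logical error per cycle ~ {eps_d:.1e}")

Each step up in distance needs more physical qubits (a distance-d surface code uses on the order of 2*d^2 of them), which is why testing larger distances has to wait for larger chips.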

This is the first quantum error correction experiment showing actual improvement as size is increased (without any cheating such as postselection or only running for a single step). It was always believed in theory that bigger codes should have more protection, but there have been various skeptics over the years saying you'd never actually see these improvements in practice, due to the engineering difficulty or due to quantum mechanics breaking down or something.

Make no mistake; much remains to be done. But this experiment is a clear indication of progress. It demonstrates that error correction actually works. It says that quantum computers should be able to solve qubit quality with qubit quantity.

disclaimer: worked on this experiment

computerdork a day ago | parent [-]

Very neat!

wholinator2 3 days ago | parent | prev [-]

Lol yeah, the whole problem with quantum computation is the scaling; that's literally the entire problem. It's trivial to make a qubit, harder to make 5, impossible to make 1000. "If it scales" is just wishy-washy language covering for "in the ideal scenario where everything works perfectly and nothing goes wrong, it will work perfectly."