vlovich123 3 days ago

Is this an actually good explanation? The introduction immediately made me pause:

> In classical computers, error-resistant memory is achieved by duplicating bits to detect and correct errors. A method called majority voting is often used, where multiple copies of a bit are compared, and the majority value is taken as the correct bit

No. In classical computers, memory is protected with error-correcting codes, not by duplicating bits and majority voting. Duplicating bits would be a very wasteful strategy when you can add significantly fewer bits and achieve the same result, which is what error correction techniques like ECC give you. Maybe they confused it with logic circuits, where there isn't any more efficient strategy?
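
To put numbers on it, here's a minimal sketch (my own illustration, not from the article) of Hamming(7,4), the classic textbook ECC: 3 check bits protect 4 data bits and correct any single flip, whereas triple duplication of the same 4 bits would cost 12 bits total.

```python
# Minimal Hamming(7,4) sketch: 4 data bits + 3 parity bits,
# corrects any single bit flip. Triple duplication would need
# 12 bits total to protect the same 4 data bits.

def hamming74_encode(d):            # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4               # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4               # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # recompute each parity check
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3 # the flipped position, 1-based
    if syndrome:
        c[syndrome - 1] ^= 1        # correct it in place
    return [c[2], c[4], c[5], c[6]] # extract the data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                        # flip one bit in transit
assert hamming74_decode(code) == word
```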

ziofill 3 days ago | parent | next [-]

Physicist here. Classical error correction may not always be a straight up repetition code, but the concept of redundancy of information still applies (like parity checks).

In a nutshell, in quantum error correction you cannot use redundancy because of the no-cloning theorem, so instead you embed the qubit subspace in a larger space (using more qubits) such that when correctable errors happen the embedded subspace moves to a different "location" in the larger space. When this happens it can be detected and the subspace can be brought back without affecting the states within the subspace, so the quantum information is preserved.
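
Here's a toy illustration of that idea for the 3-qubit bit-flip code (my own sketch, tracking only the stabilizer bookkeeping rather than full quantum states): the two parity checks Z1Z2 and Z2Z3 depend only on the error pattern, not on the encoded amplitudes, which is why measuring them locates the error without disturbing the logical state.

```python
# Toy model of the 3-qubit bit-flip code. We track only which
# qubits an X error hit (the "error pattern"), because the two
# stabilizer parities Z1Z2 and Z2Z3 depend only on that pattern,
# never on the encoded amplitudes -- which is why measuring them
# does not collapse the logical state.

SYNDROME_TO_FIX = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # qubit 0 flipped
    (1, 1): 1,     # qubit 1 flipped
    (0, 1): 2,     # qubit 2 flipped
}

def syndrome(err):                      # err = [e0, e1, e2], 1 = X hit
    return (err[0] ^ err[1], err[1] ^ err[2])

def correct(err):
    fix = SYNDROME_TO_FIX[syndrome(err)]
    if fix is not None:
        err[fix] ^= 1                   # apply X again to undo it
    return err

for bad_qubit in range(3):
    err = [0, 0, 0]
    err[bad_qubit] = 1                  # a single bit-flip error
    assert correct(err) == [0, 0, 0]    # always back in the codespace
```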

adastra22 3 days ago | parent | next [-]

You are correct in the details, but not the distinction. This is exactly how classical error correction works as well.

immibis 3 days ago | parent | prev | next [-]

This happens to be the same way that classical error correction works, but quantum.

jessriedel 3 days ago | parent | prev [-]

Just an example to expand on what others are saying: in the N^2-qubit Shor code, the X information is recorded redundantly in N disjoint sets of N qubits each, and the Z information is recorded redundantly in a different partitioning of N disjoint sets of N qubits each. You could literally have N observers each make separate measurements on disjoint regions of space and all access the X information about the qubit. And likewise for Z. In that sense it's a repetition code.
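
For N = 3 that partition structure is easy to write down explicitly. A sketch of the bookkeeping only (which partition carries the X vs the Z information is a matter of convention):

```python
# Partition structure of the N^2-qubit Shor code for N = 3
# (the 9-qubit code): qubits 0..8 on a 3x3 grid. One kind of
# information is recorded redundantly across the rows, the
# other across the columns -- two different partitions into
# disjoint triples.

N = 3
rows = [[r * N + c for c in range(N)] for r in range(N)]
cols = [[r * N + c for r in range(N)] for c in range(N)]

print(rows)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(cols)  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]

# Each partition covers every qubit exactly once, so N separate
# observers, one per set, could each recover that piece of the
# encoded information.
for part in (rows, cols):
    assert sorted(q for s in part for q in s) == list(range(N * N))
```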

adastra22 3 days ago | parent [-]

That’s also correct but not what the sibling comments are saying ;)

There are quantum error correction methods that resemble classical error-correcting codes more than replication, and that resemblance is fundamental: they ARE classical error correction codes transposed into quantum operations.

jessriedel 9 hours ago | parent [-]

I understand that. I’m giving an example that is an instance of the sibling commenters’ claims that more transparently rebuts the idea that quantum codes cannot use redundancy because of no cloning.

abdullahkhalids 3 days ago | parent | prev | next [-]

While you are correct, here is a fun side fact.

The electric signals inside a (classical) processor or digital logic chip are made up of many electrons. Electrons are not fully well behaved and there are often deviations from ideal behavior. Whether a signal gets interpreted as 0 or 1 depends on which way the majority of the electrons are going. The lower the power you operate at, the fewer electrons there are per signal, and the more errors you will see.

So in a way, there is a repetition code in a classical computer (or other similar devices, such as an optical fiber). Just in the hardware substrate, not in software.
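
A quick back-of-the-envelope sketch of that effect (the numbers here are made up for illustration): treat each electron as an independent vote that goes the wrong way with some probability, and compute how often the majority is wrong.

```python
# "Electrons as a repetition code": if each of n electrons
# independently goes the wrong way with probability p, the
# signal is misread when more than half of them do. p = 0.3 is
# an arbitrary illustrative value, not a real device parameter.

from math import comb

def misread(n, p):
    # P(more than half of n electrons go the wrong way)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (11, 101, 1001):
    print(n, misread(n, 0.3))

# The error probability falls off exponentially as n grows,
# which is why lower-power (fewer-electron) signals see more errors.
```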

abtinf 3 days ago | parent | prev | next [-]

This seems like the kind of error an LLM would make.

It is essentially impossible for a human to confuse error correction and “majority voting”/consensus.

GuB-42 3 days ago | parent | next [-]

I don't believe it is the result of an LLM; more like an oversimplification, or maybe a minor fuckup on the part of the author, as simple majority voting is often used in redundant systems, just not for memories, where there are better ways.

As for an LLM result, this is what ChatGPT says, among other things, when asked "How does memory error correction differ from quantum error correction?"

> Relies on redundancy by encoding extra bits into the data using techniques like parity bits, Hamming codes, or Reed-Solomon codes.

And when asked for a simplified answer

> Classical memory error correction fixes mistakes in regular computer data (0s and 1s) by adding extra bits to check for and fix any errors, like a safety net catching flipped bits. Quantum error correction, on the other hand, protects delicate quantum bits (qubits), which can hold more complex information (like being 0 and 1 at the same time), from errors caused by noise or interference. Because qubits are fragile and can’t be directly measured without breaking their state, quantum error correction uses clever techniques involving multiple qubits and special rules of quantum physics to detect and fix errors without ruining the quantum information.

Absolutely no mention of majority voting here.

EDIT: GPT-4o mini does mention majority voting as an example of a memory error correction scheme, but not as the way to do it. The explanation is overall clumsier but generally correct; I don't know enough about quantum error correction to fact-check it.

mmooss 3 days ago | parent | prev | next [-]

People have always made bad assumptions or had misunderstandings. Maybe the author just doesn't understand ECC and always assumed it was consensus-based. I do things like that (I try not to write about them without verifying); I'm confident you and everyone reading this do too.

Suppafly 3 days ago | parent [-]

>Maybe the author just doesn't understand ECC and always assumed it was consensus-based.

That's likely, or it was LLM output and the author didn't know enough to realize it was wrong. We've seen that in a lot of tech articles lately, where authors assume that something that is true-ish in one area is also true in another, and it's obvious they just don't understand the other area they're writing about.

fnordpiglet 3 days ago | parent [-]

Frankly, no state-of-the-art LLM would make this error. Perhaps GPT-3.5 would have, but the space of errors they tend to make now is in areas of ambiguity or things that require deductive reasoning, math, etc. In areas that are well described in the literature, they tend not to make mistakes.

jc_92 2 days ago | parent | prev [-]

I've gone to lots of talks on quantum error correction, and most of them start out by explaining the repetition code. Not because it's widely used, but because it is very easy to explain to someone who knows nothing about classical coding theory. And because the surface code is essentially the product of two repetition codes, if you want to understand surface-code quantum error correction you don't need to understand any classical codes besides the repetition code.

All that is to say that someone who had been to a few talks on quantum error correction but didn't directly work on that problem might reasonably believe that the repetition code is an important classical code.

outworlder 3 days ago | parent | prev | next [-]

That threw me off as well. Majority voting works for industries like aviation, but that's still about checking results of computations, not all memory addresses.

Karliss 3 days ago | parent | prev | next [-]

By a somewhat generous interpretation, classical computer memory depends on implicit duplication/majority voting in the form of an increased cell size for each bit instead of discrete duplication, the same way repeating a signal sent over a wire amounts to using a lower baud rate and holding the signal level for a longer time. A bit isn't stored in a single atom or electron. A cell storing a single bit can be considered a group of smaller cells connected in parallel storing duplicate values, and the majority vote happens automatically in analog form as you read the total sum of the charge within the memory cell.

Depending on how abstractly you talk about computers (which can be the case when contrasting quantum computing with classical computing), memory can refer not just to RAM but to anything holding state, and a classical computer can mean any computing device, including simple logic circuits, not just your desktop computer. Fundamentally, a desktop computer is one giant logic circuit.

Also RAID-1 is a thing.

At higher level backups are a thing.

So I would say there enough examples of practically used duplication for the purpose of error resistance in classical computers.

vlovich123 2 days ago | parent | next [-]

RAID-1 does not do on-the-fly error detection or correction. When you do a read, you read from one of the disks holding a copy but don't validate it. You can probably initiate an explicit recovery if you suspect there's an error, but that's not automatic. RAID is meant to protect against an entire disk failing; you just blindly assume the non-failing disk is completely error free. FWIW, no formal RAID level I'm aware of does majority voting. Any error detection/correction is implemented through parity bits, with all the problems that parity bits entail, unless you use erasure-code versions of RAID 6.

The reason things work this way is that you'd have 2x read amplification on the bus for error detection and 3x read amplification for majority-voting error correction, plus something in the read I/O hot path validating the data, adding latency. Additionally, RAID-1 is 1:1 mirroring, so it can't do error correction automatically at all: it doesn't know which copy is the error-free one. At best it can transparently handle errors when the disk refuses to service the request, but it cannot handle corrupt data errors that the disk doesn't notice. If you do FDE then you would probably at least notice corruption and be able to reliably correct even with just RAID-1, but I'm not sure anyone leverages this.

RAID-1 and other backup/duplication strategies are for durability and availability, but importantly not for error correction. Error correction for durable storage is typically handled by modern techniques based on erasure codes, while memory typically uses Hamming codes because they came first, are cheaper to implement, and match RAM's needs better than Reed-Solomon codes. Raptor codes are more recent, but the patents are owned by Qualcomm; some have expired, but there are continuation patents that might still cover them.
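
To show the parity idea concretely, here's a minimal RAID-4/5-style sketch (my own illustration, not any particular implementation): a single XOR parity block rebuilds any one block whose location you know, i.e. an erasure like a dead disk, which is exactly the failure case above, not silent corruption.

```python
# Minimal single-parity sketch (the RAID-4/5 idea, illustrative
# only): one XOR parity block can rebuild any ONE lost block,
# provided you know WHICH block is gone -- an erasure, like a
# dead disk, not silent corruption. Cost: one extra block
# instead of a full second copy of everything.

from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equally sized blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)             # stored on the parity disk

lost = 1                              # suppose we know disk 1 died
survivors = [blk for i, blk in enumerate(data) if i != lost]
assert xor_blocks(survivors + [parity]) == data[lost]
```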

mathgenius 3 days ago | parent | prev [-]

Yes, and it's worth pointing out these examples because they don't work as quantum memories. Two more: magnetic memory, based on magnets which are magnetic because they are built from many tiny (atomic) magnets, all (mostly) in agreement. Optical storage is similar, much like the parent's example of a signal being slowly sent over a wire.

So the next question is why doesn't this work for quantum information? And this is a really great question which gets at the heart of quantum versus classical. Classical information is just so fantastically easy to duplicate that normally we don't even notice this, it's just too obvious a fact... until we get to quantum.

weinzierl 3 days ago | parent | prev | next [-]

Maybe they were thinking of control systems where duplicating memory, lockstep cores and majority voting are used. You don't even have to go to space to encounter such a system, you likely have one in your car.
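
The voter those systems rely on is tiny. A sketch (my own illustration) of the bitwise majority function that lockstep/TMR hardware implements per output bit:

```python
# The voter at the heart of lockstep/TMR schemes: the bitwise
# majority of three redundant results. In hardware this is just
# (a AND b) OR (b AND c) OR (a AND c) per bit.

def vote(a, b, c):
    return (a & b) | (b & c) | (a & c)

# three lockstep cores compute the same value; one glitches
assert vote(0b1011, 0b1011, 0b0011) == 0b1011
```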

bramathon 3 days ago | parent | prev | next [-]

The explanation of Google's error correction experiment is basic but fine. People should keep in mind that Quantum Machines sells control electronics for quantum computers, which is why they focus on the control and timing aspects of the experiment. I think a more general introduction to quantum error correction would be more relevant to the Hacker News audience.

wslh 3 days ago | parent | prev | next [-]

> > In classical computers, error-resistant memory is achieved by duplicating bits to detect and correct errors. A method called majority voting is often used, where multiple copies of a bit are compared, and the majority value is taken as the correct bit

The author clearly doesn't know the topic, nor did they study the basics in an undergraduate course.

EvgeniyZh 3 days ago | parent | prev | next [-]

It's just a standard example of a code that works classically but not quantumly to demonstrate the differences between the two. More or less any introductory talk on quantum error correction would mention it.

graycat 3 days ago | parent | prev | next [-]

Error correction? Took a graduate course that used

W. Wesley Peterson and E. J. Weldon, Jr., Error-Correcting Codes, Second Edition, The MIT Press, Cambridge, MA, 1972.

Sooo, the subject is not nearly new.

There was a lot of algebra with finite field theory.

UniverseHacker 3 days ago | parent | prev | next [-]

ECC is not easy to explain, and sounds like a tautology rather than an explanation ("error correction is done with error correction") unless you give a full technical explanation of exactly what ECC is doing.

marcellus23 3 days ago | parent [-]

Regardless of whether the parent's sentence is a tautology, the explanation in the article is categorically wrong.

bawolff 3 days ago | parent | next [-]

Categorically might be a bit much. Duplicating bits with majority voting is an error correction code, it's just not a very efficient one.

Like, it's wrong, but it's not totally out-of-this-world wrong. Or more specifically, it's in the correct category.
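
To be concrete (a toy sketch of my own): repetition-with-majority-vote really is a working error correction code, just a wasteful one.

```python
# Repetition with majority voting IS a valid error correcting
# code, just an inefficient one: 3x overhead to fix one flip
# per bit, where Hamming(7,4) fixes one flip per block at 1.75x.

def encode(bits, n=3):
    return [b for bit in bits for b in [bit] * n]

def decode(code, n=3):
    # majority vote over each group of n copies
    return [int(sum(code[i:i + n]) > n // 2)
            for i in range(0, len(code), n)]

msg = [1, 0, 1]
noisy = encode(msg)
noisy[4] ^= 1                 # flip one copy of the middle bit
assert decode(noisy) == msg
```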

vlovich123 3 days ago | parent [-]

It's categorically wrong to say that that's how memory is error-corrected in classical computers, because it is not and never has been done that way. Even for systems like S3 that replicate, there's no error correction happening via the replicas, and the replicas are eventually converted to erasure codes.

bawolff 3 days ago | parent [-]

I'm being a bit pedantic here, but it is not categorically wrong. Categorically wrong doesn't just mean "very wrong"; it is a specific type of being wrong, a type that this isn't.

Repetition codes are a type of error correction code. They are thus in the category of error correction codes. Even if a repetition code is not the right error correction code, it is in the correct category, so it is not a categorical error.

Dylan16807 3 days ago | parent | next [-]

I interpret that sentence as talking about real computers, which does put it outside the category.

bawolff a day ago | parent [-]

That's the definition of a normal error, not a category error.

If you disagree, what do you see as something that would be in the correct category but wrong in the sentence?

The normal definition of a category error is something that is so wrong it doesn't make sense on a deep level. Like, for example, if they suggested quicksort as an error correction code.

The mere fact we are talking about "real" computers should be a tip-off that it's not a category error, since people can build new computers. Category errors are wrong a priori. It's possible someone will build a computer tomorrow using a repetition code for error correction. It is not possible they will use quicksort for ECC. Repetition codes are in the right category of things even if they are the wrong instance. Quicksort is not in the right category.

Dylan16807 a day ago | parent [-]

> The normal definition of category error is something that is so wrong it doesn't make sense on a deep level.

Can you show me a definition that says that about the phrase "categorically wrong"?

And I think the idea that computers could change is a bit weak.

cycomanic 3 days ago | parent | prev [-]

Well, it's about as categorically wrong as saying quantum computers use similar error correction algorithms to classical computers. Categorically, both are error correction algorithms.

vlovich123 3 days ago | parent | prev | next [-]

Yeah, I couldn't quite remember whether ECC is just Hamming codes or uses something more modern like fountain codes, although those are technically FEC. So in the absence of stating something incorrectly, I went with the tautology.

cortesoft 3 days ago | parent | prev [-]

Eh, I don't think it is categorically wrong... ECCs are based on the idea of sacrificing some capacity by adding redundant bits that can be used to correct some number of errors. The simplest ECC would be just duplicating the data, and it isn't categorically different from the real ECCs used.

vlovich123 3 days ago | parent [-]

Then you're replicating, not error correcting. I've not seen any replication system that uses the replicas to detect errors. Even RAID 1, which is a pure mirroring solution, only fetches one of the copies when reading and will ignore corruption on one of the disks unless you initiate a manual verification. There are technical reasons for that, related to read amplification as well as what it does to your storage cost.

cortesoft 3 days ago | parent [-]

I guess that is true; pure replication would not allow you to correct errors, only detect them.

However, I think explaining the concept as duplicating some data isn't horribly wrong for non-technical people. It is close enough to allow the person to understand the concept.

vlovich123 3 days ago | parent [-]

To be clear: a hypothetical replication system with 3 copies could be used to correct errors using majority voting.

However, there's no replication system I've ever seen (memory, local storage, or distributed storage) that detects or corrects errors using replication, because of the read amplification problem.

bawolff 3 days ago | parent [-]

https://en.wikipedia.org/wiki/Triple_modular_redundancy

vlovich123 2 days ago | parent [-]

The ECC memory page has the same nonsensical statement:

> Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy (TMR). The latter is preferred because its hardware is faster than that of Hamming error correction scheme.[16] Space satellite systems often use TMR,[17][18][19] although satellite RAM usually uses Hamming error correction.[20]

So it makes it seem like TMR is used for memory, only to then back off and say it's not used for it. ECC RAM does not use TMR, and I suggest that the Wikipedia page is wrong and confused about this. The cited links on both pages are either dead or completely unrelated, discussing TMR within the context of FPGAs being sent into space. And yes, TMR is a fault tolerance strategy for logic gates and compute more generally. It is not a strategy that has been employed for storage, full stop, and evidence to the contrary is going to require something stronger than confusing wording on Wikipedia.

refulgentis 3 days ago | parent | prev [-]

I think it's fundamentally misleading, even on the central quantum stuff:

I missed what you saw; that's certainly a massive oof. It's not even wrong, in the Pauli sense, i.e. it's not just a simplistic rendering of ECC.

It also strongly tripped my internal GPT detector.

Also, it goes on and on about realtime decoding; the foundation of the article is that Google's breakthrough is real time, and the Google article was quite clear that it isn't real time.*

I'm a bit confused, because it seems completely wrong, yet they published it, and there's enough phrasing that definitely doesn't trip my GPT detector. My instinct is someone who doesn't have years of background knowledge / formal comp sci & physics education made a valiant effort.

I'm reminded that my thoroughly /r/WSB-ified MD friend brings up "quantum computing is gonna be big, what stonks should I buy" every 6 months, and a couple days ago he sent me a screenshot of my AI app that had a few conversations with him hunting for opportunities.

* "While AlphaQubit is great at accurately identifying errors, it’s still too slow to correct errors in a superconducting processor in real time"

bramathon 3 days ago | parent | next [-]

This is not about AlphaQubit. It's about a different paper (https://arxiv.org/abs/2408.13687), and they do demonstrate real-time decoding.

> we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor’s fast 1.1 μs cycle duration

refulgentis 3 days ago | parent [-]

Oh my, I really jumped to a conclusion. And what fantastic news to hear. Thank you!

vlovich123 3 days ago | parent | prev [-]

Yeah, I didn't want to just accuse the article of being AI-generated since quantum isn't my specialty, but this kind of error instantly tripped my "it doesn't sound like this person knows what they're talking about" alarm, which likely indicates a bad LLM helped summarize the quantum paper for the author.