Suppafly 3 days ago

>Maybe the author just doesn't understand ECC and always assumed it was consensus-based.

That's likely, or it was LLM output and the author didn't know enough to recognize it was wrong. We've seen that in a lot of tech articles lately: authors assume that something that is true-ish in one area is also true in another, and it's obvious they just don't understand the other area they're writing about.
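To make the ECC point concrete: error-correcting codes locate and fix a flipped bit algebraically, via a parity-check syndrome, rather than by any kind of consensus or voting among redundant copies. A minimal Hamming(7,4) sketch (my own illustration, not from the article):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits.
# Bit positions are 1..7; positions 1, 2, 4 hold parity.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword."""
    c = [0] * 8  # index 0 unused, positions 1..7
    c[3], c[5], c[6], c[7] = d  # data bits
    # Each parity bit covers the positions whose index has that bit set.
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def correct(word):
    """Fix up to one flipped bit using the syndrome, not voting."""
    c = [0] + list(word)
    # The syndrome is the XOR of the (1-indexed) positions of set bits;
    # for a valid codeword it is 0, otherwise it *is* the error position.
    syndrome = 0
    for i in range(1, 8):
        if c[i]:
            syndrome ^= i
    if syndrome:
        c[syndrome] ^= 1
    return c[1:]

codeword = encode([1, 0, 1, 1])
corrupted = codeword[:]
corrupted[4] ^= 1  # flip the bit at position 5
print(correct(corrupted) == codeword)  # the single error is repaired
```

The decoder never compares multiple copies of the data; a single stored word plus its parity bits is enough to pinpoint the error, which is exactly why "consensus-based" is the wrong mental model for ECC.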

fnordpiglet 3 days ago | parent [-]

Frankly, no state-of-the-art LLM would make this error. Perhaps GPT-3.5 would have, but the errors they tend to make now fall in areas of ambiguity or things that require deductive reasoning, math, etc. In areas that are well described in the literature, they tend not to make mistakes.