rollcat | 5 days ago
First things first, I'm a crypto-sceptic - to put it in the mildest terms possible. You're spot on about CPU usage. However: how would you design a RasPi-efficient, fault-tolerant, decentralised ledger with strict ordering and a transparency log? Consider CAP.

Existing banking systems choose partition tolerance (everyone basically does their own thing all the time) and eventual consistency via peering - which is why all settlements are delayed (in favour of fraud detection / mitigation), but you get huge transaction throughput, very high availability, and power efficiency. (Any existing inefficiencies can and should be optimised away; I guess we can blame complacency.) The system works on distributed (each bank) but centralised (customer->bank) authority, held up by regulations, capital, and identity verification.

Online authority works in practice - we collectively trust the Googles, Apples, etc. to run our digital lives. Cryptocurrency enthusiasts trust the authors and contributors of the software, plus the CPU/OS vendors, so it's not like we're anywhere near an absolute zero of authority. Online identity verification objectively sucks, though, so that's out the window.

I guess this could work by individual users delegating to a "host" node (which is what is already happening with managed wallets), and host nodes peering with each other based on mutual trust. Kinda like Mastodon, email, or even autonomous systems - the backbone of the Internet itself. Just a brain dump.
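To make the "transparency log per host node" part concrete, here's a minimal sketch: a hash-chained, append-only log that any peer can re-verify, cheap enough to run on a Pi. The class and field names are made up for illustration, and this deliberately ignores consensus between hosts entirely - it only shows the tamper-evidence part.

    import hashlib
    import json
    import time

    class HostLedger:
        """Append-only, hash-chained log a single 'host' node could maintain.
        Each entry commits to the previous one, so a peering node auditing
        the log can detect any retroactive edit by re-hashing the chain."""

        def __init__(self):
            self.entries = []

        def append(self, payload: dict) -> dict:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            body = {
                "index": len(self.entries),
                "timestamp": time.time(),
                "payload": payload,
                "prev_hash": prev_hash,
            }
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            entry = {**body, "hash": digest}
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # A peer re-checks the whole chain; any tampering breaks a link.
            prev_hash = "0" * 64
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if digest != entry["hash"] or entry["prev_hash"] != prev_hash:
                    return False
                prev_hash = entry["hash"]
            return True

    # A host node records delegated transactions; a peer audits the log.
    ledger = HostLedger()
    ledger.append({"from": "alice", "to": "bob", "amount": 10})
    ledger.append({"from": "bob", "to": "carol", "amount": 4})
    print(ledger.verify())  # True unless an entry was altered after the fact

The hard parts (ordering across hosts, what happens when trust breaks down) are exactly what the peering-on-mutual-trust bit would have to solve.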
topranks | 4 days ago
Why does it have to be decentralised (by which I assume you mean permissionless to join as a validator)?

The only reason for this - it would seem to me - is to have nobody in control who can be subject to law enforcement. If you need that kind of decentralisation, a blockchain, with all its inefficiency, is the only choice.

Societies should not require such things, though. They need trustable institutions and intermediaries to function, in finance and many other areas.
DennisP | 5 days ago
Also, the capacity is significantly higher once you include L2, and it's increasing rapidly. With zk-rollups and a decentralized sequencer, you basically pay no penalty vs. putting transactions on L1. So far I think the sequencers are centralized for all the major rollups, but there's still a guarantee that transactions will be valid and that you can exit to L1.

Scaling is improving too. Rollups store compressed data on L1 and only need the full data to be available for a month or so. That temporary storage is cheaper, but currently it's still duplicated on all nodes. The next L1 upgrade (in November) will use data sampling, so each node can store a small random fraction of that data, with very low probability of any data being lost. It will also switch to a more efficient data storage structure for L1.

With these in place, they can gradually move to much larger L2 capacity, possibly into the millions of transactions per second. For the long term, there's also research on putting zk tech on L1, which could get even L1 transactions up to 10,000/second.
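To see why sampling a small random fraction can be safe, here's a rough back-of-the-envelope sketch. The chunk count is made up, and it assumes (illustratively) the data is erasure-coded so any half of the chunks can reconstruct the whole blob - these aren't the real protocol parameters, just the shape of the argument.

    import random

    # Toy model of data availability sampling. If any half of the chunks is
    # enough to reconstruct the blob, a publisher hiding data must withhold
    # at least half the chunks, and a sampling node only gets fooled if every
    # one of its random probes lands on a chunk that was actually published.

    TOTAL_CHUNKS = 512        # hypothetical chunk count, not a real protocol value
    WITHHELD_FRACTION = 0.5   # minimum an attacker must withhold to block reconstruction

    def miss_probability(samples: int) -> float:
        """Chance that `samples` independent random probes all hit published chunks."""
        return (1 - WITHHELD_FRACTION) ** samples

    for k in (8, 16, 32):
        print(f"{k} samples per node -> miss probability ~ {miss_probability(k):.2e}")

    # Quick Monte Carlo check for 8 probes per node (sampling with replacement).
    withheld = set(random.sample(range(TOTAL_CHUNKS), TOTAL_CHUNKS // 2))
    trials = 100_000
    misses = sum(
        1
        for _ in range(trials)
        if not any(random.randrange(TOTAL_CHUNKS) in withheld for _ in range(8))
    )
    print(f"simulated miss rate with 8 probes: {misses / trials:.2e}")

The point is just that each node only has to touch a handful of chunks for withheld data to be noticed somewhere with overwhelming probability, which is what lets per-node storage shrink as capacity grows.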