wredcoll 6 days ago

> Why not have a central bank currency that can be traded on the blockchain, especially since converting it to real money will still entail KYC?

Because literally the only point is to avoid the existing banking system, and you can do that with a postgres database with much less CPU involved.
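
Something like this is all it takes (a minimal sketch using Python's sqlite3 as a stand-in for an actual postgres instance; the schema is invented for illustration):

    import sqlite3

    # Stand-in for postgres: a tiny single-table ledger.
    # Schema and names are invented for illustration.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER NOT NULL);
        CREATE TABLE transfers (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            src TEXT NOT NULL, dst TEXT NOT NULL, amount INTEGER NOT NULL
        );
    """)
    db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")

    def transfer(src, dst, amount):
        """Move funds atomically; roll back on overdraft."""
        with db:  # one ACID transaction: commit on success, roll back on error
            (balance,) = db.execute(
                "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                       (amount, src))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                       (amount, dst))
            db.execute("INSERT INTO transfers (src, dst, amount) VALUES (?, ?, ?)",
                       (src, dst, amount))

    transfer("alice", "bob", 40)
    print(db.execute("SELECT * FROM accounts ORDER BY id").fetchall())
    # [('alice', 60), ('bob', 40)]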

sunshine-o 6 days ago | parent | next [-]

> Because literally the only point is to avoid the existing banking system, and you can do that with a postgres database with much less CPU involved.

Ethereum actually has very low resource requirements nowadays.

You can run a validator node on a Raspberry Pi, a full sync node on an Intel N100 mini PC with a big, fast SSD, and the "light clients" can probably run on something very small.

I have seen banks bring in semi-trailers full of diesel generators and plug them into their mainframes because the power draw during big batch jobs was too high for the grid.

ChadNauseam 5 days ago | parent | next [-]

I like crypto (I used to work in the industry), but that's not quite a fair comparison.

1. Running a validator is inexpensive in terms of compute, but there are 1,000,000 validators or something, which adds up to a lot of CPU usage. Of course, I think it's insanely awesome that you can run some code on Ethereum and it'll be replicated on 1,000,000 independently-operated machines, but it's not a very CPU-efficient strategy.

2. Banks doing those batch jobs probably had much higher TPS than Ethereum.

sunshine-o 5 days ago | parent [-]

> Banks doing those batch jobs probably had much higher TPS than Ethereum.

Yes, the platform running in most banks, usually built on what we call "mainframes", is still mind-blowing, with incredible performance. Also, just one of those CPUs is about the price of a house...

Also, the requirements I cited are for running an Ethereum mainnet "layer 1" node. And most of the "TPS" happens on the layer 2s anyway.

So it is hard to compare technically. But one thing is for sure: becoming an active participant in the Ethereum mainnet has a very low barrier to entry. They got rid of the whole energy-intensive "proof of work" part a few years ago. For a full sync node, the cost is more at the bandwidth and disk levels.

topranks 4 days ago | parent | prev [-]

Just because they ditched proof-of-work doesn't make it efficient.

The blockchain structure, the validation mechanism, etc. are still a very inefficient way to do general compute or database-type functions.

DennisP 6 days ago | parent | prev | next [-]

There's not that much CPU involved. Most of the stablecoins are on Ethereum, and I think the rest are on other proof-of-stake platforms, not Bitcoin.

jakewins 5 days ago | parent [-]

Ethereum is able to process something like 150 transactions per second, using about 1,000,000 validator machines.

Postgres running on a single Raspberry Pi does something like 200 TPC-B read/write transactions per second.

Saying Ethereum "is not using very much CPU" is baffling to me. It is the state of the art in this regard, and it still uses something like six orders of magnitude more CPU than a normal database running a ledger workload.
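
To spell out the arithmetic (same round numbers as above; napkin math, not a benchmark):

    import math

    # Machine-seconds of compute burned per transaction, using the
    # thread's round numbers (not measurements).
    eth_machines = 1_000_000   # worst case: one machine per validator
    eth_tps = 150
    pi_tps = 200               # TPC-B-ish transactions on one Raspberry Pi

    eth_cost = eth_machines / eth_tps   # ~6,700 machine-seconds per tx
    pg_cost = 1 / pi_tps                # 0.005 machine-seconds per tx

    print(f"ratio: {eth_cost / pg_cost:.1e}")                            # 1.3e+06
    print(f"orders of magnitude: {math.log10(eth_cost / pg_cost):.1f}")  # 6.1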

rollcat 5 days ago | parent | next [-]

First things first: I'm a crypto-sceptic, to put it in the mildest terms possible.

You're spot on about CPU usage. However: how would you design a RasPi-efficient, fault-tolerant, decentralised ledger with strict ordering and a transparency log?

Consider CAP. Existing banking systems choose partition tolerance (everyone basically does their own thing all the time) plus eventual consistency via peering, which is why all settlements are delayed (in favour of fraud detection/mitigation), but in exchange you get huge transaction throughput, very high availability, and power efficiency. (Any existing inefficiencies can and should be optimised away; I guess we can blame complacency.)
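
To make the settlement model concrete, here's a toy version (the bank names and the netting rule are invented; real systems clear through central counterparties with far more checks):

    from collections import defaultdict

    # Each bank records payments locally during the day (high availability,
    # partition tolerance), then inter-bank balances are netted at
    # settlement time (eventual consistency). Purely illustrative.
    payments = [
        ("bank_a", "bank_b", 120),
        ("bank_b", "bank_a", 70),
        ("bank_a", "bank_b", 30),
    ]

    def settle(payments):
        """Net all payments between each pair of banks into one transfer."""
        net = defaultdict(int)
        for src, dst, amount in payments:
            pair = (min(src, dst), max(src, dst))
            net[pair] += amount if src == pair[0] else -amount
        return dict(net)

    print(settle(payments))
    # {('bank_a', 'bank_b'): 80} -> one transfer: bank_a owes bank_b 80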

The system works based on distributed (each bank) but centralised (customer->bank) authority, held up by regulations, capital, and identity verification.

Online authority works in practice - we collectively trust the Googles, Apples, etc. to run our digital lives. Cryptocurrency enthusiasts trust the authors and contributors of the software, the CPU/OS vendors, and so on, so it's not like we're anywhere near an absolute zero of authority.

Online identity verification objectively sucks, so that is out the window. I guess this could work by individual users delegating to a "host" node (which is what is already happening with managed wallets), and host nodes peering with each other based on mutual trust. Kinda like Mastodon, email, or even autonomous systems - the backbone of the Internet itself.

Just a brain dump.

topranks 4 days ago | parent | next [-]

Why does it have to be decentralised (by which I assume you mean permissionless to join as a validator)?

The only reason for this - it would seem to me - is the ability to have nobody in control who can be subject to law enforcement.

If you need this kind of decentralisation, blockchain, with all its inefficiency, is the only choice.

Societies should not require such things though. They need to have trustable institutions and intermediaries to function, in finance and many other areas.

rollcat 4 days ago | parent [-]

> Societies should not require such things though. They need to have trustable institutions and intermediaries to function, in finance and many other areas.

...which is more or less the same conclusion that I've arrived at by the end.

DennisP 5 days ago | parent | prev [-]

Also, the capacity is significantly higher with L2s included, and it's increasing rapidly.

With zk-rollups and a decentralized sequencer, you basically pay no security penalty vs. putting transactions on L1. So far I think the sequencers are centralized for all the major rollups, but there's still a guarantee that transactions will be valid and that you can exit to L1.

Scaling is improving too. Rollups store compressed data on L1 and only need the full data to be available for a month or so. That temporary storage is cheaper, but it's currently still duplicated on all nodes. The next L1 upgrade (in November) will use data availability sampling, so each node can store a small random fraction of that data with very low probability of any data being lost. It will also switch L1 to a more efficient data storage structure.
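
For intuition, here's roughly why sampling a small fraction is safe (a sketch: the 50% reconstruction threshold is how erasure-coded data availability sampling is usually described, and the sample counts are made up):

    # If the data is erasure-coded so that any 50% of chunks can reconstruct
    # the whole, withholding data means hiding at least half the chunks. A
    # node sampling k random chunks then misses the withholding with
    # probability (1/2)^k.
    def detection_probability(k_samples: int, hidden_fraction: float = 0.5) -> float:
        return 1 - (1 - hidden_fraction) ** k_samples

    for k in (8, 16, 30):
        print(f"{k:2d} samples -> p(detect) = {detection_probability(k):.10f}")
    #  8 samples -> p(detect) = 0.9960937500
    # 16 samples -> p(detect) = 0.9999847412
    # 30 samples -> p(detect) = 0.9999999991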

With these in place, they can gradually move to much larger L2 capacity, possibly into the millions of transactions per second. For the long term, there's also research on putting zk tech on L1, which could get even L1 transactions up to 10,000/second.

ricericerice 4 days ago | parent | prev [-]

There are 1,000,000 validators (where a "validator" is defined as a public key), but you can run multiple validators per machine. Most estimates that crawl the p2p network to index nodes come out at ~20,000 machines.

That doesn't invalidate your point, but it at least shaves off a couple of orders of magnitude.
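
Redoing the napkin math upthread with ~20,000 machines (same caveats about all the numbers):

    import math

    machines = 20_000   # estimated physical nodes, many validator keys each
    eth_tps = 150
    pi_tps = 200

    ratio = (machines / eth_tps) / (1 / pi_tps)
    print(f"{ratio:.1e} ({math.log10(ratio):.1f} orders of magnitude)")
    # 2.7e+04 (4.4 orders of magnitude, down from ~6.1)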

And a single PG node is not a fair comparison; we're talking about 100%-uptime global networks. Visa does about 70,000 transactions per second - how many servers do you think they run across their infra?

wredcoll 4 days ago | parent [-]

> Visa does about 70,000 transactions per second - how many servers do you think they run across their infra?

So uh, how do we scale Ethereum's 150 TPS to that?

DennisP 4 days ago | parent [-]

ZK rollups, which greatly compress the data posted on chain without losing security guarantees, combined with temporary storage using data availability sampling, so each node only has to store a small portion of it. The zk rollups are live today and already support significantly more than 150 TPS; the data sampling goes live in November. There's a lot more work to be done, but that puts the major pieces in place.
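
As napkin math for why rollups raise the ceiling (every number below is an illustrative round figure, not Ethereum's actual limits):

    # Illustrative rollup-throughput estimate. Real blob counts, blob sizes,
    # and per-tx footprints vary; round numbers for intuition only.
    blob_bytes = 128 * 1024      # one data blob
    blobs_per_block = 6
    block_seconds = 12
    bytes_per_rollup_tx = 16     # heavily compressed tx footprint

    tps = blobs_per_block * blob_bytes / bytes_per_rollup_tx / block_seconds
    print(f"~{tps:,.0f} TPS of blob-backed rollup capacity")   # ~4,096 TPS
    # Data sampling lets the blob count grow without every node storing
    # everything, which is where the bigger multiples come from.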

algo_lover 6 days ago | parent | prev [-]

But with multiple parties involved, who has the right to read from and write to the postgres instance? How do we make sure transactions were not forged? How do we know the data at rest is not being tampered with?

Blockchains solve that. Newer blockchain protocols, especially modern L1s, are much faster and easier on the environment, and they provide all the immutability, transparency, and traceability benefits.

oblio 6 days ago | parent | next [-]

You know you can just use regular cryptography to validate data, right?
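
For example, an append-only log becomes tamper-evident with nothing fancier than a hash chain (a minimal sketch; a real deployment would layer signatures on top for authenticity):

    import hashlib, json

    # Tamper-evident append-only log: each entry commits to the previous
    # entry's hash, so rewriting any record breaks every later link.
    def entry_hash(prev, record):
        body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

    def append(log, record):
        prev = log[-1]["hash"] if log else "0" * 64
        log.append({"prev": prev, "record": record,
                    "hash": entry_hash(prev, record)})

    def verify(log):
        prev = "0" * 64
        for e in log:
            if e["prev"] != prev or e["hash"] != entry_hash(prev, e["record"]):
                return False
            prev = e["hash"]
        return True

    log = []
    append(log, {"from": "alice", "to": "bob", "amount": 40})
    append(log, {"from": "bob", "to": "carol", "amount": 10})
    print(verify(log))                     # True
    log[0]["record"]["amount"] = 9000      # tamper with history
    print(verify(log))                     # False: tampering detected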

Also, you always have to trust someone, in this case Stripe.

Regarding L1 blockchains, how exactly do they solve the speed problem for a distributed global database that needs to be replicated everywhere for the security guarantees to actually work?

What do they forgo out of https://en.m.wikipedia.org/wiki/CAP_theorem ?

packetlost 6 days ago | parent | next [-]

Pretty much always A. In systems like this, it's better to deny transactions than allow inconsistencies.
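
In practice that usually looks like a quorum rule (a toy sketch, not any particular chain's consensus):

    # Consistency over availability: accept a write only when a majority of
    # replicas acknowledge it; a partitioned minority refuses rather than
    # risk divergent ledgers. Names and numbers are illustrative.
    def try_write(reachable_replicas: int, total_replicas: int) -> bool:
        quorum = total_replicas // 2 + 1
        return reachable_replicas >= quorum

    print(try_write(5, 7))   # True: quorum reached, transaction commits
    print(try_write(3, 7))   # False: minority side denies the transaction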

jekrb 5 days ago | parent | prev [-]

Non-potato hardware and elbow grease in the software: https://github.com/anza-xyz/agave

oblio 4 days ago | parent [-]

Is there a white paper for what would be a revolutionary discovery in the field of software?

topranks 4 days ago | parent | prev [-]

We need to trust those running the system.

Societies cannot function without trusted intermediaries, in finance and many other things.

If we are in a democracy, then the government regulates such organisations and should punish those who do not comply.

Blockchain doesn't scale as a replacement, so the point is moot.