▲ whimsicalism, a day ago:
People overly beholden to the tried-and-true 'known' way of addressing a problem space, not considering (or outright belittling) alternatives. Many of the things that have been most aggressively 'bitter lesson'ed in the last decade fall into this category.
▲ awesome_dude, a day ago:
Like this bug report? The things that have been "disrupted" haven't delivered: blockchains are still a scam, food delivery services are worse than before (restaurants are worse off, and so are the people making the deliveries), and taxis still needed to go back to vetting drivers to ensure they weren't fiends.
▲ hbbio, a day ago:
> Blockchains are still a scam

Did you actually look at the blockchain node implementations as of 2025 and what's on the roadmap? Ethereum nodes and L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work. (Not talking about "coins" and such, obviously; that's another debate.)
▲ otterley, a day ago:
> Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.

What are you comparing against? Aren't they slower, less convenient, and less available than, say, DynamoDB or Spanner, both of which have been in full-service, reliable operation since 2012?
▲ derefr, a day ago:
I think they mean big-D "Distributed", i.e. in the sense that a DHT is Distributed: decentralized in both a logical and a political sense.

A big DynamoDB/Spanner deployment is great while you can guarantee some benevolent (or just not-malevolent) org is around to host the deployment for everyone else. But technologies of this type have no answer for the key problem of ensuring the infra survives its own founding/maintaining org being co-opted and enshittified by parties hostile to the central purpose of the network. Blockchains, with all the overhead and pain that comes with them, are basically what you get when you take the classical small-d distributed database design and add the components necessary to get that extra property.
▲ hbbio, a day ago:
Ethereum is so good at being distributed that it's decentralized. DynamoDB and Spanner are both great, but they're meant to be run by a single admin, which is a considerably simpler problem to solve.
▲ Agingcoder, a day ago:
Which are both systems with a fair amount of theory behind them!
▲ drdrey, a day ago:
The big difference is the trust assumption: anyone can join or leave the network of nodes at any time.
▲ charcircuit, a day ago:
I think you are being downvoted because Ethereum requires you to stake 32 ETH (about $100k), and the entry queue right now is about 9 days while the exit queue is about 20 days. So only people with enough capital can join the network, and it takes quite some time to join or leave, as opposed to being able to do so at any time you want.
▲ drdrey, a day ago:
OK, but these are details; the point is that the operators of the database are external, selfish, and fluctuating.
▲ j16sdiz, a day ago:
The traditional way is paper trails and/or WORM (write-once, read-many) devices with local checksums. You can have multiple replicas without any extra computation for hashing and the like.
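
The checksummed-replica idea above can be sketched in a few lines. This is a minimal illustration, not any particular product's format: the record layout and the choice of SHA-256 are assumptions for the example.

```python
import hashlib

def checksum(record: bytes) -> str:
    # A local integrity check: each replica verifies its own copy
    # independently, with no coordination or consensus required.
    return hashlib.sha256(record).hexdigest()

# "Write once": store the record alongside its digest on every replica.
record = b"2025-01-15T12:00:00Z,audit-event,user=42"
stored = (record, checksum(record))

# "Read many": any replica detects silent corruption by recomputing.
data, digest = stored
assert checksum(data) == digest  # a mismatch would signal corruption
```

The point of the comment holds here: replication is just copying bytes, and verification is a cheap local hash, with no distributed protocol in the read or write path.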
▲ whimsicalism, 16 hours ago:
idk, sounds like you're ignoring tried-and-true microeconomic theoretical principles about consumer surplus. Better get back to the books before commenting.
▲ MrDarcy, a day ago:
The ivory tower standing in the way of delivering value, I think.
▲ colechristensen, a day ago:
To be more specific: goals of perfection where perfection does not matter at all.
▲ johncolanduoni, a day ago:
What does bothering to read some distributed-systems literature have to do with demanding unnecessary perfection? Did NATS say in their docs that JetStream accepted split-brain conditions as a reality, or that metadata corruption could silently delete a topic? You could maybe argue the fsync default was a tradeoff, though I think it's a bad one (not the existence of the flag, just the default being "false"). The rest are not the kind of bugs you expect to see in a five-year-old persistence layer.
▲ stmw, a day ago:
Exactly. "Losing data from acknowledged writes" is not failing to be perfect; it's failing to deliver on the (advertised) basics of storing your data.
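
The fsync tradeoff under discussion can be sketched generically. This is not NATS's actual code or API; `fsync_on_ack` is a hypothetical flag standing in for whatever knob a given broker exposes, and the example only shows why the default matters for acknowledged writes.

```python
import os

def append_acknowledged(path: str, payload: bytes, fsync_on_ack: bool = True) -> None:
    # Append the record, then (optionally) force it to stable storage
    # before the write is acknowledged back to the client.
    with open(path, "ab") as f:
        f.write(payload)
        f.flush()                  # userspace buffers -> OS page cache
        if fsync_on_ack:
            os.fsync(f.fileno())   # page cache -> durable storage
    # Only now is it safe to send the ack. With fsync_on_ack=False the
    # data may still sit in the page cache, so a crash or power loss
    # after the ack can drop a write the client believes is stored.
```

With the flag defaulting to false, durability of an "acknowledged" write depends on the kernel's writeback timing, which is exactly the acknowledged-write data loss the comments above object to.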
▲ LaGrange, a day ago:
Last time I was at school, requirements analysis was a thing, but do go off.