Hashiverse (https://github.com/hashiverse/hashiverse) is an open-source decentralized social network protocol where Sybil
resistance, rate limiting, peer reputation, and content moderation all fall out of one design choice: every action carries a
proof-of-work cost calibrated to how much abuse it could cause. No central servers, no DNS dependency, no registration authority,
no moderation team. Rust core, WASM browser client, volunteers on $5 VPS machines.
It is Twitter-shaped (posts, follows, hashtags, timelines). The design problem that usually kills these projects on day one is Sybil
resistance without a gatekeeper, so that is what I most want feedback on. Signatures and encryption are conventional (Ed25519 +
ML-DSA + FN-DSA, ChaCha20-Poly1305, Blake3). The interesting surface is how every protocol action is priced in proof-of-work
calibrated to its abuse potential.
Shared primitive: a data-dependent chain over 17 hash algorithms. Five rounds, each selecting one of 17 algorithms (Blake2s/b,
SHA-2/3 at 256/384/512, Keccak-256/384/512, Groestl-256/512, Whirlpool, Skein-256/512, Blake3) and applying it once or twice. The
algorithm index and repetition count for round N come from bytes of round N-1's output, so dispatch is data-dependent and
resolved only at runtime.
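A Python sketch of the dispatch loop, with one big caveat: hashlib ships no Keccak, Groestl, Whirlpool, or Skein, so the table below is a stand-in for the real 17-entry one, and the seed round and byte offsets are assumptions, not the wire spec.

```python
import hashlib

# Stand-in table: the real protocol table has all 17 algorithms,
# including Keccak, Groestl, Whirlpool, Skein, and Blake3.
ALGOS = [
    "blake2s", "blake2b",
    "sha256", "sha384", "sha512",
    "sha3_256", "sha3_384", "sha3_512",
]

def chained_hash(data: bytes, rounds: int = 5) -> bytes:
    out = hashlib.sha256(data).digest()  # fixed seed round (assumption)
    for _ in range(rounds):
        # Algorithm index and repetition count come from bytes of the
        # previous round's output, so dispatch resolves only at runtime.
        algo = ALGOS[out[0] % len(ALGOS)]
        reps = 1 + (out[1] & 1)  # apply once or twice
        for _ in range(reps):
            out = hashlib.new(algo, out).digest()
    return out
```

Because dispatch depends on intermediate digests, a fixed-pipeline ASIC cannot precommit to an algorithm order; it has to implement every table entry plus a runtime selector.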
Honest prior art: Evan Duffield's X11 (Dash, 2014) chained 11 SHA-3 finalists with exactly this thesis. X11 ASICs (Baikal,
iBeLink) shipped by 2016. Multi-hash chaining delays ASICs, it does not prevent them. What's different here is data-dependent
dispatch (X11's pipeline is fixed) and variable repetition count. The honest question is not "is this ASIC-proof?" but "how much
delay does data-dependent dispatch buy, and what software-update cadence should a protocol with no upgrade authority plan for?"
Layer 1: Server-ID PoW (DHT membership). Generating a server identity means grinding a salt with the server's public keys through
the chained hash until the derived 256-bit Kademlia ID has enough leading zero bits. Hours on commodity hardware per identity.
Two compounding mitigations: bucket location IDs rotate on a monthly time epoch (the keyspace region around a user shifts
deterministically), and prolific users fan out across more buckets as the hierarchy subdivides under load. An attacker pays
admission PoW against a moving target whose surface grows with how prolific the target is.
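A sketch of the identity grind, with sha256 standing in for the chained hash; the function names and 8-byte salt width are illustrative assumptions.

```python
import hashlib
import itertools

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a digest."""
    n = 0
    for byte in digest:
        if byte == 0:
            n += 8
        else:
            n += 8 - byte.bit_length()
            break
    return n

def grind_server_id(pubkeys: bytes, difficulty: int) -> tuple[int, bytes]:
    """Grind a salt until hash(salt || pubkeys) has `difficulty` leading
    zero bits; the digest doubles as the derived 256-bit Kademlia ID."""
    for salt in itertools.count():
        digest = hashlib.sha256(salt.to_bytes(8, "big") + pubkeys).digest()
        if leading_zero_bits(digest) >= difficulty:
            return salt, digest
```

At the hours-per-identity cost described above, `difficulty` would sit far higher than anything you would run in a demo; each extra bit doubles the expected grind.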
Layer 2: RPC PoW. Every RPC carries a PoW over (timestamp, salt, payload, client ID, destination server ID). Under-threshold
requests are rejected before payload parse. Timestamp pinning prevents replay; ID pinning prevents reuse across (client, server)
pairs. Knock-on: because the destination server's ID is in the PoW, servers handling real load accumulate a routing-table
reputation. A fresh Sybil has no traffic history; to affect the routing table they must either be useful or grind their own fake
reputation by paying RPC PoW for every fabricated client request. Useful work becomes a Sybil deterrent.
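A sketch of the stamp-and-verify path. Again sha256 stands in for the chained hash, and the field layout, names, and thresholds are illustrative, not the wire format.

```python
import hashlib
import time

def _preimage(ts: bytes, salt: int, payload: bytes,
              client_id: bytes, server_id: bytes) -> bytes:
    # Illustrative layout: PoW binds timestamp, salt, payload digest,
    # client ID, and destination server ID together.
    return (ts + salt.to_bytes(8, "big") + hashlib.sha256(payload).digest()
            + client_id + server_id)

def stamp_rpc(payload: bytes, client_id: bytes, server_id: bytes,
              difficulty: int) -> tuple[bytes, int]:
    """Grind a salt until the digest has `difficulty` leading zero bits."""
    ts = int(time.time()).to_bytes(8, "big")
    salt = 0
    while True:
        digest = hashlib.sha256(
            _preimage(ts, salt, payload, client_id, server_id)).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return ts, salt
        salt += 1

def verify_rpc(ts: bytes, salt: int, payload: bytes, client_id: bytes,
               server_id: bytes, difficulty: int, max_skew: int = 300) -> bool:
    """Cheap checks first, before any payload parse."""
    if abs(int(time.time()) - int.from_bytes(ts, "big")) > max_skew:
        return False  # timestamp pinning prevents replay
    digest = hashlib.sha256(
        _preimage(ts, salt, payload, client_id, server_id)).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0
```

Because the destination server ID sits inside the preimage, a stamp ground for one (client, server) pair fails verification at any other server, which is the reuse-prevention property described above.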
Post submission is a sub-case: two-phase Claim/Commit so one cheap PoW cannot deliver a huge payload. Submission difficulty
scales with recent posting frequency.
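One plausible shape for that scaling, purely an assumption since the post does not specify the curve: each doubling of recent posting volume adds one bit of required difficulty, so an honest poster's cost grows slowly while a flooder's per-post cost doubles repeatedly.

```python
import math

def submission_difficulty(base_bits: int, recent_posts: int) -> int:
    """Hypothetical curve: +1 difficulty bit per doubling of the number
    of posts in the recent window. Not the protocol's actual formula."""
    return base_bits + math.ceil(math.log2(recent_posts + 1))
```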
Layer 3: Per-feedback PoW. No central tally. Every signal (like, dislike, hate speech, spam, CSAM, etc.) is a PoW-stamped entry
over (post_id, feedback_type), so a PoW cannot be reused across signals or posts. Straightforward extreme-value statistics then
infer the total number of submissions: assuming each stamp's hash is uniform, the count is estimated as the reciprocal of the
probability that a single stamp would reach the globally-maximum PoW observed per (post_id, feedback_type) pair. That maximum
is healed by clients noticing discrepancies, not by server-to-server gossip.
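The statistic itself reduces to a one-liner: a uniform hash clears k leading zero bits with probability 2^-k, so the reciprocal of that unlikelihood, 2^k, is the implied count. A sketch, with the caveat that any maximum-based estimate is order-of-magnitude only:

```python
def estimate_submissions(best_pow_bits: int) -> int:
    """Infer total feedback submissions from the globally-maximum PoW
    seen for a (post_id, feedback_type) pair: P(one stamp >= k bits)
    is 2**-k, so roughly 2**k stamps are needed before such a maximum
    appears. High-variance by nature; treat as order-of-magnitude."""
    return 2 ** best_pow_bits
```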
If any of this resonates, or you spot something I've gotten wrong, I would love to hear it. PRs welcome.
-- Jimme Jardine