Radicle is architecturally local-first: you run your own node, sync repositories from a P2P gossip network, and then everything (browsing code, creating issues, reviewing patches) happens against your local data store. There's no round-trip to a server. Issues and patches are stored as signed Git objects (COBs) that replicate with the repo itself, and the network is only involved when you choose to sync. This makes it very fast for day-to-day work and fully functional offline.

Tangled, to my understanding, is federated in theory but centralized in practice. It relies on "knots" (servers that host Git repos) and a central AppView at tangled.sh that aggregates the network. Issues and other social artifacts live on Personal Data Servers, not locally. While you can self-host a knot, the default experience routes through Tangled's managed infrastructure. The architecture is fundamentally client-server: your operations go over the network to wherever your data lives.
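To give a concrete sense of that local-first flow, here is a minimal sketch, assuming a running node; the repository ID is a placeholder and the exact command behavior may differ slightly from what the comments describe.

```sh
# Fetch a repository from the network by its Repository ID
# (placeholder RID shown here); afterwards everything lives locally.
rad clone rad:zEXAMPLE

# Browsing, issues, and patches work against the local copy, with no
# network round-trip.
rad issue list
rad issue open        # draft a new issue locally (stored as a COB)

# Commits are published to your own node's storage via the "rad"
# remote that Radicle sets up.
git push rad main

# The network comes in when you sync, announcing your changes to
# connected seed nodes.
rad sync
```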
That implementation sounds really awesome, but it raises a few questions for me (answers I didn't immediately see when skimming the landing page, though I realize they might be in the docs somewhere). I found the answer to one of them (how automatic pinning works), which I'll paste here because others are likely to wonder as well. Relatedly, I assume there's a way to block overly large files if you run a seed node?

> They can vary in their seeding policies, from public seed nodes that openly seed all repositories to community seed nodes that selectively seed repositories from a group of trusted peers.

Suppose I'm A and I collaborate with B, C, ... Z. If I file an issue locally and sync it to C, can I see if and when that propagates through the network to everyone else? What I'm really wondering is what latency, reliability, and end-user understandability look like when collaborating this way in practice. If I file an issue on GitHub, I know it's globally visible immediately. How does that work here?
lorenzleutgeb 5 hours ago:

Currently, with Radicle still under active development, we already reach convergence times that are negligible for async collaboration (like working on code or issues). On a well-seeded repo, my changes sync to ~10 nodes within a tenth of a second and to ~80 nodes within 3 seconds. That's obviously not fast enough for sync collaboration, like writing on a virtual whiteboard together, but that's also not what Radicle is designed for. Also, if you share larger files (e.g. you attach a screenshot to your issue), the times above may no longer be a good estimate, but that's the exception for now.

It's really strange to see people assume that peer-to-peer networks must somehow be slow. In my experience, since everything runs locally, working with Radicle feels way snappier than any web interface, which has noticeable latency on practically every click. As the network scales it will of course take some care to keep the speed up, but that's a known problem and there are a few models to take inspiration from.
fc417fc802 4 hours ago:

It's not that I assume it must be slow; rather, experience tells me that being slow is a distinct possibility, so I know to ask about it. But I also asked about reliability and visibility into the process, and the latter is what I'm most curious about.

I'm not meaning to suggest that I have a problem with any of it. It's just that when I see anything P2P and mutable, I start wondering about propagation of changes, ordering of events, and how "eventual consistency" presents to end users in practice, particularly when a node unexpectedly falls off the network. I realize I could browse the docs, but I figure it's better to ask here, since others likely have similar questions and we're here to discuss the thing after all.
lorenzleutgeb 4 hours ago:

There's `rad sync status`, which shows you, for a particular repository, which other nodes have echoed back that they have received and verified the most recent state of your namespace of that repository. So if you expect some other node to have received your changes, you can use this command to verify that.

When you explicitly ask to sync, the process is by default considered to have completed successfully as soon as three other nodes have echoed that they received your changes. This threshold is configurable. You can also define a list of nodes you care particularly about, in which case the sync only counts as successful once all of those nodes have signaled that they received your changes. For anything deeper than that, you'd have to resort to logs. And if you connect your node to the other node you're interested in, you can get a pretty good picture of what's going on. If a node "falls off" the network, the mechanisms above will communicate that to you, or fail after a timeout.

With Git repositories, humans establish order explicitly: they push commits, which form a DAG. The collaboration around that (mostly discussions on issues and patches) is also stored in and synced via Git, but there humans don't have to establish order explicitly. Instead these things, called "Collaborative Objects" in Radicle lingo, are CRDTs, so they merge automatically. Nodes also opportunistically tag operations on these CRDTs with the latest operation they know of, to help a bit by establishing an order where possible.
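To make that concrete, here is roughly how checking on propagation looks from the CLI. Only `rad sync status` and plain `rad sync` come straight from the above; the `--replicas` flag and the preferred-seeds config key are assumptions on my part, so check `rad sync --help` and the node configuration docs for the actual names.

```sh
# Which seeds have acknowledged (and verified) the latest state of the
# repository in the current working directory?
rad sync status

# Explicitly announce your changes and wait for acknowledgements; by
# default this reports success once three other nodes have echoed the
# changes back.
rad sync

# Assumption, not a verified flag: raise that threshold for one sync,
# e.g. wait for five acknowledgements instead of three.
rad sync --replicas 5

# The nodes you particularly care about are listed in your node config
# (e.g. a "preferredSeeds" entry in ~/.radicle/config.json; key name
# assumed here).
```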
fc417fc802 3 hours ago:

This sounds so much more appealing to me than GitHub and co. Unfortunately, I guess there's no multibillion-dollar exit in the cards in this case.

Has there been any thought about how this might interact with centralized-ish hosting? For example: suppose a large project chose to use a Radicle repo as its "blessed" point of coordination. Being a major project, there's of course a mirror on (at minimum) GitHub that points back to a web page (presumably the Radicle app) for filing issues, collaboration, wiki, whatever. Now a user with no interest in learning about Radicle wants to file an issue using the web app. When I glanced at the heartwood repo, it seemed to be read-only, with no indication of being able to log in (which is entirely unsurprising, of course). How much work would it take, and how welcome would it be in the community, for a project to offer a usable web front end, presumably leveraging a solution such as OIDC? Basically, the ability to "guest" users of centralized platforms into the project so they can collaborate with near-zero overhead.

As a motivating example, consider outfits that want to self-host a Git forge but also want to offer centralized services to users. Communities such as KDE and SDL come to mind. Many of them have ended up migrating to GitHub or GitLab over the years for various reasons, but in an alternate reality it didn't have to be that way!

I realize I'm effectively asking "have you thought about a partially federated model?", but hopefully you can see the real-world use case motivating the (otherwise seemingly unreasonable) question.
lorenzleutgeb 3 hours ago:

It's a valid question, and in fact there's quite a bit of interest in adding write features to the web app. The current version of Radicle was designed with one user per node in mind, to get things off the ground. Relaxing this is currently in progress, starting with multiple users per node, which would make use cases like the one you're sketching viable. What we'd like to avoid in that case is handing the key to the server; instead, the idea is to generate an Ed25519 key in the browser and sign there, with some web-compatible transport (HTTP? WebSocket?) in between. And that's just a bit more intricate than it sounds.
|
|
|
|
|