woodruffw 8 months ago
> Sure, but let's get back to the use case we are exploring here: Do you trust your doctor's contact info for their sibling? Could it provide you utility? What about your doctor's contact info for the front desk of their practice?

Not inherently: for all I know, my doctor is technically illiterate and their contact book is thoroughly padded with spam. The problem of trust is that trust isn't a boolean; it's a set of policies that vary by principal and action. It's very hard to encode that in a truly general way, which is why modern cryptographic application design orthodoxy dictates that applications should try to solve exactly one kind of trust at a time.
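To make "a set of policies that vary by principal and action" concrete, here is a minimal sketch in Python. Everything in it (the `Principal` and `Decision` types, the example actions) is hypothetical and invented for illustration; it is not drawn from any real system mentioned in the thread.

```python
# Sketch: trust as a policy table keyed by (principal, action), not a boolean
# per principal. The same principal can be trusted for one action and not
# another, with an explicit "ask the user" fallback for everything unencoded.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    TRUST = "trust"
    DISTRUST = "distrust"
    ASK_USER = "ask_user"  # defer the context-sensitive judgment to the user


@dataclass(frozen=True)
class Principal:
    name: str


# Policies are per (principal, action) pair, not per principal.
policies: dict[tuple[Principal, str], Decision] = {
    (Principal("doctor"), "medical_advice"): Decision.TRUST,
    (Principal("doctor"), "contact_referral"): Decision.ASK_USER,
    (Principal("doctor"), "software_recommendation"): Decision.DISTRUST,
}


def evaluate(principal: Principal, action: str) -> Decision:
    """Fall back to asking the user when no policy has been encoded."""
    return policies.get((principal, action), Decision.ASK_USER)


print(evaluate(Principal("doctor"), "contact_referral"))  # Decision.ASK_USER
```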
thomastjeffery 8 months ago
> Not inherently: for all I know, my doctor is technically illiterate and their contact book is thoroughly padded with spam.

Sure, but that leads us to the next question: Could it provide you utility?

> The problem of trust is that trust isn't a boolean

That's also the utility of trust. Most of the information we want to reason about is not context-free. So far, no one has figured out a reliable way to offload context-sensitive work to computation. The next best thing is to offload as much context-free work as possible and give the user a direct interface to the remaining context-sensitive work.

By organizing our social networks as attestations of [dis]trust, we can bring the uncomputable question of trustworthiness closer to the user. By delivering that question to many users, we can collaborate efficiently on that work.
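One way to picture the attestation idea above: the software performs only the context-free work (collecting and filtering attestation records), and surfaces the evidence so the user can make the remaining context-sensitive call. A hypothetical sketch, with all field names and example data invented for illustration:

```python
# Sketch: a network of [dis]trust attestations. The machine filters; the
# human decides. No trust score is computed, by design.
from dataclasses import dataclass


@dataclass(frozen=True)
class Attestation:
    attester: str
    subject: str
    scope: str    # what the attestation is about, e.g. "contact_info"
    trusts: bool  # True = attests trust, False = attests distrust


def gather(attestations: list[Attestation], subject: str, scope: str) -> list[Attestation]:
    """Context-free work: filter the attestations relevant to one question."""
    return [a for a in attestations if a.subject == subject and a.scope == scope]


def present_to_user(relevant: list[Attestation]) -> None:
    """The uncomputable part: show the evidence, let the human judge."""
    for a in relevant:
        verb = "trusts" if a.trusts else "distrusts"
        print(f"{a.attester} {verb} {a.subject} for {a.scope}")


network = [
    Attestation("alice", "front_desk", "contact_info", True),
    Attestation("bob", "front_desk", "contact_info", False),
    Attestation("carol", "front_desk", "billing", True),
]
present_to_user(gather(network, "front_desk", "contact_info"))
```

Note that `present_to_user` deliberately renders conflicting attestations side by side rather than resolving them: on the comment's own terms, aggregating them into a verdict would be exactly the context-sensitive step that cannot be reliably offloaded to computation.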