oneeyedpigeon 7 hours ago

We've enjoyed a certain period (at least a couple of decades) of global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life, from open-source to journalism and job interviews.

theshrike79 7 hours ago | parent | next [-]

I've been trying to manifest Web of Trust coming back to help people navigate towards content that's created by humans.

A system where I can mark other people as trusted and see who they trust, so when I navigate to a web page or in this case, a Github pull request, my WoT would tell me if this is a trusted person according to my network.
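
A minimal sketch of that lookup, assuming a hypothetical network where each account publishes who it has marked trusted (all names, the hop limit, and the `trusted_via` helper are invented for illustration):

```python
from collections import deque

# Hypothetical trust edges: who each account has explicitly marked as trusted.
trusts = {
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": {"dave"},
    "carol": {"eve"},
}

def trusted_via(network, me, author, max_hops=2):
    """BFS over trust edges; True if `author` is reachable within max_hops."""
    frontier = deque([(me, 0)])
    seen = {me}
    while frontier:
        person, hops = frontier.popleft()
        if person == author:
            return True
        if hops == max_hops:
            continue  # don't expand past the hop limit
        for friend in network.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, hops + 1))
    return False

print(trusted_via(trusts, "me", "carol"))  # True: friend of a friend
print(trusted_via(trusts, "me", "eve"))    # False: three hops away
```

The hop limit is doing real work here: without it, one careless trust edge eventually connects you to everyone.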

jacquesm 6 hours ago | parent | next [-]

You need a very complex weighting and revocation mechanism, because once one bad player is in your web of trust they become a node through which other bad players and good players alike can join.

thephyber 3 hours ago | parent | next [-]

Trust in the real world is not immutable. It is constantly re-evaluated. So the Web of Trust concept should do this as well.

Also, there needs to be some significant consequence to people who are bad actors and, transitively, to people who trust bad actors.

The hardest part isn’t figuring out how to cut off the low quality nodes. It’s how to incentivize people to join a network where the consequences are so high that you really won’t want to violate trust. It can’t simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high friction.

embedding-shape 6 hours ago | parent | prev | next [-]

Build a tree, cut the tree at the first bad link, and you get rid of all of them. There will be some collateral damage, but it's maybe safe to assume actual "good players" can rejoin at another, more stable leaf.

jacquesm 6 hours ago | parent [-]

It's a web, not a tree... so this is really not that simple.

embedding-shape 6 hours ago | parent [-]

Yeah, that's the problem, and my suggestion is to change it from a web to a tree instead, to solve that issue.

jacquesm 4 hours ago | parent | next [-]

That does not work, because you won't have multiple parties vouching for a new entrant. That's the whole reason a web was chosen instead of a tree in the first place. Trees are super fragile in comparison; bad actors would have a much bigger chance of going undetected in a tree-like arrangement.

theshrike79 6 hours ago | parent | prev [-]

What is a web if not multiple trees that have interconnected branches? :)

embedding-shape 5 hours ago | parent [-]

In the end, it's all lists anyways :)

theshrike79 6 hours ago | parent | prev [-]

Then I can see who added that bad player and cut off everyone who trusted them (or decrease the trust level if the system allows that).
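
A sketch of that cut-off, under a hypothetical representation of trust edges (all names and the `cut_off` helper are invented for illustration):

```python
# Hypothetical trust edges: who each account has explicitly marked as trusted.
trusts = {
    "me": {"alice", "bob"},
    "alice": {"mallory"},           # alice vouched for the bad player
    "bob": {"carol"},
    "mallory": {"spam1", "spam2"},  # accounts the bad player let in
}

def cut_off(network, bad):
    """Drop `bad` plus everyone who marked them trusted;
    their outgoing trust edges vanish with them."""
    implicated = {bad} | {who for who, friends in network.items()
                          if bad in friends}
    return {
        who: friends - implicated
        for who, friends in network.items()
        if who not in implicated
    }

clean = cut_off(trusts, "mallory")
print(sorted(clean))  # ['bob', 'me'] -- alice and the spam accounts are gone
```

A softer variant would multiply trust levels by a penalty factor instead of deleting nodes outright, as the parenthetical suggests.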

thephyber 3 hours ago | parent | prev | next [-]

I would go even further. I only want to see content created by people who are in a chain of trust with me.

AI slop is so cheap that it has created a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass “deception for profit” industry (e.g. industries where companies bot ratings/reviews to profit, or where social media accounts are created purely for rage bait / engagement farming).

But reputation in a WoT network has to be paramount. The invite system needs a “vouch”, so there are consequences to you and your upstream voucher if there is a breach of trust (e.g. lying, paid promotions, spamming). Consequences need to be far more severe than the marginal profit to be made from these breaches.
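
One way such a vouch chain with upstream consequences might be sketched, assuming hypothetical `voucher_of` records and reputation scores (all names and numbers are purely illustrative):

```python
# Hypothetical vouch records: invitee -> the member who vouched for them.
voucher_of = {"spam_bot": "carol", "carol": "alice", "alice": "me"}
reputation = {"me": 1.0, "alice": 0.9, "carol": 0.8, "spam_bot": 0.5}

def punish(offender, penalty=0.5, upstream_share=0.5):
    """Slash the offender's reputation, then walk the vouch chain
    upward, applying a geometrically smaller penalty at each step."""
    person, hit = offender, penalty
    while person is not None:
        reputation[person] = max(0.0, reputation[person] - hit)
        person = voucher_of.get(person)  # None at the root ends the walk
        hit *= upstream_share

punish("spam_bot")
# spam_bot: 0.5 -> 0.0, carol: 0.8 -> 0.55,
# alice: 0.9 -> 0.775, me: 1.0 -> 0.9375
```

The `upstream_share` knob sets how much of the pain propagates to vouchers; set it high enough and vouching for strangers becomes genuinely risky.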

IsTom 6 hours ago | parent | prev [-]

Unfortunately trust isn't transitive.
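
One common way to model that non-transitivity is to attenuate trust multiplicatively per hop, so even strong individual links yield little confidence end to end (the weights here are purely illustrative):

```python
# Hypothetical edge weights in [0, 1]: how much each link trusts the next.
chain = [0.9, 0.9, 0.9, 0.9]   # four hops of fairly strong trust

path_trust = 1.0
for weight in chain:
    path_trust *= weight       # multiplicative attenuation per hop

print(round(path_trust, 3))    # 0.656: strong links, but a weak chain
```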

embedding-shape 6 hours ago | parent | prev | next [-]

> global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life

I don't think it's coming to an end. It's getting more difficult, yes, but not impossible. Currently I'm working on a game, and since I'm not an artist, I pay artists to create the art. The person I'm working closest with, I know basically nothing about except their name, email and the country they live in. Otherwise it's basically "they send me a draft > I review/provide feedback > iterate until done > I send them money", and both of us know basically nothing about the other.

I agree that trust in the individual is becoming more important, but it's always been one of the most important things for collaborations or anything that involves other human beings. We've tried to move that trust to other systems, but it seems we're only able to move the trust to the people building and maintaining those systems, rather than getting rid of it completely.

Maybe "trust" is just here to stay, and we'd all be better off as soon as we realize this, reconnect with the people around us, and connect with the people on the other side of the world.

willis936 5 hours ago | parent | next [-]

How do you know it's a person on the other end? Would you even see a difference if you had a computer generate that art?

These are very important questions that cut to the heart of "what is art".

embedding-shape 4 hours ago | parent [-]

> How do you know it's a person on the other end? Would you even see a difference if you had a computer generate that art?

Unless AI companies have already developed and launched plugins/extensions that let people produce something that looks like hand-drawn sketches inside of Clip Studio, and have suddenly gotten a lot better at understanding prompts (including having inspiration of their own), I'm pretty sure it's a human.

I don't think I'd get to see in-progress sketches, and an AI wouldn't be as good at understanding what I wanted changed. I've used various generative AI image generators (most recently Qwen Image 2511, and a whole bunch of others) and none of them, even with "prompt enhancement", can take very vague descriptions like "I want it to feel like X" or "I'm not sure about Y, but something like Z" and turn them into something that looks acceptable. At least not yet.

And because I've spent a lot of time with various generative image making processes and models, I'm fairly confident I'd recognize if that was what was happening.

willis936 2 hours ago | parent [-]

Sure, it's true today. Entertain the hypothetical though because this is what the trillion dollar rush is aspiring to do in the near future. We should be thinking about our answers now.

embedding-shape an hour ago | parent [-]

Answers to what? Do I care what tools the artist uses as long as I get the results I want? I don't understand what you see as the issue; that I'd somehow think I was working with a human when it was actually a machine?

thephyber 3 hours ago | parent | prev | next [-]

I think it absolutely is coming to an end in lots of ways.

Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

The better AI gets at producing slop that is indistinguishable from human content, and at controlling the bots that spread it, the less people will trust content on those platforms.

Your trust relationship with your artist almost certainly was based on something other than just contact info. Usually you review a portfolio, a professional profile, and you start with a small project to limit your downside risk. This tentative relationship and phased stages where trust is increased is how human trust relationships have always worked.

embedding-shape 3 hours ago | parent [-]

> Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

But that's been true for a long time, unrelated to AI. When Amazon first became available here in Spain (I don't remember exactly what year, but before LLMs for sure), the amount of fraudulent reviews filling the platform was already noticeable.

That industry you're talking about might have gotten new wings with LLMs, but it wasn't spawned by LLMs; it existed a long time before that.

> the less people will trust content on those platforms.

Maybe I'm jaded from using the internet from a young age, but both my peers and I basically have a built-in mistrust of random stuff we see on the internet, at least compared to our parents and our younger peers.

"Don't believe everything you see on the internet" has been a mantra for almost as long as the internet has existed. Maybe people forgot and needed a reminder, but it was never not true.

thephyber 3 hours ago | parent [-]

LLMs reduce the marginal cost per unit of content.

When snail mail had a cost floor of $0.25 for the price of postage, email was basically free. You might get 2-3 daily pieces of junk mail in your house’s mailbox, but you would get hundreds or thousands in your email inbox. Slop comes at scale. LLMs didn’t invent spam, but they are making it easier to create more variants of it, and possibly ones that convert better than procedurally generated pieces.

There’s a difference between your cognitive brain and your lizard brain. You can tell yourself that mantra, but still occasionally fall prey to spam content. The people who make spam have a financial incentive to abuse the heuristics/signals you use to determine the authenticity of a piece of content, in the same way that makers of cheap knockoff Rolex watches, Cartier jewelry, or Chanel handbags have to make them appear as authentic as possible.

contrast 5 hours ago | parent | prev [-]

Your tone is one of disagreement, but it's not clear why.

There is an individual who you trust to do good work, and who works well with you. They're not anonymous. Addressing the topic of this thread, you know (or should know) that it is not AI slop.

That is a significant amount of knowledge and trust in an individual, and the very point I thought the GP was making.

agumonkey 6 hours ago | parent | prev | next [-]

trust in trust.. as a programmer would say

the web brought instant, infinite 'data'. we used to have limits, limits that would kinda ensure the reality of what was communicated.. we should go back to that; it's efficient

globular-toast 5 hours ago | parent | prev [-]

Some projects, like Linux (the kernel), have always been developed that way. Linus has described the trust model in the kernel as very much a "web of trust". You don't just submit patches directly to Linus; you submit them to module maintainers, who are trusted by subsystem maintainers, and who are all ultimately, indirectly, trusted by the branch maintainer (Linus).