| ▲ | rando77 7 hours ago |
| Perhaps we need reputation at the network layer, without it being tied to a particular identity? It would have to be hard to farm (perhaps entropy detection on user behaviour, plus clique detection). |
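The two signals mentioned here can be sketched concretely. The following is an illustrative toy only: the function names, the hour-bucket histogram, and the brute-force clique search are all invented for the sketch, not an existing system. Low behavioural entropy (repetitive, scripted timing) and fully-connected vouching groups are the kind of features a detector might score.

```python
import math
from collections import Counter
from itertools import combinations

def behaviour_entropy(event_hours, bins=24):
    """Shannon entropy (bits) of a user's activity histogram, bucketed by
    hour of day. Scripted accounts tend to produce low-entropy,
    repetitive patterns; varied human behaviour scores higher."""
    counts = Counter(h % bins for h in event_hours)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def find_cliques(edges, k=3):
    """Naive k-clique detection over an undirected interaction graph.
    Groups of accounts that all vouch for each other (a clique) are a
    classic farming signature. Brute force; fine only for small graphs."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    cliques = []
    for group in combinations(sorted(adj), k):
        if all(v in adj[u] for u, v in combinations(group, 2)):
            cliques.append(group)
    return cliques
```

A bot posting at the same hour every day scores entropy 0, while spread-out activity scores higher; three accounts that all interact with each other come back as a 3-clique.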
|
| ▲ | loa_in_ 5 hours ago | parent [-] |
| How does one make sure the implementation is sufficient and complete? It feels like assuming total knowledge of the world, which is never true. How many false positives and false negatives do we tolerate? How does it impact a person? |
| |
| ▲ | rando77 5 hours ago | parent [-] |
| I'm not sure. We can use LLMs to try out different settings/algorithms and see what it is like to have it on a social level before we implement it for real. |
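The "try out settings before implementing for real" idea can be prototyped without LLMs at all, as a toy agent simulation. Everything below is an invented assumption for the sketch: the vouching rule, the 0.1 transfer fraction, the decay rate, and the farm model; LLM-driven agents would replace the random policy.

```python
import random

def simulate(n_agents=20, steps=200, decay=0.99, farm=frozenset(), seed=0):
    """One toy run of a reputation rule: each step a random agent vouches
    for another, transferring a fraction of its own reputation, and all
    reputation decays. `farm` is a set of colluding account ids that only
    vouch inside their own clique."""
    rng = random.Random(seed)
    rep = {i: 1.0 for i in range(n_agents)}
    for _ in range(steps):
        voucher = rng.randrange(n_agents)
        if voucher in farm and len(farm) > 1:
            # colluders concentrate vouches on each other
            target = rng.choice(sorted(farm - {voucher}))
        else:
            target = rng.randrange(n_agents)
        rep[target] += 0.1 * rep[voucher]
        rep = {i: r * decay for i, r in rep.items()}
    return rep

honest = simulate()
farmed = simulate(farm=frozenset(range(5)))
```

Running the same rule with and without a farm, and varying `decay` or the transfer fraction, is a cheap way to see which settings let a clique outrun honest accounts before committing to a real protocol.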
| ▲ | Imustaskforhelp 5 hours ago | parent [-] |
| Perhaps, but I'm not entirely optimistic about LLMs in this context. Then again, the freedom to try it, and then actually trying it, might make a dent after all. One can never know until one experiments, I guess. |
| ▲ | rando77 4 hours ago | parent [-] |
| Fair, I don't know how valuable it would be. I think LLMs would only get you so far; they could be tried in games or in small human contexts. We would need a funding model that rewarded this, though, and that is hard too. |