acuozzo 12 hours ago
> social networks need to be required to show how their algorithm works

Hypothetically speaking: what if it's a neural network in which each user has their own unique weights, and those weights undergo frequent retraining? Wouldn't it be an undue burden to require releasing the weights every time they change? And what value would the weights have anyway? We haven't yet reached the point of having interpretable neural networks. Wouldn't enforcing algorithmic interpretability be a further undue burden?

> They must be able to know why a content was served to them.

What if the authors of the code are unable to tell you why?
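To make the hypothetical concrete, here is a minimal sketch (all names are illustrative, not from any real platform) of a per-user ranker whose weights shift with every single interaction, so any weight snapshot released at time T is already stale at time T+1:

```python
import random

class UserRanker:
    """Hypothetical per-user ranking model with online weight updates."""

    def __init__(self, n_features, seed=0):
        rng = random.Random(seed)
        # each user gets their own unique weight vector
        self.w = [rng.uniform(-0.1, 0.1) for _ in range(n_features)]

    def score(self, features):
        # linear relevance score for one candidate item
        return sum(w * x for w, x in zip(self.w, features))

    def update(self, features, clicked, lr=0.05):
        # one online SGD step per engagement event: the "algorithm"
        # for this user changes after every click or skip
        err = (1.0 if clicked else 0.0) - self.score(features)
        self.w = [w + lr * err * x for w, x in zip(self.w, features)]

ranker = UserRanker(n_features=3)
before = list(ranker.w)
ranker.update([1.0, 0.0, 1.0], clicked=True)
after = list(ranker.w)
assert before != after  # a single interaction already changed the weights
```

Even if a regulator held a copy of `before`, it would describe neither this user's model a moment later nor any other user's model, which is the disclosure problem the comment is raising.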
BlueTemplar 10 hours ago | parent
The use of black boxes like neural networks is already effectively illegal in some jurisdictions for this very reason.