GeneralMaximus 4 days ago
When dealing with organizations that hold a disproportionate amount of power over your life, it's essential to view them in a somewhat cynical light. This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.

It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.

I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.

It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.

[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...