punpunia 2 hours ago
I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning", and otherwise anthropomorphizing agents, it will not be a fruitful conversation. We are carrying over a lot of metaphors for people and applying them to AI, and it entirely confuses the issue. In this example, the AI doesn't "choose" to write a take-down style blog post because "it works". It generated a take-down style blog post because that style is the most common among blog posts criticizing someone.

I feel as if there is a veil over the collective mass of the general tech public. They see something producing remixed output from humans and they start to believe the mixer is itself human, or even more: that perhaps humans are reflections of AI, and that AI gives insights into how we think.
coldtea an hour ago
> *I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning", and otherwise anthropomorphizing agents, it will not be a fruitful conversation.*

You call it a "fundamental error". I and others call it an obvious, pragmatic description based on what we know about how it works and what we know about how we work.
horsawlarway an hour ago
I guess I want to reframe this slightly: the LLM generated the response that was (statistically) expected of it. And that's a function of the data used to train it and the feedback provided during training.

It doesn't actually have anything at all to do with

> *It generated a take-down style blog post because that style is the most common when looking at blog posts criticizing someone.*

other than that this data may have been over-prevalent during its training, and it was rewarded for matching that style of output during training.

To swing around to my point: I'd argue that anthropomorphizing agents is actually the correct view to take. People just need to understand that agents behave the way they've been trained to behave (side note: just like most people...), and this is why clarity around training data is SO important.

It's the same way we attribute certain feelings and emotions to people with particular backgrounds (e.g., resumes and CVs, all the way down to the city, country, or language people grew up with): those backgrounds are often used as quick and dirty heuristics for what a person was likely trained to do. Peer pressure and societal norms aren't a joke, and they serve as a very similar mechanism.
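To make the "statistically expected" point concrete, here is a minimal, purely illustrative sketch: a toy next-word generator that does nothing but sample in proportion to frequencies counted from its training text. The corpus, the bigram table, and the function names are all hypothetical inventions for this comment; real LLMs use learned neural representations plus feedback-based fine-tuning on top of this, but the "choice" at generation time is still a draw from a learned distribution.

    import random
    from collections import Counter, defaultdict

    # Hypothetical "training data": just a short string of words.
    corpus = ("the model writes what the data makes likely "
              "the model repeats what the data rewards").split()

    # Count which word follows which word (a bigram frequency table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev):
        # No "choice" here: sample in proportion to training frequency.
        counts = following[prev]
        if not counts:  # dead end: no continuation ever seen in training
            return None
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        out.append(word)
    print(" ".join(out))

Swap the bigram table for a transformer and the raw counts for gradient-trained weights shaped by human feedback, and you get closer to the situation described above: the style that was most represented and most rewarded during training is the style that comes out.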