▲ lowsong | 4 hours ago
Please don't anthropomorphise. These are statistical text prediction models, not people. An LLM cannot be "deceptive" because it has no intent. They're not intelligent or "smart", and we're not "teaching" them. We're inputting data and the model is outputting statistically likely text. That is all that is happening. Whether this is useful in its current form is an entirely different topic. But don't mistake a tool for an intelligence with motivations or morals.