an_ko · 8 hours ago:

I would have expected at least some consideration of public perception, given the extremely negative opinions many people hold about LLMs being trained on stolen data. Whether that is an ethical issue or a brand hazard depends on your views, but it is currently at least one of the two.
tolerance · 8 hours ago:

My mistake was first reading this as a document intended for everyone; it is public, but it isn't written for the general public. It is a technical document showing how a well-respected figure in his field (whose talk I once watched, captivated despite not understanding it) intends to guide his company's use of the technology, so that other companies and individual programmers may learn from it too. I don't think the objective was to take an outright ethical stance, but to provide guidance on something ostensibly used at an employee's discretion.
john01dav · 8 hours ago:

He speaks of trust and of LLMs breaking that trust. Is this not what you mean, but by another name?

> First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).

> Specifically, we must be careful to not use LLMs in such a way as to undermine the trust that we have in one another

> our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice