rgoulter · 8 hours ago
> LLM-generated writing undermines the authenticity of not just one's writing but of the thinking behind it as well.

I think this gets at a key point, but I'm not sure of the right way to articulate it. A human-written comment may be worth something, but an LLM-generated one is cheap/worthless. The nicest phrase I've seen capturing the thought was: "I'd rather read the prompt." Publishing something written by an LLM is probably no better than letting the reader generate it again themselves.
averynicepen · 7 hours ago
I'll give it a shot. Text, images, art, and music are all methods of expressing our internal ideas to other human beings. Our thoughts are the source, and these methods are how they are expressed. Our true goal in any form of communication is to understand the internal ideas of others.

An LLM expresses itself in all the same ways, but the source isn't an individual - it's a giant dataset. This could be considered an expression of the aggregate thoughts of humanity, which is fine in some contexts (like retrieval of ideas and information highly represented in the data/world), but not when presented as the thoughts of an individual.

LLMs express the statistical summation of everyone's thoughts. They present the mean, when what we're really interested in are the data points a couple of standard deviations away from the mean. That's where all the interesting, unique, and thought-provoking ideas are. Diversity is at the core of the human experience.

---

An interesting paradox is the use of LLMs for translation into a non-native language. LLMs are actively being used to express an individual's ideas in words better than their limited language proficiency allows, but those of us on the receiving end assume the expression mirrors the source, and immediately suspect the legitimacy of the individual's thoughts. Which is a little unfortunate for those who just want to express themselves better.
crabmusket · 6 hours ago
I think more people should read Naur's "Programming as Theory Building". A comment is an attempt to more fully document the theory the programmer has. Not all theory can be expressed in code. Both code and comment are lossy artefacts, "projections" of the theory into text.

LLMs currently, I believe, cannot have a theory of the program. But they can definitely perform a useful simulacrum of one.

I have not yet seen an LLM-generated comment that is truly valuable. Of course, lots of human-generated comments are not valuable either. But the ceiling for human comments is much, much higher.
teaearlgraycold · 3 hours ago
One thing I've noticed when writing something I consider insightful or creative with LLM autocompletion is that the machine can't successfully predict any words in the sentence, except maybe the last one. LLMs seem to be good at spitting out either something very average or something completely insane, but something genuinely indicative of the spark of intelligence isn't common at all. I'm happy to know that while my thoughts are likely not original, they are at least not statistically likely.
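To make "statistically likely" concrete: an autocompleter ranks tokens by the log-probability a language model assigns them, so you can score any sentence word by word and see exactly where the model's predictions fail. A minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both are illustrative choices, not anything named in the thread; the example sentence is hypothetical too):

    # Score each token of a sentence by its log-probability under GPT-2.
    # Very negative scores mark the "surprising" words an autocompleter
    # would fail to predict.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Hypothetical example sentence, chosen for illustration.
    sentence = "My thoughts are not original, but they are not statistically likely."
    ids = tokenizer(sentence, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits

    # The logits at position t predict token t+1, so shift by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    scores = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)

    for tok, s in zip(tokenizer.convert_ids_to_tokens(targets[0].tolist()), scores[0]):
        print(f"{tok!r:>20} {s.item():8.2f}")

Bland sentences get relatively high (less negative) per-token scores; idiosyncratic ones score much lower, which is exactly the pattern described above.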