kelnos 7 hours ago

Sure, I agree that getting something you want (top post) out of an LLM isn't zero-effort.

But this isn't about effort. This is about genuine humanity. I want to read comments that, in their entirety, came out of the brain of a human. Not something that a human and LLM collaboratively wrote together.

I think the one exception I would make (where maybe the guidelines go too far) is that case of a language barrier. I wouldn't object to someone who isn't confident with their English running a comment by an LLM to help fix errors that might make a comment harder to understand for readers. (Or worse, mean something that the commenter doesn't intend!) It's a privilege that I'm a native English speaker and that so much online discourse happens in English. Not everyone has that privilege.

eek2121 7 hours ago | parent | next [-]

This. LLMs are an autocomplete engine. They aren't curious. Take your curiosities and use your human voice to express them.

The only reason you should be using an LLM on a forum like this is to do language translation. Nobody cares about your grammar skills, and there really isn't a reason to use an LLM outside of that.

LLMs CANNOT provide unique objectivity or offer unknown arguments because they can only use their own training data, based on existing objectivity and arguments, to write a response. So please shut that shit down and be a human.

Signed, a verified/tested autistic old man.

cheers

tkgally 6 hours ago | parent | next [-]

> Nobody cares about your grammar skills

One thing that impressed me about HN when I started participating is how rarely people remark on others' spelling or grammatical mistakes. I myself have been an obsessive stickler about such issues, so I do notice them, but I recognize that overlooking them in others allows more interesting and productive discussions.

xpe 3 hours ago | parent | prev | next [-]

I agree with the above comment on a broad normative (what is good) take: on a forum for humans, yes, please, bring your human self. But there is a lot of room for variety, choice, even self-expression in the "be your human self" part! Some might prefer using the Encyclopaedia Britannica to supplement an imperfect memory. Others DuckDuckGo. Some might bounce their ideas off friends. Or (gasp) an LLM. Do any of these make the person less human? Nope.

Of course, there are many ways to be more and less intellectually honest, and there is a lot to read on this, such as [1].

Now, on the descriptive / positive claims (what exists), I want to weigh in:

> LLMs are an autocomplete engine.

Like all metaphors, we should ask "what is the metaphor useful for?" rather than arguing over the metaphor itself, which can easily degenerate into a definitional morass. Instead, we should discuss the behavior, something we can observe.

> [LLMs] aren't curious.

Defined how? If we put aside questions of consciousness and focus on measuring what we can observe, what do we see? (Think Turing [2], not Chalmers [3].) To what degree are the outputs of modern AI systems distinguishable from the outputs of a human typing on a keyboard?

> LLMs CANNOT provide unique objectivity...

Compared to what? Humans? The phrasing "unique objectivity" would need to be pinned down first. In any case, modern researchers aren't interested in vanilla LLMs; they are interested in hybrid systems and/or what comes next.

Intelligence is the core concept here. As I implied in the previous paragraph, intelligence (once we pick a working definition) is something we can measure. Intelligence does not have to be human or even biological. There is no physics-based reason an AI can't one day match and exceed human intelligence.*

> or offer unknown arguments ...

This is the kind of statement that humans are really good at wiggling out of. We move the goalposts. So I'll give one goalpost: modern AI systems have indeed made novel contributions to mathematics. [4]

> because they can only use their own training data, based on existing objectivity and arguments, to write a response.

Yes, when any ML system operates outside of its training distribution, we lose formal guarantees of performance; it becomes more of an empirical question. It is a fascinating, complicated area to research.

Personally, I wouldn't bet against LLMs as being a valuable and capable component in hybrid AI systems for many years. Experts have interesting guesses on where the next "big" innovations are likely to come from.

[1]: Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131.

[2]: The Turing Test : Stanford Encyclopedia of Philosophy : https://plato.stanford.edu/entries/turing-test/

[3]: The Hard Problem of Consciousness : Internet Encyclopedia of Philosophy : https://iep.utm.edu/hard-problem-of-conciousness/

[4]: FunSearch: Making new discoveries in mathematical sciences using Large Language Models : Alhussein Fawzi and Bernardino Romera Paredes : https://deepmind.google/blog/funsearch-making-new-discoverie...

* Taking materialism as a given.


xpe 5 hours ago | parent | prev [-]

> This is about genuine humanity.

The meaning of the word "genuine" here is pretty pivotal. At its best, "genuine" might take an expansive view of humanity: our lived experience, our seeking, our creativity, our struggle, in all its forms. But at its worst, "genuine" might be narrow, presupposing one true way to be human. Is a person with a prosthetic leg less human? A person with a mental disorder? (These questions are all problematic because they smuggle in an assumption.)

Consider this thought experiment: a person interacts with an LLM, learns something, finds it meaningful, and wants to share it on a public forum. Is this thought less meaningful because of that generative process? Would you really prefer not to see it? Why?

Because you can point to some "algorithmic generation" in the process? With social media, we read algorithmically shaped human comments, many less considered than the thought experiment. Nor did this start with social media. Even before Facebook, there was an algorithm: our culture and how we spread information. Human brains are meme machines, after all.

Think of human output as a process that evolves. Grunts. Then some basic words. Then language. Then writing. Then typing. Why not: "Then LLMs"? It is easy to come up with reasons, but it is harder to admit just how vexing the problem is. If we're willing, it is a way for us to confront "what is humanity?".

You might view an LLM as an evolution of this memetic culture. In the case of GPT-OSS 120b, centuries of writing distilled into ~60 GB. Putting aside all the concerns of intellectual property theft, harmful uses, intellectual laziness, surveillance, autonomous weapons, gradual disempowerment, and loss of control, LLMs are quite an amazing technological accomplishment. Think about how much culture we've compressed into them!
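As a back-of-envelope sanity check on that ~60 GB figure (using hypothetical round numbers, not official specs: roughly 120 billion parameters stored at about 4.25 bits each, as with 4-bit quantization plus scaling overhead):

```python
# Rough arithmetic: why a ~120B-parameter model can fit in ~60 GB.
# Assumed numbers (illustrative, not official): ~117e9 parameters,
# ~4.25 bits per parameter after 4-bit quantization with scale metadata.
params = 117e9
bits_per_param = 4.25
size_gb = params * bits_per_param / 8 / 1e9  # bits -> bytes -> gigabytes
print(f"~{size_gb:.0f} GB")
```

At full 16-bit precision the same model would need roughly four times that, which is why aggressive quantization is what makes the ~60 GB footprint plausible.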

As a general tendency, it takes a lot of conversation and refinement to figure out how to communicate a message really well to an audience. What a human bangs out on the first several iterations might only be a fraction of what is possible. If LLMs help people find clearer thinking, better arguments, and/or more authenticity (whatever that means), maybe we should welcome that?

Also, not all humans have the same language generation capacity; why not think of LLMs as an equalizer? You touch on this (next quote), but I am going to propose thinking of this in a broader way...

> I think the one exception I would make...

When I see a narrow exception to an otherwise broad point, I notice. This often means there is more to unpack. At the least, there is a philosophical asymmetry. Does the exception survive scrutiny? Certainly there are more exceptions just around the corner...