subscribed 18 hours ago

No. It can emulate a person to an extent because it was trained on the people.

Trillions of dollars are not being spent on convincing humanity that LLMs are humans.

0xEF 17 hours ago | parent | next [-]

I'd argue that zero dollars are spent convincing anyone that LLMs are people since:

A. I've seen no evidence of it, and I say that as someone who is not exactly a fan of techbros.

B. People tend to anthropomorphize everything, which is why we have constellations in the night sky and pets that supposedly experience emotion the way we do.

Collectively, we're pretty awful at understanding different intelligences and avoiding the trap of seeing the world through our own experience of it. That is part of being human, which makes us easy to manipulate, sure, but the major devs in Gen AI are not really doing that. You might get the odd girlfriend app marketed to incels or whatever, but those are small potatoes comparatively.

The problem I see when people try to point out how LLMs get this or that wrong is that the user, the human, is bad at asking the question... which comes as no surprise, since we can barely communicate properly with each other across barriers such as culture, reasoning informed by different experiences, etc.

We're just bad at prompt engineering and need to get better in order to make full use of this tool that is Gen AI. The genie is out of the bottle. Time to adapt.

intended 17 hours ago | parent [-]

We had an entire portion of the hype cycle devoted to talking about or refuting the idea of stochastic parrots.

0xEF 16 hours ago | parent [-]

It was short-lived if I recall, a few articles and interviews, not exactly a marketing blitz. My take-away from that was that calling an LLM a "stochastic parrot" is too simplistic, not that they were saying "AI is a person." Did you get that from it? I'm not advanced enough in my understanding of Gen AI to think of it as anything other than a stochastic parrot with tokenization, so I guess that part of the hype cycle fell flat?
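
To make "stochastic parrot" concrete: at each step the model samples the next token from a learned probability distribution over its vocabulary. Here's a toy Python sketch of that idea, with a made-up two-word context and made-up probabilities (a real model conditions on far longer contexts via a neural network rather than a lookup table):

    import random

    # Toy "stochastic parrot": a fabricated table of next-token
    # probabilities conditioned on the previous two tokens.
    next_token_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    }

    def sample_next(context):
        dist = next_token_probs[context]
        tokens, weights = zip(*dist.items())
        # The "stochastic" part: sample in proportion to probability.
        return random.choices(tokens, weights=weights)[0]

    print(sample_next(("the", "cat")))  # e.g. "sat"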

mjr00 12 hours ago | parent [-]

Sorry, I'm not going to let people rewrite history here: for the first ~year after ChatGPT's release, there were tons of comments, here on HN and across the wider internet, arguing that LLMs displayed signs of actual intelligence. Thankfully I don't have too many HN comments, so I was able to dig up some threads where this was being argued.[0]

[0] https://news.ycombinator.com/item?id=40730156

0xEF 9 hours ago | parent [-]

Nobody is rewriting history. I also remember the Google engineer who claimed to have encountered sentience, etc. What we're discussing here is dollars being put towards manipulating people into thinking the "AI" has consciousness like a person, not whether superintelligence or AGI is possible, or maybe even closer than we think.

While the thread you link is quite the interesting read (I mean that with all sincerity; it's a subject I like to mull over, and there are a lot of great opinions and speculation on display there), I'm not seeing any direct callouts of someone billing the current LLMs as "people," which is what the original conversation in _this_ thread was about.

There's A LOT to read there, so maybe I missed it or just haven't hit it yet. Are there specific comments I should look at?

tempfile 15 hours ago | parent | prev [-]

and it was trained on the people because...

because it was meant to statistically resemble...

You're so close!