skybrian 18 hours ago

Yes, arguing with an LLM in this way is a waste of time. It’s not a person. If it does anything weird, start a new conversation.

bradly 18 hours ago | parent | next [-]

> arguing with an LLM in this way is a waste of time

I wasn't arguing. I was asking it what it thought it was doing because I was amused. The waste of time was everything leading up to that point. I could have given up at 30 minutes, or an hour, but these darn LLMs are always so close and maybe just one more prompt...

Yizahi 14 hours ago | parent | next [-]

LLM programs can't describe what they are doing. The tech doesn't allow it. An LLM can generate text that resembles what it would say if such introspection were hypothetically possible. A good example was published by Anthropic recently: they asked an LLM to add two integers, and it output the correct answer. Then they asked it to write out the steps it executed to do that addition. The LLM of course generated the primary-school algorithm: add one pair of digits, carry the 1 if needed, add the next pair of digits, add the carry, combine the results, then the next digits, and so on. But in reality it calculates the addition using probabilities, like any other generated tokens. Anthropic even admitted in that same article that the LLM was bullshitting them.
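For contrast, here is the schoolbook carry procedure the model narrates when asked to explain itself (a minimal Python sketch, assuming non-negative integers; per the article, the model's internals work nothing like this):

    # The grade-school algorithm the LLM *claims* to execute:
    # add digit pairs right to left, carrying a 1 when needed.
    def schoolbook_add(a: int, b: int) -> int:
        xs, ys = str(a)[::-1], str(b)[::-1]  # digits, least significant first
        total, carry = [], 0
        for i in range(max(len(xs), len(ys))):
            x = int(xs[i]) if i < len(xs) else 0
            y = int(ys[i]) if i < len(ys) else 0
            carry, digit = divmod(x + y + carry, 10)
            total.append(str(digit))
        if carry:
            total.append(str(carry))
        return int("".join(reversed(total)))

    assert schoolbook_add(457, 168) == 625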

Same with your query: it just generated the most likely text given its training data. It is unable to output what it actually did.

subscribed 16 hours ago | parent | prev [-]

Then look up how an LLM generates its answers :)

Next time just rephrase your problem.

bradly 11 hours ago | parent [-]

> Next time just rephrase your problem.

Don't you need to know the LLM is wrong before you can rephrase your problem? How are people asking the LLM to do something they don't know how to do, yet still able to tell that the answer is incorrect?

skybrian 7 hours ago | parent [-]

Sometimes you can try it (does the code work?). Or do your own searches, which get easier once you know the relevant keywords and what to look for.

I agree that it’s kinda useless to consult an unreliable hint engine when you don’t have a way of verifying the output.
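For code specifically, "try it" can be as cheap as a smoke test (a hypothetical sketch; llm_generated_slugify stands in for whatever function the model wrote):

    import re

    # Pretend this function came back from the LLM.
    def llm_generated_slugify(title: str) -> str:
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    # Checks you can run without trusting the model's explanation.
    assert llm_generated_slugify("Hello, World!") == "hello-world"
    assert llm_generated_slugify("  spaces  and  CAPS ") == "spaces-and-caps"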

_flux 14 hours ago | parent | prev | next [-]

I usually just go back and modify the message before the point where it went off the rails, taking into account how it failed.

tempfile 17 hours ago | parent | prev [-]

> It's not a person

and yet we as a species are spending trillions of dollars in order to trick people into believing that it is very, very close to a person. What do you think they're going to do?

subscribed 16 hours ago | parent [-]

No. It can emulate a person to an extent because it was trained on text written by people.

Trillions of dollars are not spent on convincing humanity LLMs are humans.

0xEF 16 hours ago | parent | next [-]

I'd argue that zero dollars are spent convincing anyone that LLMs are people since:

A. I've seen no evidence of it, and I say that as not exactly a fan of techbros

B. People tend to anthropomorphize everything, which is why we have constellations in the night sky and pets that supposedly experience emotion the way we do.

Collectively, we're pretty awful at understanding different intelligences and avoiding the trap of seeing the world through our own experience of it. That is part of being human, which makes us easy to manipulate, sure, but the major devs in Gen AI are not really doing that. You might get the odd girlfriend app marketed to incels or whatever, but those are small potatoes comparatively.

The problem I see when people try to point out how LLMs get this or that wrong is that the user, the human, is bad at asking the question...which comes as no surprise since we can barely communicate properly with each other across the various barriers such as culture, reasoning informed by different experiences, etc.

We're just bad at prompt engineering and need to get better in order to make full use of this tool that is Gen AI. The genie is out of the bottle. Time to adapt.

intended 15 hours ago | parent [-]

We had an entire portion of the hype cycle talking about, or refuting, the idea of stochastic parrots.

0xEF 15 hours ago | parent [-]

It was short-lived if I recall, a few articles and interviews, not exactly a marketing blitz. My take-away from that was that calling an LLM a "stochastic parrot" is too simplified, not that they were saying "AI is a person." Did you get that from it? I'm not advanced enough in my understanding of Gen AI to think of it as anything other than a stochastic parrot with tokenization, so I guess that part of the hype cycle fell flat?

mjr00 11 hours ago | parent [-]

Sorry, I'm not going to let people rewrite history here: for the first ~year after ChatGPT's release, there were tons of comments, here on HN and the wider internet, arguing that LLMs displayed signs of actual intelligence. Thankfully I don't have too many HN comments so I was able to dig up some threads where this was getting argued.[0]

[0] https://news.ycombinator.com/item?id=40730156

0xEF 8 hours ago | parent [-]

Nobody is rewriting history. I also remember the Google engineer who claimed to have encountered sentience, etc. What we're discussing here is dollars being put towards manipulating people into thinking the "AI" has consciousness like a person, not whether superintelligence or AGI is possible, or maybe even closer than we think.

While the thread you link is quite the interesting read (I mean that with all sincerity, it's a subject I like to mull over and there's a lot of great opinions and speculation being displayed there) I'm not seeing any direct callouts of someone billing the current LLMs as "people," which is what the original conversation in _this_ thread was about.

There's A LOT to read there, so maybe I missed it or just haven't hit it yet. Are there specific comments I should look at?

tempfile 13 hours ago | parent | prev [-]

and it was trained on people because...

because someone wanted it to statistically resemble...

You're so close!