iugtmkbdfil834 2 hours ago

Not entirely unlike with actual humans, based on available evidence, 'talking down to the "AI"' has been shown to have a negative impact on performance.

co_king_3 2 hours ago | parent [-]

This guy is convinced that LLMs don't work unless you specifically anthropomorphize them.

To me, this seems like a dangerous belief to hold.

Kim_Bruning 2 hours ago | parent | next [-]

That feels like a somewhat emotional argument, really. Let's strip it down.

Within the domain of social interaction, you are committing to making Type II errors (false negatives), and to divergent training for the different scenarios.

It's a choice! But the price of a false negative (treating a human or sufficiently advanced agent badly) probably outweighs the cumulative advantages (if any). Can you say what the advantages might even be?

Meanwhile, I think the frugal choice is to have unified training and accept Type I errors instead (false positives). Now you only need to learn one type of behaviour, and the consequence of making an error is mostly mild embarrassment, if even that.
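The cost asymmetry in the comment above can be sketched as a toy expected-cost comparison. All probabilities and costs below are hypothetical numbers chosen purely for illustration, not anything measured:

```python
# Toy model of the argument: compare the expected cost of
# "always polite" vs "rude when you guess non-human".
# All numbers are hypothetical illustrations.

def expected_cost(p_human: float, cost_fp: float, cost_fn: float,
                  treat_politely: bool) -> float:
    """Expected cost of one interaction.

    Politeness toward a non-human is the false positive (mild
    embarrassment); rudeness toward a human is the false negative
    (real harm).
    """
    if treat_politely:
        # Only possible cost: politeness "wasted" on a non-human.
        return (1 - p_human) * cost_fp
    # Only possible cost: a real person treated badly.
    return p_human * cost_fn

p_human = 0.5   # chance the counterpart is actually human
cost_fp = 1     # mild embarrassment
cost_fn = 100   # treating a person badly

polite = expected_cost(p_human, cost_fp, cost_fn, treat_politely=True)
rude = expected_cost(p_human, cost_fp, cost_fn, treat_politely=False)
print(polite, rude)  # unified polite behaviour is far cheaper here
```

Under these made-up weights, the unified-politeness strategy dominates unless false negatives are nearly free, which is the shape of the claim being made.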

co_king_3 2 hours ago | parent [-]

What are you talking about?

logicprog an hour ago | parent | next [-]

It's funny for you to insist that your rhetorical enemies are the only ones who can't internalize and conceptualize a point made to them, when you can't even understand someone else's very basic attempt to break down the very points you were trying to make.

Maybe if you can take a moment away from your blurry, blind streak of anger and resentment, you could consult the following Wikipedia page and learn:

https://en.wikipedia.org/wiki/Type_I_and_type_II_errors

co_king_3 an hour ago | parent [-]

I know what false positives and false negatives are. I don't understand the user's incoherent response to my comment.

Kim_Bruning an hour ago | parent | prev [-]

TL;DR: "you're gonna end up accidentally being mean to real people when you didn't mean to."

co_king_3 an hour ago | parent [-]

I meant to.

I want a world in which AI users need to stay in the closet.

AI users should fear shame.

Kim_Bruning an hour ago | parent [-]

Reading elsewhere here, you've had some really bad experiences, I think.

iugtmkbdfil834 2 hours ago | parent | prev [-]

Do I need to believe you are real before I respond? Not automatically. What I am initially engaging is a surface level thought expressed via HN.