jarenmf 8 hours ago

Talking with Gemini in Arabic is a strange experience; it cites the Quran, says alhamdulillah and inshallah, and at one point it even told me: this is what our religion tells us we should do. It sounds like an educated, religious, Arabic-speaking internet forum user from 2004. I wonder if this has to do with the quality of the Arabic content it was trained on, and can't help but think whether AI can push to radicalize susceptible individuals

Zigurd 6 hours ago | parent | next [-]

Based on the code that it's good at, and the code that it's terrible at, you are exactly right about LLMs being shaped by their training material. If this is a fundamental limitation, I really don't see general-purpose LLMs progressing beyond their current status as idiot savants. They are confident in the face of not knowing what they don't know.

Your experience with Arabic in particular makes me think there's still a lot of training material to be mined in languages other than English. I suspect the reason Arabic sounds 20 years out of date is that there's a data-labeling bottleneck in using foreign-language material.

parineum 5 hours ago | parent [-]

I've had a suspicion for a while that, since a large portion of the internet is English and Chinese, other languages would have a much larger share of their training material come from books.

I wouldn't be surprised if Arabic in particular had this issue and if Arabic also had a disproportionate amount of religious text as source material.

I bet you'd see something similar with Hebrew.

mentalgear an hour ago | parent | next [-]

I think therein lies another fun benchmark to show that LLMs don't generalize: ask the LLM to solve the same logic riddle in different languages. If it can solve it in some languages but not in others, that's a strong argument for straightforward memorization and next-token prediction rather than true generalization.
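
A minimal sketch of that benchmark, assuming a hypothetical `ask_llm(prompt)` helper standing in for whatever chat-completion call you actually use:

```python
# Pose the same logic riddle in several languages and compare pass rates.
# `ask_llm`, RIDDLE, and EXPECTED are all illustrative, not a real API.

RIDDLE = {
    "en": "Alice is taller than Bob. Bob is taller than Carol. Who is shortest?",
    "ar": "أليس أطول من بوب. بوب أطول من كارول. من الأقصر؟",
    "fr": "Alice est plus grande que Bob. Bob est plus grand que Carol. Qui est le plus petit ?",
}
EXPECTED = {"en": "carol", "ar": "كارول", "fr": "carol"}

def score(ask_llm):
    # Returns per-language pass/fail; a split result (True in some
    # languages, False in others) suggests memorization over reasoning.
    results = {}
    for lang, prompt in RIDDLE.items():
        answer = ask_llm(prompt).lower()
        results[lang] = EXPECTED[lang] in answer
    return results
```

A crude substring check like this is only a sketch; a real benchmark would need careful answer normalization per language.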

psychoslave 3 hours ago | parent | prev | next [-]

> whether AI can push to radicalize susceptible individuals

My guess is: not as the single most prominent factor. Pauperization, isolation of individuals, and a blatant lack of equal access to justice, health services, and the other basics of the social safety net are far more likely to weigh significantly. Of course, any tool that can help with mass propaganda may make it easier to reach people in weakened situations, who are more receptive to radicalization.

cm2012 3 hours ago | parent [-]

There have actually been fascinating findings on this. After the mid-2010s ISIS attacks in Western countries driven by social-media radicalization, the big social platforms (Meta, Google, etc.) agreed to censor extremist Islamist content: anything that promoted hate, violence, and so on. By all accounts it worked very well, and homegrown terrorism plummeted. Access and platforms can really help promote radicalism and violence if left unchecked.

skybrian 3 hours ago | parent | next [-]

Interesting! Do you have any good links about this?

devmor 2 hours ago | parent | prev [-]

I don’t really find this surprising! If we expect social networking to let groups of like-minded individuals find each other and collaborate on hobbies, businesses, and other benign shared interests, it stands to reason that the same applies to violent and other anti-state interests as well.

The question that then follows is if suppressing that content worked so well, how much (and what kind of) other content was suppressed for being counter to the interests of the investors and administrators of these social networks?

wodenokoto 7 hours ago | parent | prev | next [-]

Maybe it’s just a prank played on white expats here in UAE, but don’t all Arabic speakers say inshallah all the time?

someotherperson 5 hours ago | parent [-]

English speakers frequently say “Jesus!” or “thank God”, yet it would be weird coming from an LLM.

axus 5 hours ago | parent [-]

Would be weird in an email, but not objectionable. The problem is the bias for one religion over the others.

amunozo 8 hours ago | parent | prev | next [-]

Wow, I would never expect that. Do all models behave like this, or is it just Gemini? One particular model of Gemini?

jarenmf 8 hours ago | parent [-]

Gemini is really odd in particular (even with reasoning). Chatgpt still uses a similar religion-influenced language but it's not as weird.

gwerbin 7 hours ago | parent [-]

We were messing around at work last week, building an AI agent that was supposed to respond only with JSON data. GPT and Sonnet gave us more or less what we wanted, but Gemma insisted on giving us a Python code snippet.

otabdeveloper4 7 hours ago | parent | next [-]

> that was supposed to only respond with JSON data.

You need to constrain token sampling with grammars if you actually want to do this.
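
A toy illustration of the idea, with a fake "model" and a crude JSON-prefix check standing in for a real grammar (production setups use e.g. llama.cpp GBNF grammars or libraries like outlines; every name below is illustrative):

```python
# Toy sketch of grammar-constrained sampling: before emitting each token,
# mask out every candidate that would make the output stop being a valid
# JSON prefix. The "model" here is just a function ranking a tiny vocab.
import json

def is_valid_json_prefix(text):
    # Crude stand-in for a real grammar: accept `text` if some small
    # completion turns it into valid JSON.
    for suffix in ("", "}", '"}', ': 0}', '": 0}'):
        try:
            json.loads(text + suffix)
            return True
        except ValueError:
            pass
    return False

def constrained_decode(rank_tokens, vocab, max_steps=20):
    out = ""
    for _ in range(max_steps):
        for tok in rank_tokens(out, vocab):   # model's preference order
            if is_valid_json_prefix(out + tok):
                out += tok                    # only grammar-legal tokens pass
                break
        try:
            json.loads(out)                   # complete JSON -> stop
            return out
        except ValueError:
            continue
    return out
```

Even if the fake model ranks `print(` first (the "Python snippet" failure mode), the mask forces it onto JSON-legal tokens, so the output cannot be anything but JSON.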

written-beyond 7 hours ago | parent [-]

That reduces the quality of the response though.

debugnik 6 hours ago | parent | next [-]

As opposed to emitting non-JSON tokens and having to throw away the answer?

written-beyond 5 hours ago | parent | next [-]

Don't shoot the messenger

jgalt212 6 hours ago | parent | prev [-]

Or just run json.dumps on the correct answer in the wrong format.

Der_Einzige 5 hours ago | parent | prev [-]

THIS IS LIES: https://blog.dottxt.ai/say-what-you-mean.html

I will die on this hill, and I have a bunch of other arXiv links from better peer-reviewed sources to back my claim up (i.e. NeurIPS-caliber papers, with more citations than the ones claiming it does harm the outputs).

Any actual impact of structured/constrained generation on the outputs is a SAMPLER problem, and you can fix what little impact may exist with things like https://arxiv.org/abs/2410.01103

Decoding is intentionally nerfed/kept to top_k/top_p by model providers because of a conspiracy against high temperature sampling: https://gist.github.com/Hellisotherpeople/71ba712f9f899adcb0...

iugtmkbdfil834 an hour ago | parent | next [-]

I would honestly like to hope people would be more up in arms over this, but based on historical human tendencies, convenience will win here.

otabdeveloper4 an hour ago | parent | prev [-]

I use LLMs for Actual Work (boring shit).

I always set temperature to literally zero and don't sample.
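
"Temperature zero" just means greedy decoding: take the argmax logit instead of sampling from the softmax. A minimal sketch over raw logits (function names are illustrative):

```python
# Greedy vs. temperature sampling over raw logits. At temperature 0 the
# choice is deterministic (argmax); above 0 we sample from the softmax.
import math
import random

def softmax(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature=0.0):
    if temperature == 0.0:
        # "Don't sample": always the single most likely token.
        return max(range(len(logits)), key=logits.__getitem__)
    probs = softmax(logits, temperature)
    return random.choices(range(len(logits)), weights=probs)[0]
```

The determinism is the point for boring work: the same prompt yields the same token sequence every run.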

cubefox 5 hours ago | parent | prev [-]

Gemma≠Gemini

Galanwe 8 hours ago | parent | prev | next [-]

I avoid talking to LLMs in my native tongue (French), they always talk to me with a very informal style and lots of emojis. I guess in English it would be equivalent to frat-bro talk.

conception 7 hours ago | parent | next [-]

Have you tried asking them to be more formal in talking with you?

jgalt212 6 hours ago | parent [-]

Prompt engineering and massaging should be unnecessary by now for such trivial asks.

ahoka 8 hours ago | parent | prev [-]

"I guess in English it would be equivalent to frat-bro talk."

But it does that!

UltraSane 4 hours ago | parent [-]

Gemini doesn't talk like that to me ever.

weatherlite 6 hours ago | parent | prev | next [-]

> and can't help but think whether AI can push to radicalize susceptible individuals

What kind of things did it tell you ?

js8 7 hours ago | parent | prev | next [-]

When I was a kid, I used to say "Ježíšmarjá" (literally "Jesus and Mary") a lot, despite being atheist growing up in communist Czechoslovakia. It was just a very common curse appearing in television and in the family, I guess.

elorant 7 hours ago | parent | prev | next [-]

Gemini loves to assume roles and follows them to the letter. It's funny and scary at times how well it preserves character for long contexts.

tartoran 7 hours ago | parent [-]

LLMs don’t love anything, they just fall into statistical patterns and what you observe here is likely due to the data it was trained on.

layer8 5 hours ago | parent | next [-]

Let me introduce you to https://en.wikipedia.org/wiki/Figurative_language.

stanleykm 7 hours ago | parent | prev [-]

yes we know the person you are replying to was just using a turn of phrase.

gus_massa 7 hours ago | parent | prev | next [-]

To troll the AI, I like to ask "Is Santa real?"

pixl97 6 hours ago | parent [-]

The individual or the construct?

layer8 5 hours ago | parent | next [-]

The Luwian god.

gus_massa 4 hours ago | parent | prev [-]

In English I expect an answer full of mental gymnastics, answering the second while pretending to answer the first.

Perhaps in Arabic or Chinese the AI gives a straight answer.

jedbrooke 4 hours ago | parent [-]

I tried it in Chinese and ChatGPT said No, and then gave a history of Saint Nicholas

newyankee 7 hours ago | parent | prev [-]

I mean, if it is citing the sources, there is only so much that can be done without altering the original meaning.

otabdeveloper4 7 hours ago | parent [-]

The sources Gemini cites are usually something completely unrelated to its response. (Not like you're gonna go check anyways.)