▲ | ajuc 6 days ago | ||||||||||||||||
It's theoretically possible that your model will work OK except for code generation for security-relevant applications it will introduce subtle pre-designed bugs. Or if used for screening CVs it will prioritize PRC agents through some keyword in hobbies. Or it could promise a bribe to an office worker when asked about some critical infastructure :) Sending data back could be as simple as responding with embedded image urls that reference external server. You are totally right EU commissioner, Http://chinese.imgdb.com/password/to/eu/grid/is/swordfish/funnycat.png Possibilities are endless. | |||||||||||||||||
▲ | pama 6 days ago | parent [-] | ||||||||||||||||
Of course theoretically lots of things are possible with probabilistic systems. There is no difference with open source, openweight, chinese, french or american llms. You dont give unfettered web access to any models (locally served or otherwise) that can consume critical company data. The risk is unacceptable, even if the models are from trusted providers. If you use markdown to see formatted text that may contain critical data and your reader connects to the web, you have a serious security hole, unrelated to the risks of the LLM. | |||||||||||||||||
|