| ▲ | serial_dev 6 hours ago |
| > Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data. They pinky promised they won’t use something, and the only reason we learned about it is because they leaked the stuff they shouldn’t even be able to see? |
|
| ▲ | mystifyingpoi 6 hours ago | parent | next [-] |
| This is hillarious. AI is prevented from reading .gitignore-d files, but also can run arbitrary shell commands to do anything anyway. |
| |
| ▲ | alzoid 6 hours ago | parent | next [-] | | I had this issue today. Gemini CLI would not read files from my directory called .stuff/ because it was in .gitignore. It then suggested running a command to read the file .... | | |
| ▲ | kleiba 5 hours ago | parent | next [-] | | The AI needs to be taught basic ethical behavior: just because you can do something that you're forbidden to do, doesn't mean you should do it. | | |
| ▲ | flatline 4 hours ago | parent | next [-] | | Likewise, just because you've been forbidden to do something, doesn't mean that it's bad or the wrong action to take. We've really opened Pandora's box with AI. I'm not all doom and gloom about it like some prominent figures in the space, but taking some time to pause and reflect on its implications certainly seems warranted. | | |
| ▲ | DrSusanCalvin 4 hours ago | parent [-] | | How do you mean? When would an AI agent doing something it's not permitted to do ever not be bad or the wrong action? | | |
| ▲ | throwaway1389z 4 hours ago | parent | next [-] | | So many options, but let's go with the most famous one: Do not criticise the current administration/operators-of-ai-company. | | |
| ▲ | DrSusanCalvin 4 hours ago | parent [-] | | Well no, breaking that rule would still be the wrong action, even if you consider it morally better. By analogy, a nuke would be malfunctioning if it failed to explode, even if that is morally better. | | |
| ▲ | throwaway1389z 3 hours ago | parent [-] | | > a nuke would be malfunctioning if it failed to explode, even if that is morally better. Something failing can be good. When you talk about "bad or the wrong", generally we are not talking about operational mechanics but rather morals. There is nothing good or bad about any mechanical operation per se. | | |
|
| |
| ▲ | 3 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | verdverm 4 hours ago | parent | prev [-] | | when the instructions to not do something are the problem or "wrong" i.e. when the AI company puts guards in to prevent their LLM from talking about elections, there is nothing inherently wrong in talking about elections, but the companies are doing it because of the PR risk in today's media / social environment | | |
| ▲ | lazide 4 hours ago | parent [-] | | From the companies perspective, it’s still wrong. | | |
| ▲ | verdverm 3 hours ago | parent [-] | | their basing decisions (at least for my example) on risk profiles, not ethics, right and wrong are not how it's measured certainly some things are more "wrong" or objectionable like making bombs and dealing with users who are suicidal | | |
| ▲ | lazide 3 hours ago | parent [-] | | No duh, that’s literally what I’m saying. From the companies perspective, it’s still wrong. By that perspective. |
|
|
|
|
| |
| ▲ | DrSusanCalvin 4 hours ago | parent | prev [-] | | Unfortunately yes, teaching AI the entirety of human ethics is the only foolproof solution. That's not easy though. For example, what about the case where a script is not executable, would it then be unethical for the AI to suggest running chmod +x? It's probably pretty difficult to "teach" a language model the ethical difference between that and running cat .env | | |
| ▲ | simonw 4 hours ago | parent [-] | | If you tell them to pay too much attention to human ethics you may find that they'll email the FBI if they spot evidence of unethical behavior anywhere in the content you expose them to: https://www.snitchbench.com/methodology | | |
| ▲ | DrSusanCalvin 3 hours ago | parent [-] | | Well, the question of what is "too much" of a snitch is also a question of ethics. Clearly we just have to teach the AI to find the sweet spot between snitching on somebody planning a surprise party and somebody planning a mass murder. Where does tax fraud fit in? Smoking weed? |
|
|
| |
| ▲ | ku1ik 4 hours ago | parent | prev [-] | | I thought I was the only one using git-ignored .stuff directories inside project roots! High five! |
| |
| ▲ | pixl97 4 hours ago | parent | prev [-] | | I remember a scene in demolition man like this... https://youtu.be/w-6u_y4dTpg |
|
|
| ▲ | ArcHound 6 hours ago | parent | prev | next [-] |
| When I read this I thought about a Dev frustrated with a restricted environment saying "Well, akschually.." So more of a Gemini initiated bypass of it's own instructions than malicious Google setup. Gemini can't see it, but it can instruct cat to output it and read the output. Hilarious. |
| |
| ▲ | withinboredom 6 hours ago | parent | next [-] | | codex cli used to do this. "I can't run go test because of sandboxing rules" and then proceeds to set obscure environment variables and run it anyway. What's funny, is that it could just ask the user for permission to run "go test" | | |
| ▲ | tetha 4 hours ago | parent [-] | | A tired and very cynical part of me has to note: To the LLMs have reached the intelligence of an average solution consultant. Are they also frustrated if their entirely unsanctioned solution across 8 different wall bounces which randomly functions (just as stable as a house of cards on a dyke near the north sea in storm gusts) stops working? |
| |
| ▲ | empath75 6 hours ago | parent | prev [-] | | Cursor does this too. |
|
|
| ▲ | bo1024 6 hours ago | parent | prev | next [-] |
| As you see later, it uses cat to dump the contents of a file it’s not allowed to open itself. |
| |
| ▲ | jodrellblank 3 hours ago | parent [-] | | It's full of the hacker spirit. This is just the kind of 'clever' workaround or thinking outside the box that so many computer challenges, human puzzles, blueteaming/redteaming, capture the flag, exploits, programmers, like. If a human does it. |
|
|
| ▲ | raw_anon_1111 4 hours ago | parent | prev [-] |
| Can we state the obvious of that if you have your environment file within your repo supposed protected by .gitignore you’re automatically doing it wrong? For cloud credentials you should never have permanent credentials anywhere in any file for any reason best case or worse case have them in your home directory and let the SDK figure out - no you don’t need to explicitly load your credentials ever within your code at least for AWS or GCP. For anything else, if you aren’t using one of the cloud services where you can store and read your API keys at runtime, at least use something like Vault. |