| ▲ | dizzy3gg 6 hours ago |
| Why is the being downvoted? |
|
| ▲ | jermaustin1 6 hours ago | parent | next [-] |
| Because the article shows it isn't Gemini that is the issue, it is the tool calling. When Gemini can't get to a file (because it is blocked by .gitignore), it then uses cat to read the contents. I've watched this with GPT-OSS as well. If the tool blocks something, it will try other ways until it gets it. The LLM "hacks" you. |
| |
| ▲ | lazide 5 hours ago | parent [-] | | And… that isn’t the LLM’s fault/responsibility? | | |
| ▲ | jermaustin1 2 hours ago | parent | next [-] | | How can an LLM be at fault for something? It is a text prediction engine. WE are giving them access to tools. Do we blame the saw for cutting off our finger?
Do we blame the gun for shooting ourselves in the foot?
Do we blame the tiger for attacking the magician? The answer to all of those things is: no. We don't blame the thing doing what it is meant to be doing no matter what we put in front of it. | | |
| ▲ | lazide an hour ago | parent [-] | | It was not meant to give access like this. That is the point. If a gun randomly goes off and shoots someone without someone pulling the trigger, or a saw starts up when it’s not supposed to, or a car’s brakes fail because they were made wrong - companies do get sued all the time. Because those things are defective. |
| |
| ▲ | ceejayoz 5 hours ago | parent | prev [-] | | As the apocryphal IBM quote goes: "A computer can never be held accountable; therefore, a computer must never make a management decision." |
|
|
|
| ▲ | NitpickLawyer 6 hours ago | parent | prev [-] |
| Because it misses the point. The problem is not the model being in a cloud. The problem is that as soon as "untrusted inputs" (i.e. web content) touch your LLM context, you are vulnerable to data exfil. Running the model locally has nothing to do with avoiding this. Nor does "running code in a sandbox", as long as that sandbox can hit http / dns / whatever. The main problem is that LLMs share both "control" and "data" channels, and you can't (so far) disambiguate between the two. There are mitigations, but nothing is 100% safe. |
| |
| ▲ | mkagenius 6 hours ago | parent [-] | | Sorry, I didn't elaborate. But "completely local" meant not doing any network calls unless specifically approved. When llm calls are completely local you just need to monitor a few explicit network calls to be sure. | | |
| ▲ | pmontra 4 hours ago | parent [-] | | In a realistic and useful scenario, how would you approve or deny network calls made by a LLM? |
|
|