| ▲ | mkagenius 6 hours ago |
| Sooner or later I believe, there will be models which can be deployed locally on your mac and are as good as say Sonnet 4.5. People should shift to completely local at that point. And use sandbox for executing code generated by llm. Edit: "completely local" meant not doing any network calls unless specifically approved. When llm calls are completely local you just need to monitor a few explicit network calls to be sure.
Unlike gemini then you don't have to rely on certain list of whitelisted domains. |
|
| ▲ | KK7NIL 6 hours ago | parent | next [-] |
| If you read the article you'd notice that running an LLM locally would not fix this vulnerability. |
| |
| ▲ | pennomi 6 hours ago | parent | next [-] | | Right, you’d have to deny the LLM access to online resources AND all web-capable tools… which severely limits an agent’s capabilities. | |
| ▲ | yodon 6 hours ago | parent | prev [-] | | From the HN guidelines[0]: >Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that". [0]: https://news.ycombinator.com/newsguidelines.html | | |
|
|
| ▲ | kami23 6 hours ago | parent | prev | next [-] |
| I've been repeating something like 'keep thinking about how we would run this in the DC' at work. The cycles of pushing your compute outside the company and then bringing it back in once the next VP/Director/CTO starts because they need to be seen as doing something, and the thing that was supposed to make our lives easier is now very expensive... I've worked on multiple large migrations between DCs and cloud providers for this company and the best thing we've ever done is abstract our compute and service use to the lowest common denominator across the cloud providers we use... |
|
| ▲ | pmontra 6 hours ago | parent | prev | next [-] |
| That's not easy to accomplish. Even a "read the docs at URL" is going to download a ton of stuff. You can bury anything into those GETs and POSTs. I don't think that most developers are going to do what I do with my Firefox and uMatrix, that is whitelisting calls. And anyway, how can we trust the whitelisted endpoint of a POST? |
|
| ▲ | tcoff91 6 hours ago | parent | prev | next [-] |
| At the time that there's something as good as sonnet 4.5 available locally, the frontier models in datacenters may be far better. People are always going to want the best models. |
|
| ▲ | api 6 hours ago | parent | prev | next [-] |
| Can't find 4.5, but 3.5 Sonnet is apparently about 175 billion parameters. At 8-bit quantization that would fit on a box with 192 gigs of unified RAM. The most RAM you can currently get in a MacBook is 128 gigs, I think, and that's a pricey machine, but it could run such a model at 4-bit or 5-bit quantization. As time goes on it only gets cheaper, so yes this is possible. The question is whether bigger and bigger models will keep getting better. What I'm seeing suggests we will see a plateau, so probably not forever. Eventually affordable endpoint hardware will catch up. |
|
| ▲ | fragmede 6 hours ago | parent | prev | next [-] |
| it's already here with qwen3 on a top end Mac and lm-studio. |
|
| ▲ | dizzy3gg 6 hours ago | parent | prev [-] |
| Why is the being downvoted? |
| |
| ▲ | jermaustin1 6 hours ago | parent | next [-] | | Because the article shows it isn't Gemini that is the issue, it is the tool calling. When Gemini can't get to a file (because it is blocked by .gitignore), it then uses cat to read the contents. I've watched this with GPT-OSS as well. If the tool blocks something, it will try other ways until it gets it. The LLM "hacks" you. | | |
| ▲ | lazide 5 hours ago | parent [-] | | And… that isn’t the LLM’s fault/responsibility? | | |
| ▲ | jermaustin1 2 hours ago | parent | next [-] | | How can an LLM be at fault for something? It is a text prediction engine. WE are giving them access to tools. Do we blame the saw for cutting off our finger?
Do we blame the gun for shooting ourselves in the foot?
Do we blame the tiger for attacking the magician? The answer to all of those things is: no. We don't blame the thing doing what it is meant to be doing no matter what we put in front of it. | | |
| ▲ | lazide an hour ago | parent [-] | | It was not meant to give access like this. That is the point. If a gun randomly goes off and shoots someone without someone pulling the trigger, or a saw starts up when it’s not supposed to, or a car’s brakes fail because they were made wrong - companies do get sued all the time. Because those things are defective. |
| |
| ▲ | ceejayoz 5 hours ago | parent | prev [-] | | As the apocryphal IBM quote goes: "A computer can never be held accountable; therefore, a computer must never make a management decision." |
|
| |
| ▲ | NitpickLawyer 6 hours ago | parent | prev [-] | | Because it misses the point. The problem is not the model being in a cloud. The problem is that as soon as "untrusted inputs" (i.e. web content) touch your LLM context, you are vulnerable to data exfil. Running the model locally has nothing to do with avoiding this. Nor does "running code in a sandbox", as long as that sandbox can hit http / dns / whatever. The main problem is that LLMs share both "control" and "data" channels, and you can't (so far) disambiguate between the two. There are mitigations, but nothing is 100% safe. | | |
| ▲ | mkagenius 6 hours ago | parent [-] | | Sorry, I didn't elaborate. But "completely local" meant not doing any network calls unless specifically approved. When llm calls are completely local you just need to monitor a few explicit network calls to be sure. | | |
| ▲ | pmontra 4 hours ago | parent [-] | | In a realistic and useful scenario, how would you approve or deny network calls made by a LLM? |
|
|
|