stale2002 | 16 hours ago
I can't make a nuclear or chemical weapon on my gaming graphics card from five years ago. The same is not true of LLMs.

No, LLMs aren't going to be stopped when anyone with a computer from the last couple of years can run them on their desktop. (There are smaller LLMs that can even be run on your mobile phone!) The laws required to stop this would be draconian; they would require full government monitoring of all computers. And any country or group that "defects" by allowing people to use LLMs would gain a massive benefit.
ben_w | 12 hours ago
> I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago.

You may be surprised to learn that you can make a chemical weapon on your gaming graphics card from 5 years ago. It's just that it will void the warranty well before you have a meaningful quantity of chlorine gas from the salt water you dunked it in while it was switched on.
TeMPOraL | 16 hours ago
Yup. Every government in the world could shut down all LLM providers tomorrow and it wouldn't change a thing - LLMs are fundamentally programs, not a service. There are models lagging 6-12 months behind the current SOTA that you can just download and run on your own GPU today; most research is in the open too, so nothing stops people from continuing it and training new models locally.

At this point, AI research is not possible to stop without destroying humanity as a technological civilization - and it's not even possible to slow it down much, short of the extreme measures Eliezer Yudkowsky was talking about years ago: it would literally take a multinational treaty on halting advanced compute, aggressively enforced - including (but not limited to) preemptively bombing rogue data centers as they pop up around the world.
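To make the "just download and run" point concrete, here is a minimal sketch using the Hugging Face transformers library with a small open-weight model. The model name is only an example, and device_map="auto" assumes the accelerate package is installed; any open-weight checkpoint on disk would work the same way.

    # Minimal local-inference sketch: download an open-weight model once,
    # then run it on whatever hardware is available (GPU if present, else CPU).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example only; swap in any open-weight model

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    prompt = "Summarize why open-weight models are hard to shut down."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Nothing here needs a hosted service: once the weights are on disk, the whole pipeline runs offline.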