JumpCrisscross 4 hours ago

> Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies

Unlikely in America or China. This is not a game either can singularly control, and locking down the R&D means conceding momentum to the party that doesn't. Which means use restrictions will be contained to countries satisfied with playing second fiddle.

Instead, I suspect we'll see momentum towards running software on publisher-controlled servers so the source code can be secured through obscurity. It isn't perfect. But it might be good enough to get us through this transition.
ls612 4 hours ago | parent
If America just banned all Chinese models, that would wipe out most of the open-weights landscape in AI, especially anything close to the frontier. I could easily see that happening if a Mythos-tier model comes out of a Chinese lab in early 2027. It doesn't meaningfully change the research competition between OAI/Anthropic/Google/SpaceX, but it does pad all of their pockets by removing cheap competition, and it gives the government far greater de facto control over AI usage.