| ▲ | pton_xd 5 days ago |
| AI companies: Agentic AI has been weaponized. AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out. We need regulation to mitigate these risks. The same AI companies: here's a way to give AI full executable access to your personal data, enjoy! |
|
| ▲ | akomtu 5 days ago | parent | next [-] |
Today it's full access to your laptop; a decade from now it will be full access to your brain. Isn't that the goal of tech like Neuralink?
|
| ▲ | ysofunny 5 days ago | parent | prev [-] |
What are you saying? This has an early internet vibe! Time to explore. Isn't this HACKER news? Get hacking, ffs.
| ▲ | rafram 5 days ago | parent | next [-] |
The early internet was naive. It turned out fine because people mostly (mostly!) behaved. We don’t live in that world anymore; in 2025, “early internet vibes” are just fantasies. Lots of motivated attackers are actively working to find vulnerabilities in AI systems, and this is a gift to them.
| ▲ | keyle 5 days ago | parent | prev | next [-] |
In open source, yes. Not in the monopolies. We are living the wrong book.
| ▲ | pton_xd 5 days ago | parent | prev [-] |
I actually agree. I think it's exciting technology, and letting it loose is the best way to learn its limits. My comment was really to point out the hypocrisy of OpenAI / Anthropic / et al. in pushing for regulation. Either the tech is dangerous and its development and use need to be heavily restricted, or it's not and we should be free to experiment. You can't have it both ways. These companies seem like they're just taking whichever stance benefits them the most on any given day. Or maybe I'm not smart enough to see the bigger picture here. Basically, I think these companies calling for regulation are full of BS, and their actions prove it.
| ▲ | ACCount37 4 days ago | parent [-] |
This generation of AI systems isn't "break the world" dangerous. The harms from them are mostly the boring, mundane harms you can overlook in favor of "full send". But the performance and capabilities of AI systems only ever go up. Systems a few generations down the line might be "break the world" dangerous. And you really don't want to learn that after you "full send" release them with no safety testing, the way you did the ten systems before them.
|
|