altcognito 3 hours ago
Billions upon billions going to these companies. A 25k reward from a selected group of people if you help us determine whether or not someone can use our tool to generate weapons of mass destruction.
Schlagbohrer 2 hours ago
It's worse than that: for partial successes they encourage people to submit the attempt, but reserve the right to pay nothing (they may, at their discretion, give a partial reward if they feel like it).
chromacity 35 minutes ago
Because it can't, and this is a publicity stunt. It achieves three goals:

1) It underscores to the general public that the models are amazingly powerful, and that if you're not using them, your competitors will out-innovate you.

2) It sends the message to regulators that they don't need to do anything, because the companies are already working diligently to prevent harm.

3) It sends the message to regulators that they should certainly be regulating "open-source" models, because those hippies are not doing rigorous safety testing.

Both Anthropic and OpenAI have been playing that game for years.
2ndorderthought an hour ago
Though it could be a honeypot: they are probably hoping to train on all the ways someone might try to do this. Or maybe funds are really low and they need a smokescreen so that a really bad actor can go in and try to do it for real.
cbg0 3 hours ago
They're probably expecting that it can be done without too much effort, so they just want to see all the unique ways people are doing it.