| ▲ | jmkni 11 hours ago |
| That whole article felt like "Claude is so good Chinese hackers are using it for espionage" marketing fluff tbh |
|
| ▲ | ndiddy 11 hours ago | parent | next [-] |
Reminds me of when the PlayStation 2 came out and Sony started planting articles about how it was so powerful that the Iraqi government was buying thousands of them to turn into a supercomputer (including unnamed military officials bringing up Sony marketing points). https://www.wnd.com/2000/12/7640/
| |
| ▲ | y-curious 10 hours ago | parent | next [-] |
Is there any compelling evidence that this was marketing done by Sony? Yes, the part about government officials advertising the device doesn't pass the sniff test for me, but this Reddit thread[1] makes the whole story seem plausible. America and Japan really did impose shipping restrictions on Iraq, and people did eventually chain PS3s together for cheap computing.

1: https://www.reddit.com/r/AskHistorians/comments/l3hp2i/did_s...
| ▲ | Keyframe 9 hours ago | parent [-] |
Apple used similar marketing tactics with the G4, claiming it was "so powerful" it fell under restricted export controls, when in reality that was just an outdated regulation that needed updating.
| |
| ▲ | jmkni 10 hours ago | parent | prev | next [-] |
Ironically, the US military actually did this with the PlayStation 3.
| ▲ | bongodongobob 7 hours ago | parent | prev [-] |
But it was that good for the price point, and you could run Linux on it. That was the Beowulf cluster era; lots of universities were doing that.
| ▲ | duskwuff an hour ago | parent [-] |
You may be mixing up the PS2 and PS3. The PS3 found some marginal use in computing clusters; the PS2 did not.
| ▲ | bongodongobob 32 minutes ago | parent [-] |
A quick Google will show you that it was. I remember because I was in college at the time, and that's how I learned what a Beowulf cluster was. Maybe the PS3 was more successful or more popular, but there were definitely PS2 clusters.
|
| ▲ | mnky9800n 11 hours ago | parent | prev | next [-] |
I could also believe that they fell into the trap of being so good at making Claude that they now think they're good at everything: why hire an infosec person when we can write our own report? And that's why their report violates so many norms, they simply didn't know them.
| |
| ▲ | neves 4 hours ago | parent [-] |
They don't need to hire anyone. They just prompted Claude to write it for them. :-)
|
| ▲ | neves 4 hours ago | parent | prev | next [-] |
Leaning into the "China Menace" will also score you points with the US government. I can see how they could detect an attack using their tools, but tracing it to an organization "sponsored" by the Chinese government looks like bullshit marketing. How did they do it? A Google search? I hold the Chinese government in higher regard; they wouldn't be so easily detected by a startup with no experience in infosec.
|
| ▲ | skybrian 6 hours ago | parent | prev [-] |
| If we’re sharing vibes, “our product is dangerous” seems like an unusual sales tactic outside the defense industry. I’m doubtful that’s how it works? Meanwhile, another reason to make a press release is that you’ll be criticized for the coverup if you don’t. Also, it puts other companies on notice that maybe they should look for this? |
| |
| ▲ | scrps 3 hours ago | parent | next [-] |
I think it might be an "our product IS dangerous, but look, we're on top of it!" kind of deal. Still leaves a funny taste either way.
| ▲ | Barrin92 an hour ago | parent | prev | next [-] |
> unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

Given the valuations and the money these companies burn through, marketing-wise they basically need to play by the same logic as defense companies. They're all running on "we're reinventing the world and building god" to justify their spending; "here's a chatbot (like 20 other ones) that's going to make you marginally more productive" isn't really going to cut it at this point. They're in too deep.
| ▲ | mrtesthah 2 hours ago | parent | prev [-] |
The bulk of OpenAI's and Anthropic's statements about doomsday AGI, and about AI safety in general, also present the company as the sole ethical gatekeeper of the technology, which we must trust and protect lest its unscrupulous rivals win the AI race. So this article is very much in line with that marketing strategy.
|