KaiserPro 12 hours ago

When I worked as an SRE/sysadmin at a FAANG with a "world leading" AI lab (now run by a teenage data labeller), I was asked to use a modified version of a foundation model which was steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but it wasn't all that helpful.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API is tied to a bank account, and vibe coding a command and control system on a very public system seems like a bad choice.

throwaway2037 15 minutes ago | parent | next [-]

    > now run by a teenage data labeller
Do you mean Alexandr Wang? Wiki says he is 28 years old. I don't understand.
ACCount37 11 hours ago | parent | prev | next [-]

As if that makes any difference to cybercriminals.

If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard. I had to bypass the "safety" filters for a few things, and it took about an hour.

Milderbole 11 hours ago | parent | prev | next [-]

If the article is not just marketing fluff, I assume a bad actor would select Claude not because it's good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to for most coding copilots because the model was trained on a good range of data reflecting Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns behind the code of the systems you're trying to break. Or use Claude to write a phishing attack, because then the output is more likely to resemble what our eyes would expect.

Aeolun 10 hours ago | parent | next [-]

Why would someone in China not select Claude? If the people at Claude don't notice, then it's a pure win. If they do notice, what are they going to do, arrest you? The worst thing they can do is block your account, and then you have to make a new one with a newly issued false credit card. Whoopie doo.

criemen 10 hours ago | parent [-]

> Why would someone in China not select Claude?

Because Anthropic doesn't provide services in China? See https://www.anthropic.com/supported-countries

dboreham 9 hours ago | parent | next [-]

Can confirm Claude doesn't even work in Hong Kong. That said I fired up my VPN and...then it did work.

10 hours ago | parent | prev | next [-]
[deleted]
xadhominemx 3 hours ago | parent | prev [-]

Not really a relevant issue or concern for a nation-state-backed hack…

KaiserPro 9 hours ago | parent | prev [-]

What you're describing would be plausible if this were about exploiting Claude to get access to organisations that use it.

The gist of the Anthropic thing is that "Claude made, deployed and coordinated" a standard malware attack. Which is a _very_ different task.

Side note: most code assistants are trained on broadly similar coding datasets (i.e. GitHub scrapes).

maddmann 11 hours ago | parent | prev | next [-]

Good old Meta and its teenage data labeler

heresie-dabord 11 hours ago | parent [-]

I propose a project that we name Blarrble, it will generate text.

We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble to fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?

iterateoften 10 hours ago | parent | prev | next [-]

    > your API is tied to a bank account,

There are a lot of middlemen like OpenRouter who gladly accept crypto.

mrtesthah 2 hours ago | parent [-]

Can you show me exactly how to pay for OpenRouter with Monero? Because it doesn't seem possible.

Tiberium 23 minutes ago | parent [-]

There are tons of websites that will happily swap Monero for Ethereum, which you can then use to pay. Most of those websites never actually do KYC or proper fund verification unless you're operating with huge amounts or are suspicious in some other way.

jgalt212 11 hours ago | parent | prev | next [-]

> now run by a teenage data labeller

sick burn

y-curious 10 hours ago | parent | next [-]

I don’t know anything about him, but if he is running a department at Meta, he is at the very least a political genius as well as a teenage data labeller

tim333 an hour ago | parent | next [-]

I was just watching the Y Combinator interview with Alexandr Wang, who I guess is the one being referred to: https://youtu.be/5noIKN8t69U

The teenage data labeler thing was a bit of an exaggeration. He did found scale.ai at nineteen, which does data labeling amongst other things.

tomrod 9 hours ago | parent | prev | next [-]

It's a simple heuristic that will save a lot of time: something that seems too good to be true usually is.

antonvs 7 hours ago | parent | prev | next [-]

Presumably this is all referring to Alexandr Wang, who's 28 now. The data-labeling company he co-founded, Scale AI, was acquired by Meta at a valuation of nearly $30 billion.

But I suppose the criticism is that he doesn't have deep AI model research credentials. Which raises the age-old question of how much technical expertise is really needed in executive management.

KaiserPro 4 hours ago | parent | next [-]

> how much technical expertise is really needed in executive management.

For running an AI lab? A lot. Put it this way: part of the reason Meta has squandered its lead is that it decided to fill its genAI dept (pre-Wang) with non-ML people.

Now that's fine, if they had decent product design and a clear road map for the products they want to release.

But no, they are just learning ML as they go, coming up with bullshit ideas and seeing what sticks.

Where it gets worse is that they take the FAIR team and pass them around like a soiled blanket: "You're a team that is pushing the boundaries in research, but also you need to stop doing that and work on this chatbot that pretends to be a black gay single mother."

All the while you have a sister department, RL-L, run by Abrash, who lets you actually do real research.

Which means most of FAIR have fucked off to somewhere less stressful, more concentrated on actually doing research rather than posting about how they're doing research.

Wang's missteps are numerous; the biggest one is re-platforming the training system. That's a two-year project right there, for no gain. It also force-forks you from the rest of the ML teams. Given how long it took to move to MAST from fblearner, it's going to be a long slog. And that's before you tackle increasing GPU efficiency.

tomrod 3 hours ago | parent | prev | next [-]

> Which raises the age-old question of how much technical expertise is really needed in executive management.

Whomever you choose to set as the core decision maker, you get out whatever their expertise is, with only minor impact from those guiding them.

Scaling a business is a skill set. It's not a skill set that captures or expands the frontier of AI, so it's fair to label the gentleman's expensive buyout a product-development play instead of a technology play.

NewsaHackO 5 hours ago | parent | prev [-]

Hopefully he isn’t referring to Alex Wang, as that would invalidate everything else he said in his comment

lijok 8 hours ago | parent | prev [-]

They hired a teenager to run one of their departments and thought that meant the teenager was smart instead of realizing that Meta’s department heads aren’t

antonvs 4 hours ago | parent [-]

> They hired a teenager to run one of their departments

Except they didn’t. The person in question was 28 when they hired him.

He was a teenager when he cofounded the company that was acquired for thirty billion dollars. But the taste of those really sour grapes must be hard to deal with.

KaiserPro 3 hours ago | parent | next [-]

> The person in question was 28 when they hired him.

Comic hyperbole, darling. I know that's hard to understand, especially when you're one of the start-up elect who still believes.

But FAIR is dead, Meta has a huge brain drain, and Alex only has hardware and money to fix it. Worse for him, he's surrounded by poisonous empire builders, and/or much more effective courtesans who can play Zuck much better than he can.

Wang needs Zuck, and Zuck needs results. The problem is, people keep giving Zuck ideas, like robotics, and world models, and AI sex bots.

Wang has to somehow keep up productivity and integrate into Meta's wider culture. Oh, and if he wants any decent amount of that $30 billion, he's gotta stick it out for 4 years.

I did my time and got my four years of RSUs from the buyout. My boss didn't, and neither did the CTO or about 2/3rds of the team. Meta will eat you, and I don't envy him.

NewsaHackO 2 hours ago | parent | prev [-]

I could not imagine being as salty as the original poster seems to be about Alex Wang. Holding that amount of hate for a superior who is more successful than you can't be good for the soul

lijok an hour ago | parent [-]

You’re taking this a tad too seriously

williadc 7 hours ago | parent | prev [-]

Alexandr Wang is 28 years old, the same age as Mark Zuckerberg was when Facebook IPO'ed.

smrtinsert 6 hours ago | parent [-]

A business where the distinguishing factor was exclusivity, not technical excellence, so it tracks.

semiinfinitely 2 hours ago | parent | prev | next [-]

meta was never "world leading"

robrenaud 14 minutes ago | parent [-]

pytorch

cadamsdotcom an hour ago | parent | prev [-]

I think the high-order bit here is that you were working with models from previous generations.

In other words, since the latest generation of models has greater capabilities, the story might be very different today.

Tiberium 20 minutes ago | parent [-]

Not sure why you're being downvoted; your observation is correct here. Newer models are indeed a lot better, and even at the time, that foundation model (even if fine-tuned) might've been worse than a commercial model from OpenAI/Anthropic.