pavel_lishin 2 days ago

> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.

I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as by a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgment alone.

Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.

altruios 6 hours ago | parent | next [-]

I assume if they get fired by the AI during the experiment they are still paid to sit at home. It would not invalidate the experiment.

pessimizer 6 hours ago | parent [-]

Why do you assume that?

notahacker 4 hours ago | parent | next [-]

It's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and its P&L account in situations where it would be legal and normal to do so, is it really running a business?) with avoiding the massive ethical issue of people being arbitrarily fired because a computer glitched. Whether that's what they actually do is tbc.


anon84873628 5 hours ago | parent | prev | next [-]

The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.

yieldcrv 5 hours ago | parent [-]

People can delegate

john_strinlai 3 hours ago | parent [-]

Sure, but in this case, having the AI delegate any important task to humans sort of undermines the entire premise.

jayd16 7 hours ago | parent | prev | next [-]

You can still wear eye protection during the safety test...

I don't think we need to have real human risk to get results from the experiment.


jaxefayo 6 hours ago | parent | prev | next [-]

The article mentions:

“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”

which was refreshing to read.

evanelias 4 hours ago | parent | next [-]

Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."

Personally I find the entire tone of the article to be creepy and disturbing.

hamdingers 6 hours ago | parent | prev [-]

I take that to mean "we won't let the AI refuse to pay them or otherwise break employment law" not that they could never be fired.

HWR_14 5 hours ago | parent [-]

I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.

ceejayoz 7 hours ago | parent | prev | next [-]

They could, in theory, have contracts that say the AI can't fire them.

compiler-guy 7 hours ago | parent | next [-]

It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.

I doubt the experiment is set up that way, but that would be an ethical way to do it.

wil421 7 hours ago | parent | prev [-]

There’s no way they are putting that into a contract. HR departments are already using AI to fire people.

ceejayoz 6 hours ago | parent [-]

"This specific AI can't fire anyone without human review, because it's experimental" is something you could easily add.

joe_the_user 5 hours ago | parent | prev | next [-]

At this point, I don't think an AI can legally hold a contract with a person, so I don't think an AI could hire a human, and therefore it couldn't fire one either.

That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.

But the thing is, if this startup is telling these people that they are employees of the company, not of "Luna", it gives them the impression that all their interactions with the AI are a kind of sham, a game, not to be taken seriously, and that they are basically being paid to role-play as "Luna's employees".

And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to know what kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I get the vague impression Luna doesn't have anything like that sort of ability, though really we just aren't given enough information to make any accurate determination.
