derektank a day ago

A problem, yes, but I think GP is correct in comparing the problem to that of human workers. The solution there has historically been RBAC and risk management. I don't see any conceptual difference between a human and an automated system on this front.

nkrisc a day ago

> I don’t see any conceptual difference between a human and an automated system on this front

If an employee of a third party contractor did something like that, I think you’d have better chances of recovering damages from them as opposed to from OpenAI for something one of its LLMs does on your behalf.

There are probably other practical differences.

lelandfe a day ago

We need to take a page from baseball and examine Hacks Above Replacement

conradev a day ago

If anything, the limit of RBAC is ultimately the human attention required to provision, maintain and monitor the systems. Endpoint security monitoring is only as sophisticated as the algorithm that does the monitoring.

I'm actually most worried about the ease of deploying RBAC with more sophisticated monitoring to control humans but for goals that I would not agree with. Imagine every single thing you do on your computer being checked by a model to make sure it is "safe" or "allowed".
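The scenario described above can be made concrete. Here is a minimal sketch (all names and rules hypothetical, not from any real system) of classic RBAC extended with an automated monitor that must also approve each action, the pattern the comment warns could be turned on humans:

```python
# Hypothetical sketch: role-based access control (RBAC) where an
# automated monitor also gates every action.

ROLE_PERMISSIONS = {
    "analyst": {"read_report"},
    "admin": {"read_report", "delete_report"},
}

def monitor_allows(user, action):
    # Stand-in for a model-based monitor; here, a trivial rule that
    # blocks destructive actions outside business hours.
    return not (action.startswith("delete_") and user.get("after_hours"))

def authorize(user, action):
    # Standard RBAC check: the role must grant the permission...
    if action not in ROLE_PERMISSIONS.get(user["role"], set()):
        return False
    # ...and the automated monitor must also approve the action.
    return monitor_allows(user, action)

print(authorize({"role": "admin", "after_hours": False}, "delete_report"))  # True
print(authorize({"role": "admin", "after_hours": True}, "delete_report"))   # False
print(authorize({"role": "analyst"}, "delete_report"))                      # False
```

The point of concern is the second function: once `monitor_allows` is an opaque model rather than a legible rule, the effective policy is whatever the model decides, not what the role table says.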

moron4hire a day ago

A human worker can be coached, fired, or sued; any number of consequences can follow a human worker's mistake or willful attack. But AI companies, as we have seen with almost every issue so far, will be given a pass while Sam Altman sycophants cheer and talk about how it'll "get better" in the future, just trust them.

SoleilAbsolu a day ago

Yeah, if I hung a sign on my door saying "Answers generated by this person may be incorrect," my boss and HR would quickly put me on a PIP, or worse. If a physical product didn't do what it claimed to do, it would be recalled and the maker would get sued. Why does AI get a pass for pooping out plausible but incorrect, and sometimes very dangerous, answers?

philipallstar a day ago

> Yeah, if I hung a sign on my door saying "Answers generated by this person may be incorrect" my boss and HR would quickly put me on a PIP, or worse

I also have never written a bug, fellow alien.

premiumLootBox a day ago

I do not fear the employee who makes a mistake; I fear the AI that will make hundreds of mistakes in thousands of companies, endlessly.

philipallstar 14 hours ago

As employees also do across thousands of companies.


stonogo a day ago

The difference is "accountability," and it always will be.