AnthonyMouse 5 hours ago

> Now that someone simply needs to verify that the filtered documents are relevant.

Now someone simply needs to verify that the filtered-in documents are relevant and the filtered-out documents are not relevant. But wait, that was the original problem.

johnnyanmac 4 hours ago | parent [-]

If they are trusting AI to replace labor, they should trust AI to be accountable for bad filters. What happens when a human misses a document or two?

AnthonyMouse 4 hours ago | parent [-]

> If they are trusting AI to replace labor, they should trust AI to be accountable for bad filters.

Surely all of the AI hype is true and there are no hypocrites in Corporate America.

> What happens when a human misses a document or two?

If they were obligated to produce it and don't, they can get into some pretty bad trouble with the court. If they hand over something sensitive they weren't required to, they could potentially lose billions of dollars by handing trade secrets to a competitor, or get sued by someone else for violating an NDA, etc.

johnnyanmac 4 hours ago | parent [-]

>Surely all of the AI hype is true and there are no hypocrites in Corporate America.

Worst case, they are right and now we have more efficient processing. Best case, bungling some high-profile cases accelerates us toward proper regulation when a judge tires of AI scapegoats.

I don't see a big downside here.

>If they were obligated to produce it and don't they can get into some pretty bad trouble with the court.

Okay, seems easy enough to map to AI. Just a matter of who we hold accountable for it. The prompter, the company at large, or the AI provider.

AnthonyMouse 2 hours ago | parent [-]

> I don't see a big downside here.

There is an obvious downside for them which is why they don't do it. To make them do it the judge would have to order them to use AI to do it faster, which would make it a lot less reasonable for the judge to get mad at them when the AI messes it up.

> Just a matter of who we hold accountable for it. The prompter, the company at large, or the AI provider.

You're just asking who you want to have refuse to do it, because everybody knows it wouldn't actually be perfect, and the person you want to punish when it goes wrong is the person who is going to say no.

johnnyanmac 2 hours ago | parent [-]

>There is an obvious downside for them which is why they don't do it.

Well yes. This is all academic. I already said in the first comment that they have a financial incentive to stall the courts.

>You're just asking who you want to have refuse to do it....

I just want efficiency. It's a shame we can't have that when it comes to things that might help the people and hurt billionaires.

So what's really wrong with what I'm asking?

AnthonyMouse an hour ago | parent [-]

> I already said in the first comment that they have a financial incentive to stall the courts.

They have a financial incentive to not be found in contempt of court. And another financial incentive to not disclose sensitive information they're not supposed to disclose.

When false positives and false negatives are both very expensive, what's left is a resource-intensive slog to make sure everything is on the right side of the line. "Use the new thing that sacrifices accuracy for haste" is not a solution.
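To put the trade-off above in rough expected-cost terms: when each individual error is very expensive, the per-document review cost stops mattering and the error rate dominates. A minimal sketch, with all numbers (document counts, error rates, dollar costs) purely hypothetical:

```python
# Hypothetical expected-cost comparison for document review in discovery.
# A false negative = withholding a document you were obligated to produce
# (contempt risk); a false positive = disclosing something sensitive you
# weren't required to. Both are assumed to be very expensive.

def expected_cost(n_docs, error_rate, cost_per_error, review_cost_per_doc):
    """Total cost = review labor + expected penalty from misclassification."""
    return n_docs * review_cost_per_doc + n_docs * error_rate * cost_per_error

# Illustrative numbers only: 1M documents, each error averaging $50k in exposure.
human = expected_cost(1_000_000, error_rate=0.001,
                      cost_per_error=50_000, review_cost_per_doc=1.00)
ai    = expected_cost(1_000_000, error_rate=0.010,
                      cost_per_error=50_000, review_cost_per_doc=0.01)

print(f"human review: ${human:,.0f}")  # labor dominates, errors are rare
print(f"ai filter:    ${ai:,.0f}")     # labor is cheap, errors dominate
```

Under these (made-up) assumptions the cheaper-but-sloppier filter loses badly, which is the point: "faster" only wins if the error rate stays comparable.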

> I just want efficiency.

Asking for efficiency from the court system is like asking for speed from geology. That's not typically where you find it, and if you do, you're probably about to have a bad time.

The way you actually get efficiency is by having a larger number of smaller companies, so they're not massive vertically integrated conglomerates that need something the size and speed of the US government to hold them in check.