gmueckl 3 days ago

A human can understand and process arguments outside the bounded input domain of automated classification systems.

sidewndr46 3 days ago | parent [-]

They can, but what incentive would they have to do so? They are probably measured on the number of cases they close, and the fastest way to close a case is to agree with the conclusions of the algorithm.

gmueckl 2 days ago | parent [-]

My take on this is that telling a human reviewer to stick to a decision made by an automated process is actually against the law: some independence on the part of the reviewer is implicitly required by the spirit of the regulation.

Naturally, IANAL, and such a claim would have to be tested in court, assuming it were a viable argument in the first place.

sidewndr46 2 days ago | parent [-]

Almost certainly it is, especially if done in writing. But it's pretty easy to do in practice. First you do it verbally, by suggesting the system rarely makes mistakes: the role of the employee is to double-check the system's work, not to second-guess it, obviously. Second, just lay off or transfer anyone who doesn't side with the algorithm most of the time.