jqpabc123 2 days ago:
Computer code is highly deterministic. This allows it to be tested fairly easily. Unfortunately, code production is not the only use case for AI.

Most things in life are not as well defined; they're a matter of judgment. AI is being applied in lots of real-world cases where judgment is required to interpret results. For example: "Does this patient have cancer?" And it is fairly easy to show that AI's judgment can be highly suspect.

There are often legal implications for poor judgment, i.e. medical malpractice. Maybe you can argue that this is a misapplication of AI, and I don't necessarily disagree. But the point is, once the legal system makes this abundantly clear, the practical business case for AI is going to be severely reduced if humans still have to vet the results in every case.
hnfong a day ago (in reply):
Why do you think AI is inherently worse than humans at judging whether a patient has cancer, assuming it is given the same information as the human doctor? Is there some fundamental limitation that makes AI worse, or are you simply projecting your personal belief (trust) in human doctors?

(Note that given the speed of progress in AI, and that we're talking about what the law ought to be, not what it was in the past, the past performance of AI on cancer cases doesn't have much relevance unless a fundamental issue with AI is identified.)

Note also that whether a person has cancer is generally well defined, even if it isn't obvious at first. If you just let the patient go untreated, you'll know the answer quite definitively in a couple of years.