mrjay42 | 2 hours ago
I'm not especially defending AI, but isn't this like that time a professor changed content on Wikipedia to play a big 'gotcha' on his students? Instead of proving that Wikipedia is "bad", the professor didn't realize he'd proved that Wikipedia works as intended: if you write something wrong on Wikipedia, it will eventually be corrected (yes, that can take a long time, I know).

As for this article in Nature: if you feed an AI incorrect information, it's going to spit it back at you. When did we ever claim that AI was self-correcting?

By the same logic, imagine we taught kids something false, as an experiment of course. Then we wait a while, and years later we measure how many of those people still repeat the false information they were taught. If we then wrote a paper saying "oh, look how dumb these people are", wouldn't that be a little unfair? Even unscientific?