dylan604 7 hours ago

For systems that learn in real time, is there a way for humans to know or understand how and why the system came to the conclusion it did? There are examples of humans running experiments that reached a conclusion for the wrong reasons. If an AI system thinks it knows the answer for the wrong reason, wouldn't that poison its reasoning later as well? Can an AI system learn that its reasoning is wrong and then update it when given better evidence? That seems to be something the vast majority of humans cannot do.
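One way to picture the "update on better evidence" question is Bayesian belief revision. Below is a minimal sketch, not anything a specific AI system actually does: a toy learner whose early observations happen to favor a wrong (spurious) explanation, and whose posterior shifts to the true one once contradicting evidence arrives. The hypothesis names and likelihood numbers are all made up for illustration.

```python
# Toy Bayesian belief revision: a learner that initially "knows the answer
# for the wrong reason" and corrects itself as better evidence arrives.
# Hypotheses and likelihoods are hypothetical, chosen only for illustration.

# Two competing explanations for the same observations:
#   "spurious": a coincidental correlation that fits the early data well
#   "causal":   the true mechanism
priors = {"spurious": 0.5, "causal": 0.5}

# P(observation | hypothesis). The spurious hypothesis explains the "match"
# events seen early on, but assigns low probability to the "mismatch" events
# that a better-controlled test later produces.
likelihood = {
    "spurious": {"match": 0.9, "mismatch": 0.1},
    "causal":   {"match": 0.7, "mismatch": 0.3},
}

def update(beliefs, observation):
    """One Bayesian update: posterior is proportional to prior * likelihood."""
    unnormalized = {h: p * likelihood[h][observation] for h, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

beliefs = dict(priors)

# Early evidence happens to favor the wrong explanation...
for obs in ["match"] * 3:
    beliefs = update(beliefs, obs)
print("after early evidence:", beliefs)   # spurious hypothesis dominates (~0.68)

# ...then contradicting evidence arrives, and the posterior flips.
for obs in ["mismatch"] * 8:
    beliefs = update(beliefs, obs)
print("after better evidence:", beliefs)  # causal hypothesis dominates (~0.9997)
```

The catch the comment points at: this self-correction only works if the system keeps assigning nonzero probability to alternatives and keeps receiving evidence that can discriminate between them. A learner that commits fully to the wrong hypothesis (probability 1) can never update away from it, which is one formal version of "poisoned reasoning."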