adastra22 6 days ago
FYI this post comes off as incredibly pretentious. You think we haven’t read the same philosophy? This isn’t about epistemology. We are talking about psychology. What does your brain do when we “reason things out”? Not “can we know anything anyway?” or “what is the correlation between the map and the territory?” or anything like that. Just “what is your brain doing when you think you are reasoning?” and “is what an LLM does comparable?” Philosophy doesn’t have answers for questions of applied psychology.
viccis 4 days ago | parent
> FYI this post comes off as incredibly pretentious. You think we haven’t read the same philosophy?

Rigorous language often comes across as pretentious to a layperson, especially in a subject like philosophy. I don't know what philosophy you've read, but in my experience it's a safe assumption that most AI practitioners do not own a well-creased copy of the Critique of Pure Reason.

> This isn’t about epistemology. We are talking about psychology. What does your brain do when we “reason things out”?

The only way to compare what our brains do (psychologically or neurologically) with what LLMs or other models do when we "reason things out" is via epistemology, which is to say, by asking "how is it possible to reason that out?" Asking how our brains do it psychologically or neurologically is not really relevant, because LLMs are not built the way our brains are.

> Philosophy doesn’t have answers for questions of applied psychology.

I agree that expecting philosophy to have "answers" for topics that include metaphysical questions is unreasonable, yes. But even bringing up "psychology" when discussing generative probability models is unhelpful anthropomorphization.