viccis 6 days ago

>As far as I can tell “generating context” is exactly what human reasoning is too.

This was the view of Hume (humans as bundles of experience who just collect information and make educated guesses about everything). Unfortunately, it leads to philosophical skepticism, in which you can't ground any knowledge absolutely: each piece of knowledge is justified only by some other knowledge, which came from somewhere else in turn, and so on, until you can't actually justify any knowledge that isn't directly a result of experience (the principle that "every effect has a cause" is the classic example).

There have been plenty of epistemological responses to this viewpoint, the best known being Kant's: humans do a mix of "gathering context" (using our senses) and applying universal categories of reasoning to schematize, understand, and reason about the objects we sense.

I feel like anyone talking about the epistemology of AI should spend some time reading the basics of what history's greatest thinkers have said on the subject...

js8 6 days ago | parent | next

> I feel like anyone talking about the epistemology of AI should spend some time reading the basics

I agree. I think the problem with AI is that we don't know, or haven't formalized well enough, what epistemology AGI systems should have. Instead, people are looking for shortcuts, feeding huge amounts of data into the models and hoping they will self-organize into something that humans actually want.

viccis 6 days ago | parent

It's partly driven by a hope that if you can model language well enough, you'll then have a model of knowledge. Logical positivism tried that with formal logical systems, which are much more precise languages for expressing facts, and it still fell on its face.

adastra22 6 days ago | parent | prev

FYI, this post comes off as incredibly pretentious. You think we haven’t read the same philosophy?

This isn’t about epistemology. We are talking about psychology. What is your brain doing when you “reason things out”? Not “can we know anything anyway?” or “what is the correlation between the map and the territory?” or anything like that. Just “what is your brain doing when you think you are reasoning?” and “is what an LLM does comparable?”

Philosophy doesn’t have answers for questions of applied psychology.

viccis 4 days ago | parent

>FYI, this post comes off as incredibly pretentious. You think we haven’t read the same philosophy?

Rigorous language often comes across as pretentious to a layperson, especially when it concerns subjects like philosophy. I don't know what philosophy you've read, but in my experience it's a pretty safe assumption that most AI practitioners do not own a well-creased copy of the Critique of Pure Reason.

>This isn’t about epistemology. We are talking about psychology. What is your brain doing when you “reason things out”?

The only way to compare what our brain does (psychologically or neurologically) with what LLMs or other models do when we "reason things out" is via epistemology, which is to say by asking "how is it possible to reason that out?" Asking how our brains do it psychologically or neurologically is not really relevant, as LLMs are not designed the way our brains are.

>Philosophy doesn’t have answers for questions of applied psychology.

I think that expecting philosophy to have definitive "answers" for topics that include metaphysical questions is unreasonable, yes. But even bringing up "psychology" when discussing generative probability models is unhelpful anthropomorphization.