dr_dshiv 7 days ago
Good old fashioned AI, amirite
mindcrime 7 days ago | parent
Well, to the extent that people equate GOFAI with purely symbolic / logic-based processing, then no, not for my money anyway. I think it's possible to construct systems that use elements of symbolic processing along with sub-symbolic approaches and get useful results. I think of it (though this is something of an over-simplification) as taking symbolic reasoning, relaxing some of the constraints that go along with the guarantees that method makes about its outputs, and accepting a (hopefully only slightly) less desirable output.

Or, flip the whole thing around: get an output from, say, an LLM, where there might be hallucinations, and then use a symbolic reasoning system to post-process that output and check its veracity before sending it to the user. Amazon has done some work along those lines, for example: https://aws.amazon.com/blogs/machine-learning/reducing-hallu...

Anyway, this is all somewhat speculative, and I don't want to overstate the "weight" of anything I seem to be claiming here. This is just the direction my interests and inclinations have taken me in.
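To make the "symbolic post-check on LLM output" idea concrete, here's a minimal toy sketch. Everything in it is an assumption for illustration: the claim format, the `extract_claims` parser, and the tiny fact table stand in for whatever real claim extraction and knowledge base a production system (such as the Amazon work linked above) would use.

```python
# Toy sketch: verify an LLM's factual claims against a symbolic
# knowledge base before releasing the answer to the user.
# NOTE: the KB contents and the "entity|attribute|value" claim
# format are made up for this example.

KNOWLEDGE_BASE = {
    ("water", "boiling_point_c"): "100",
    ("gold", "symbol"): "Au",
}

def extract_claims(llm_output: str):
    """Parse claims of the form 'entity|attribute|value', one per line."""
    claims = []
    for line in llm_output.strip().splitlines():
        entity, attr, value = (part.strip() for part in line.split("|"))
        claims.append((entity, attr, value))
    return claims

def verify(claims):
    """Return the claims that contradict the knowledge base.

    Claims about facts the KB doesn't cover pass through unchecked --
    this relaxation is exactly the weakened guarantee described above.
    """
    failures = []
    for entity, attr, value in claims:
        expected = KNOWLEDGE_BASE.get((entity, attr))
        if expected is not None and expected != value:
            failures.append((entity, attr, value, expected))
    return failures

def guarded_answer(llm_output: str) -> str:
    """Only release the LLM's output if no claim fails verification."""
    failures = verify(extract_claims(llm_output))
    if failures:
        return f"Withheld: {len(failures)} claim(s) failed verification"
    return llm_output
```

So `guarded_answer("water|boiling_point_c|100")` passes the output through, while `guarded_answer("gold|symbol|Ag")` withholds it. The interesting design questions are all hidden in the stubs: how you extract checkable claims from free text, and what to do with claims the symbolic side can't evaluate at all.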