miningape a day ago
What? What does philosophy have to do with anything? If you cannot agree on the correct interpretation, nor on the output, what stops an LLM from solving the wrong problem? What stops an LLM from "compiling" the incorrect source code? What even makes it possible for us to solve a problem? If I ask an LLM to add a column to a table and it drops the table, that's a critical failure - not something to be reinterpreted as a "new truth".

Philosophical arguments are fine when it comes to loose concepts like human language (interpretive domains). Computer languages, on the other hand, are precise and not open to interpretation (formal domains) - so philosophical arguments cannot be applied to them, only to the human interpretation of code.

It's like how mathematical "language" (again, a formal domain) describes precise rulesets (axioms), and every "fact" (theorem) is derived from them. You cannot philosophise your way out of the axioms being the base units of expression, and you cannot philosophise a theorem into falsehood - instead you must show, in precise mathematical language, that the claimed theorem contradicts the axioms. This is exactly why programming, like mathematics, is a domain where correctness is objective and not something that can be waved away with philosophical reinterpretation.

(This is also why the philosophy department is kept far away from the mathematics department.)
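To make the table example concrete, here is a minimal sketch (not from the original thread, using Python's built-in sqlite3 with a hypothetical `users` table): in a formal domain like SQL, "add a column" and "drop the table" have fixed, objectively different semantics, and no reinterpretation can make one count as the other.

```python
import sqlite3

# A throwaway in-memory database with a hypothetical `users` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# The requested change: add a column. The table and its data survive.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
rows = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert cols == ["id", "name", "email"]
assert rows == 1  # existing data intact

# The critical failure mode: drop the table. The data is objectively gone.
conn.execute("DROP TABLE users")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
assert "users" not in tables
```

Whether the second outcome is a "critical failure" is not a matter of interpretation; the database's state after each statement is fully determined by the language's semantics.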
dworks a day ago | parent
Looks like you misunderstood my comment. My point is that both the input and the output are too fuzzy for an LLM to be reliable in an automated system.

"Truth is one of the central subjects in philosophy." - https://plato.stanford.edu/entries/truth/