| ▲ | kybernetikos 13 hours ago |
| They are a mathematical function found by a search designed to find functions that produce the same output as conscious beings writing meaningful works. |
|
| ▲ | fyredge 13 hours ago | parent [-] |
| Agreed, and to that point, the way to produce such outputs is to absorb a large corpus of words and find the most likely prediction that mimics written language. By virtue of the sheer amount of text it learns from, would you say that the output tends toward the average response for the text provided? After all, "overfitting" is a well-known concept that ML researchers avoid as a matter of principle. What else could be the case? |
| ▲ | kybernetikos 7 hours ago | parent [-] |
| I think 'average' is creating a bad intuition here. In order to accurately predict the next word in a human-generated text, you need a model of the big picture of what is being said. You need a model of what is real and what is not real. You need a model of what it's like to be a human. The number of possible texts is enormous, which means you can't say "there are lots of texts that start with the same 50 tokens, so I'll average the 51st token that appears in them to work out what I should generate". The subspace of human-generated texts within the space of all possible texts is extremely sparse, and 'averaging' isn't the best way to think of the process. |
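The counting argument in that comment can be sketched numerically. The specific figures below (a ~50,000-token vocabulary and a ~10^13-token training corpus) are assumed order-of-magnitude values for illustration, not numbers from the thread:

```python
# Back-of-the-envelope sketch of why literal prefix-averaging is impossible.
# Assumed figures (typical orders of magnitude, not from the thread):
import math

vocab_size = 50_000      # assumed vocabulary size
prefix_len = 50          # prefix length from the comment above
corpus_tokens = 10**13   # assumed training-corpus size, in tokens

# Number of distinct 50-token prefixes, expressed as a power of 10:
log10_prefixes = prefix_len * math.log10(vocab_size)
print(f"possible 50-token prefixes: ~10^{log10_prefixes:.0f}")
# → possible 50-token prefixes: ~10^235

# Even an enormous corpus contains at most ~corpus_tokens prefixes,
# so the fraction of possible prefixes ever observed is vanishing:
log10_seen_fraction = math.log10(corpus_tokens) - log10_prefixes
print(f"fraction of prefixes ever observed: ~10^{log10_seen_fraction:.0f}")
# → fraction of prefixes ever observed: ~10^-222
```

With ~10^235 possible prefixes and at most ~10^13 of them appearing in any corpus, almost no 50-token prefix ever repeats, so there is nothing to average over: the model has to generalize from structure, not tally continuations of identical prefixes.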
|