| ▲ | fpgaminer 3 hours ago |
> Is every new thing not just combinations of existing things?

If all ideas are recombinations of old ideas, where did the first ideas come from? And wouldn't the complexity of ideas then be limited to the combined complexity of the "seed" ideas?

I think it's fairer to say that recombining ideas is an efficient way to quickly explore a very complex, high-dimensional space. In some cases that's enough to land on new, useful ideas, but not always: A) the new, useful idea might be _near_ the point you land on, but not exactly at it; B) there are whole classes of new, useful ideas that cannot be reached by any combination of existing "idea vectors". So there is still a need to explore the space manually, even if these idea vectors give you starting points to explore from.

All this to say: every new thing is a combination of existing things plus sweat and tears. The question everyone has is whether current LLMs are capable of that second component. Historically the answer was _no_, because they had no real capacity to iterate, and without iteration you cannot explore. But now that they can reliably iterate, and to some extent plan their iterations, we are starting to see their first meaningful, fledgling attempts at the "sweat and tears" part of building new ideas.
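The span argument in B) can be sketched numerically, if you take the "idea vectors" analogy literally: combinations of seed vectors can only reach points in their span, so a target with a component outside that span is unreachable no matter how the coefficients are chosen. The vectors here are invented for illustration, not taken from anything in the thread.

```python
import numpy as np

# Two "seed idea" vectors spanning only the xy-plane of a 3-D space.
seeds = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

# A "new idea" with a component off that plane.
target = np.array([0.3, 0.4, 1.0])

# Least squares finds the combination of seeds closest to the target.
coeffs, _, _, _ = np.linalg.lstsq(seeds.T, target, rcond=None)
approx = coeffs @ seeds

print(approx)           # nearest reachable point: case A), "near but not at"
print(target - approx)  # leftover component no combination removes: case B)
```

The leftover component is exactly the part of the target orthogonal to every seed, which is why case B) requires exploring off the span rather than tuning coefficients.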
| ▲ | drdeca an hour ago |
Well, what exactly an “idea” is might be a little unclear, but I don’t think it’s clear that the complexity of ideas that result from combining previously obtained ideas is bounded by the complexity of the ideas they are combinations of. Any countable group is a quotient of a subgroup of the free group on two generators, iirc.

There’s also the concept of “semantic primes”. Here is a not-quite-correct oversimplification of the idea: suppose you go through the dictionary and, one word at a time, pick a word whose definition uses only words that are still in the dictionary, and remove it. You may also rephrase definitions before doing this, as long as the meaning is preserved. Suppose you do this with the goal of leaving as few words in the dictionary as you can. In the end, you should be left with a small cluster of a bit over 100 words, in terms of which all the words you removed can be indirectly defined. (The idea of semantic primes also says that there is such a minimal set which translates essentially directly between different natural languages.)

I don’t think that says that words for complicated ideas aren’t, like, more complicated?
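The dictionary-reduction thought experiment above can be sketched as a small greedy procedure. The toy dictionary here is invented for illustration (real semantic-prime inventories look nothing like it), and "rephrasing" is modeled crudely as inlining a removed word's definition into the definitions that mention it, which keeps the remaining definitions valid.

```python
# Toy dictionary: each word maps to the list of words in its definition.
# Empty lists mark words we treat as primitive in this sketch.
toy_dict = {
    "run": [], "fast": [], "slowly": [],
    "sprint": ["run", "fast"],
    "jog": ["run", "slowly"],
    "dash": ["sprint"],          # defined via another defined word
}

def reduce_dictionary(d):
    """Remove definable words one at a time, inlining their definitions."""
    d = {w: list(defn) for w, defn in d.items()}  # mutable copy
    for word in list(d):
        if not d[word]:
            continue             # primitive: keep it
        defn = d.pop(word)
        # "Rephrase" the remaining definitions so they no longer
        # depend on the removed word.
        for other in d:
            d[other] = [t2 for t in d[other]
                        for t2 in (defn if t == word else [t])]
    return d

print(reduce_dictionary(toy_dict))  # only the primitive core survives
```

After the reduction, "dash" is indirectly defined as ["run", "fast"] even though its original definition only mentioned "sprint", which mirrors how removed words stay definable in terms of the surviving cluster.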
| ▲ | red75prime 2 hours ago |
"Sweat and tears" -> exploration plus a training signal for reinforcement learning.
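That mapping can be made concrete with a toy epsilon-greedy bandit, a minimal sketch rather than anything from the thread: random exploration plays the "sweat and tears" role, and the reward is the training signal that reinforces what exploration happens to find. The payoff probabilities are invented.

```python
import random

random.seed(0)
true_payoff = [0.2, 0.8]            # arm 1 is actually better, but unknown
estimates, counts = [0.0, 0.0], [0, 0]

for step in range(2000):
    if random.random() < 0.1:       # explore: try something at random
        arm = random.randrange(2)
    else:                           # exploit: use what the signal taught
        arm = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0
    counts[arm] += 1
    # Running-mean update: the reward signal shapes future choices.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # arm 1's estimate ends up clearly higher
```

Without the exploration branch the agent can lock onto whichever arm it tried first; without the reward update the exploration is never consolidated, which is the point of the comment's pairing.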