Terretta 14 hours ago
> we do both text and voice (roughly 70% of data collection is typed, 30% spoken). Partly this is to make sure the model is learning to decode semantic intent (rather than just planned motor movements)

Both of these modes are incredibly slow thinking. Consciously shifting from thinking in concepts to thinking in words is like slamming on the brakes for a school zone on an autobahn.

I've gathered that most people think in words they can "hear in their head", that most people can "picture a red triangle" and literally see one, and so on. Many folks who are multilingual say they think in a language, or dream in that language, and know which one it is.

Meanwhile, some people think less verbally or less visually, perhaps not verbally or visually at all, with no language (words) involved.

A blog post shared here last month discussed a person trying to access this conceptual mode, which he thinks is like "shower thoughts", or physicists solving things in their heads while staring into space, except "under executive function". He described most of his thoughts as words he can hear in his head, with these concepts more like vectors. I agree with that characterization.

I'm curious what % of the folks you've scanned may be in this non-word mode, or whether the text and voice requirement forces everyone into words.
clemvonstengel 14 hours ago | parent
I agree that thinking in words is much slower than thinking in concepts would be -- that's the point of training models like this, so that ideally people can always just think in concepts. That said, we need some kind of ground truth for what they're thinking in order to train the model, so we do need them to communicate it (in words).

One thing that's particularly exciting here is that the model often gets the high-level idea right without getting any of the words right (as in some of the examples above), which suggests that it is picking up the idea rather than the particular words.
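To make that concrete, here's a minimal sketch (my own illustration, not the actual training or evaluation pipeline; the embedding model and the example sentences are assumptions) of how a decode can score zero on word overlap while still landing on the right idea, using off-the-shelf sentence embeddings:

    # Sketch only: scores a decoded sentence against its ground-truth
    # transcript two ways. Word overlap punishes any paraphrase;
    # embedding cosine rewards getting the idea right.
    # The embedding model choice (all-MiniLM-L6-v2) is an assumption.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def word_overlap(pred: str, truth: str) -> float:
        """Fraction of ground-truth words that also appear in the prediction."""
        p, t = set(pred.lower().split()), set(truth.lower().split())
        return len(p & t) / max(len(t), 1)

    def semantic_similarity(pred: str, truth: str, encoder) -> float:
        """Cosine similarity between the two sentence embeddings."""
        a, b = encoder.encode([pred, truth])
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    truth = "I'm getting hungry, let's grab some lunch"
    pred = "I want to eat, we should find food soon"  # hypothetical decode

    print(word_overlap(pred, truth))                  # 0.0: no shared words
    print(semantic_similarity(pred, truth, encoder))  # much higher: same idea

An evaluation along these lines is what separates "the decoder learned the words" from "the decoder learned the meaning".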