lossolo 4 hours ago
What's funny is that most of this "progress" is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.
Davidzheng 4 hours ago
"post-training shaping the models behavior" it seems from your wording that you find it not that dramatic. I rather find the fact that RL on novel environments providing steady improvements after base-model an incredibly bullish signal on future AI improvements. I also believe that the capability increase are transferring to other domains (or at least covers enough domains) that it represents a real rise in intelligence in the human sense (when measured in capabilities - not necessarily innate learning ability) | |||||||||||||||||
WarmWash 4 hours ago
> There is no moat besides that.

Compute. Google didn't announce $185 billion in capex to do cataloguing and flash cards.
riku_iki an hour ago
> is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.

Sure, but acquiring/generating/curating that much high-quality data is still a significant moat.