▲ embedding-shape 2 hours ago
> I’m not even sure whether this is possible. Based on what’s happened so far, maybe. At least that’s exactly how we got to the current iteration back in 2022/2023: quite literally, “let’s see what happens when we throw an enormous amount of data at them during training” worked up until a point, and then post-training seems to have taken over, which is where labs currently differ.
▲ drob518 an hour ago
Right, but we played the scaling card, and it worked, but it is now reaching its limits. What is the next card? You can surely argue that we can find a new one at any time; that’s the definition of a breakthrough. I just don’t see one at the moment.