nojs 13 hours ago
I think the author is missing the point of why these forecasts put so much weight on software engineering skills. It's not because it's a good measure of AGI in itself; it's because it directly impacts the pace of further AI research, which leads to runaway progress. Claiming that the AI can't even read a child's drawing, for example, is therefore not super relevant to the timeline, unless you think that's fundamentally never going to be possible.
croes 10 hours ago | parent
Or you just reach the limit faster. Research is like a maze: going faster down the wrong track doesn't bring you to the exit.
refulgentis 12 hours ago | parent
If I gave OpenAI 100K engineers today, would that significantly accelerate their model quality? I generally assumed ML was compute-constrained, not code-monkey-constrained. That is, I'd probably tell my top N employees they had more room for experiments rather than hire employee N + 1, at some critical value N > 100 and N << 10000.