▲ | boxed 14 hours ago
> This is important because I would guess that software engineering skills overestimate total progress on AGI because software engineering skills are easier to train than other skills. This is because they can be easily verified through automated testing so models can iterate quite quickly. This is very different from the real world, where tasks are messy and involve low feedback — areas that AI struggles on.

Tell me you've never coded without telling me you've never coded.
▲ | nopinsight 13 hours ago | parent | next [-]
> software engineering skills are easier to train than other skills.

I think the author meant that it's easier to train (reasoning) LLMs on [coding] skills than on most other tasks, and I agree. Data abundance, near-immediate feedback, and near-perfect simulators are why we've seen such rapid progress on most coding benchmarks so far. I'm not sure whether he included high-level software engineering skills, such as designing the right software architecture for a given set of user requirements, in that statement.

---

For humans, I think the fundamentals of coding come very naturally to people with certain mental traits, although those traits are obviously not the norm (which helps explain the high wages for some software engineers). Coding in large, practical software systems is indeed much more complex, with all its inherent and accidental complexity. The latter helps explain why AI agents for software engineering will require some human involvement until we actually reach full-fledged AGI.
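To make the "near-immediate feedback" point concrete, here is a rough sketch (not from the article; the function name and the sample tests are made up) of the kind of automated check that gives a coding training loop its reward signal: generate code, run the tests, score pass/fail.

    # Toy verifier: run a candidate solution against its tests in a
    # subprocess and return a binary reward. Illustrative only.
    import subprocess
    import sys
    import tempfile

    def verify_candidate(candidate_src: str, test_src: str, timeout: float = 5.0) -> float:
        """Return 1.0 if the candidate passes all tests, else 0.0."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_src + "\n" + test_src)
            path = f.name
        try:
            proc = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
            return 1.0 if proc.returncode == 0 else 0.0
        except subprocess.TimeoutExpired:
            return 0.0

    # A model-proposed solution and the tests that grade it.
    candidate = "def add(a, b):\n    return a + b\n"
    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
    print(verify_candidate(candidate, tests))  # 1.0 -> reward, 0.0 -> no reward

Messy real-world tasks have no such cheap, unambiguous grader, which is exactly the asymmetry being discussed.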
▲ | zurfer 11 hours ago | parent | prev [-]
The same is true for chess: easy for computers, hard for humans. Every smartphone has enough compute to beat the best human chess player in the world.
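For a sense of how little compute that takes, here is a rough sketch using the python-chess library; it assumes a Stockfish binary is installed locally (the path below is a guess), and a 100 ms budget per move is already enough for superhuman play on modern hardware.

    # Ask a locally installed UCI engine for a move on a tiny time budget.
    # The Stockfish path is an assumption; adjust it for your system.
    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
    board = chess.Board()
    result = engine.play(board, chess.engine.Limit(time=0.1))  # 100 ms per move
    print(result.move)
    engine.quit()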