patapong 5 hours ago
I think this is a very important debate, and the author here adds a lot to the discussion! I mostly agree with it, but wanted to point out a few areas where I do not fully agree.

> Take away the agent, and Bob is still a first-year student who hasn't started yet.

This may be true, but I can see almost no conceivable world where the agent will be taken away. I think we should evaluate Bob's ability based on what he can do with an agent, not without, and on that measure he seems to be doing quite well.

> I've been hearing "just wait" since 2023.

On almost any timeline, this is very short. Given that we have already arrived at models able to build nearly complete computer programs from a single prompt and solve frontier-level math problems, I think any framework that relies on humans continuing to have an edge over LLMs in the medium term may be built on shaky ground.

Two questions in this vein seem especially interesting to me today:

- Is having students carry out simple tasks still the best way to teach them complex topics? The author acknowledges that the difference between Bob and Alice only materializes at a very high level, basically when Alice becomes a PI of her own. If we were solely focused on teaching thinking at that level (with access to LLMs), how would we frame the educational path? It may look exactly like it does now, but it could also look very different.

- Is there inherent value in humans learning specific skills? If we get to a stage where LLMs can carry out most or all intellectual tasks better than humans, do we still want humans to learn these skills? My belief is yes, but I am frankly not sure how to motivate this answer.
ThrowawayR2 an hour ago | parent
> "no conceivable word where the agent will be taken away" LLM access is a paid service. HN concerns itself with inequality constantly and it's not inconceivable that some individuals get ahead because they can afford to pay for more tokens and better models than those who are poorer. | ||