▲ | burningion a day ago | |
So I think there's an assumption you've made here, that the models are currently "60-80% as good as human programmers". If you look at code being generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% of the output of domain experts (programmers) steering the models. I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems. Will an LLM model read a person's mind about what they want to build better than they can communicate? That's already what recommender systems (like the TikTok algorithm) do. But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering? I think that's where there's a gap in (basically) belief systems of the future. If we truly get post human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless. This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us. |