iExploder 2 days ago
If you think about it, the resource-wasteful pre-LLM approach didn't really make sense: thousands of people re-implementing similar use cases over and over. LLMs are like a global open-source repository of everything known to mankind, with search on steroids. We can never go back, if only for this one reason (imagine how many hours of people's lives were lost implementing the same function, or yet another CRUD app)...

So if we can't go back, what's next? The paradigm is shifting from us deciding how to do something to deciding what to do, maybe by writing requirements and constraints and letting AI figure out the details. The skill will be in using precise language with the AI to get the desired behavior. Old code as we know it is garbage; the new code is the requirements and constraints themselves. So in a sense we will not be reviewers, architects, or designers, but writers of requirements and use cases in a specific LLM language, which will have its own, different challenges.

There might still be a place for the cream of the crop, the mega-talented people who solve new puzzles (still with AI assistance) in order to generate new input knowledge to train LLMs.
mfalcon a day ago
I think we will still be reviewers too; we have to know whether the AI-generated artifact does what we want.