jennyholzer3 · 10 hours ago
I don't know about all this AI stuff. How are LLMs going to stay on top of new design concepts, new languages, really anything new? Can LLMs be trained to operate "fluently" with a genuinely new concept? I think LLMs are good for writing certain kinds of "bad code", e.g. when you're learning a new language or quickly throwing together a prototype. But to me it seems like a security risk to try to write "good code" with an LLM.
sgk284 · 10 hours ago · parent | next
I suspect it will still fall on humans (with machine assistance?) to move the field forward and innovate, but when it comes to picking up genuinely new concepts, LLMs tend to be pretty nimble (in my experience), especially with the massive context windows modern models have. The core idea the GPT-3 paper introduced was (summarizing): a sufficiently large model can learn a new task in-context, from a few examples placed directly in the prompt, with no weight updates at all.
rabf · 9 hours ago · parent | prev | next
You do realise they can search the web? They can read documentation and API specs?
manmal · 9 hours ago · parent | prev
They are retrained every 12-24 months and constantly get new or updated reinforcement-learning layers. New concepts are not the problem. The problem is outdated information in the training data, like the crappy old Postgres syntax that dominates the Stack Overflow corpus.
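As a concrete instance of the stale-training-data problem (my example, not the commenter's): older Stack Overflow answers overwhelmingly use the legacy `SERIAL` pseudo-type, while Postgres 10+ recommends SQL-standard identity columns. The DDL strings and the naive checker below are illustrative only:

```python
# Old pattern, common in pre-2017 Stack Overflow answers:
OLD_STYLE = "CREATE TABLE users (id SERIAL PRIMARY KEY, name text);"

# Modern Postgres (10+) equivalent using a standard identity column:
NEW_STYLE = ("CREATE TABLE users "
             "(id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY, name text);")

def uses_outdated_serial(ddl: str) -> bool:
    """Naively flag the legacy SERIAL pseudo-type in a DDL statement."""
    return "SERIAL" in ddl.upper().split()

print(uses_outdated_serial(OLD_STYLE))   # the legacy pattern is flagged
print(uses_outdated_serial(NEW_STYLE))   # the identity-column form is not
```

A model trained mostly on the older corpus will keep emitting `SERIAL` long after the ecosystem has moved on, even though both forms still run.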