ben_w | 7 months ago
For me, even having an infinitely patient intern who is quite often wrong is useful. Even when they're only acting as competently as a university student, they're doing that across dozens of subjects I stopped studying at 16, plus some more I stopped at 18*, and they know at least something about subjects I never studied at all, like the Swedish or Oromo languages, agriculture, or the history of Djibouti. I have yet to find a professional use for all this breadth of knowledge, but the fact that they're so general means I don't have to worry: just ask and see what they come up with.

Even in my own domain, software development, I can take their mediocre code as a starting point to build on, because the better LLMs know all the different libraries meant to help with different tasks in the main languages, all the things that have changed in Python or CSS since I last used them professionally, even a few things I never learned about iOS despite having been in that sub-field since 2010. And sure, the code ChatGPT produces is merely OK, not great, but I can fix that up, just as I can work with humans whose code or architecture I don't approve of**. And when I think the AI is wrong about code, 1 time in 10 it's actually me who made the mistake, so it's a learning opportunity.

Back in 2004–2005, I was an intern; if I remember right, I was paid £1000/month. If I really was useful at that point, I would expect the best LLMs today to be similarly valuable (inflation-adjusted) despite their much lower cost.

* https://en.wikipedia.org/wiki/GCSE and https://en.wikipedia.org/wiki/A-level

** Well… mostly. There have been a few humans so bad that I gave up in disgust, for example because they were duplicating files instead of sub-classing and then not understanding the criticism they got for doing so. Even then it took me over a year to give up on them, despite the warning signs being visible in the first week.