| ▲ | pron 12 hours ago |
| > This is the future. It is inevitable.
| "This" does a lot of unjustifiable work here. "This" refers to your successful experience, which, I assume, involved a program no larger than a few tens of thousands of lines of code, if that, and it saved you only a few hours of work. The future you're referring to, however, is an extrapolation of "this", where a program writes arbitrary programs for us. Is that future inevitable? Possibly, but it's not quite "this", as we can't yet do that, we don't know when we'll be able to, and we don't know that LLMs are what gets us there.
| But if we're extrapolating from relatively minor things we can do today to big things we could do in the future, I would say that you're thinking too small. If program X could write program Y for us, for some arbitrary Y, why would we want Y in the first place? If we're dreaming about what may be possible, why would we need any program at all other than X?
| Saying that that is the inevitable future sounds to me like someone, at the advent of machines, saying that the inevitable future is one where machines automatically clean the streets after our horses, or perhaps one where we're carried everywhere on conveyor belts. Focusing on LLMs is like such a person saying that in the future, everything will inevitably be powered by steam engines. In the end, horses were replaced wholesale, but not by conveyor belts, and while automation carried on, it wasn't the steam engine that powered most of it. |
|
| ▲ | roxolotl 12 hours ago | parent | next [-] |
| Absolutely couldn't agree more. Incredibly useful tools are, in fact, incredibly useful. These discussions get clouded, though, when we intentionally ignore what's being said by those doing the investing. The inevitability here isn't that LLMs will save 30% of dev time and we'll get better software with fewer employees. It's that come 2030 (hell, there's even that 2027 paper), LLMs will be more effective than people at most tasks. Maybe at some point that'll happen, but looking at other normal technology[0] it takes decades.
| 0: https://knightcolumbia.org/content/ai-as-normal-technology |
| |
| ▲ | loudmax 11 hours ago | parent [-]
| Looking at the rollout of the internet, it did take decades. There was a lot of nonsensical hype in the dotcom era, most famously pets.com taking out an ad during the Super Bowl. Most of those companies burned through their VC money and went out of business. Yet here we are today: it's totally normal to get your pet food from chewy.com, and modern life without the internet is unimaginable.
| Today we see a clear path toward machines that can take on most of the intellectual labor that humans do. Scott Alexander's 2027 time frame seems optimistic (or pessimistic, depending on how you feel about the outcome). But by, say, 2037? The only way that vision of the future doesn't come true is an economic collapse that puts us back to 20th-century technology. Focusing on whether the technology is LLMs or diffusion models or whatever is splitting hairs. |
| ▲ | roxolotl 10 hours ago | parent [-]
| Timelines are relevant, though. Inevitability is only a useful proposition if the timeline is constrained. It is inevitable that the earth will be swallowed by the sun, but rightfully no one gives a shit. I think most people, even the author of this piece (aside from those who believe there's something fundamental about human intelligence that isn't reproducible), would say AI is inevitable on a long enough timeline. The arguments being made, though, are that AI is inevitable in the short term. Is 12 years short term? Maybe?
| Regardless, when we break down the timelines we start to enable useful conversations. It's one thing to argue with a frame of "over X period of time Y will happen". It's another to say "it's inevitable, so get on board". The author of this piece, myself, and many others are frustrated by the latter. |
|
| ▲ | jstummbillig 11 hours ago | parent | prev | next [-] |
| > where a program writes arbitrary programs for us
| That seems like a strange requirement, and I am not sure where you are getting it from. Programs are not arbitrary, and software design is something you will need to do at some level; you need to at least be able to describe the problem you are having, and getting that right has been the hardest part of software development for a long time. |
| |
| ▲ | pron 11 hours ago | parent [-]
| In this case, by "arbitrary" I meant anything we would ask of it. But I don't understand why a machine that is able to reliably write code would be unable to reliably design software. Currently, LLMs do neither, but if we're imagining what they could do some day, I don't know why we'd think they could do one but not the other. And a machine that can reliably write code can probably also reliably run a company as well as, if not better than, a human CEO. |
| ▲ | jstummbillig 10 hours ago | parent [-]
| Fair enough! I would wager that shaping what we ask of it will become more important and remain non-trivial, and that good software will integrate software design and company design beyond what it does today. Someone or something has to bring a vision and a reason why the thing is being done at all. I imagine that as long as taste exists, that will involve humans at some level. |
|
| ▲ | alanbernstein 12 hours ago | parent | prev [-] |
| Just try to imagine what you would have thought about this technology if you had seen it with no warning 10 years ago. Would "a few tens of thousands of lines of code" still seem small? |
| |
| ▲ | pron 11 hours ago | parent | next [-]
| I'm not saying it's not impressive or that it doesn't show great promise, but there are clearly challenges, and we don't yet know when or how they'll be solved. From some big LLM fans I've heard that one major problem is that of trust: unlike tools/machines, LLMs cannot be trusted to reliably succeed or fail in an obvious way; unlike people, LLMs cannot be trusted to communicate back useful feedback, such as important insights or pitfalls. So while in some respects LLMs are superior to both humans and existing automation, in others they're inferior to both. Maybe we'll be able to fix these problems within the current LLM technology, and maybe we'll be able to do that soon, but neither of these is obviously inevitable.
| My pet issue with one form of inevitability, as I mentioned above, is that if we get to a point where software can reliably write other software for us, then we're also at a point where we don't need any other software to actually be written, at least not in some human-readable form. There will just be one (kind of) program that does what we ask it to; why would we ask it to write programs? |
| ▲ | badRNG 11 hours ago | parent | prev [-]
| The OG ChatGPT was released less than three years ago. Prior to that, 20 lines of generated code would have seemed wild. Does anyone remember LeetCode? |
|