| ▲ | root_axis 12 hours ago |
| I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental. Even in the software world, the effect of being able to build software a lot faster isn't really leading to a fundamentally different software landscape. Yes, you can now pump out a month's worth of CRUD in a couple of days, but ultimately it's just the same CRUD, and there's no reason to expect that this will change because of LLMs. Of course, creative people with innovative ideas will be able to achieve more: a talented engineer will be able to embark on a project they didn't have the time to build before, and that will likely lead to some kind of software surplus that the economy feels on the margins. But in practical terms, the economy will continue to chug along at a sustained pace that's mostly in line with, e.g., economic projections from 10 years ago. |
|
| ▲ | jonas21 12 hours ago | parent | next [-] |
| > At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental. Even just a year ago, most people thought the practical effects in software engineering were incremental too. It took another generation of models and tooling to get to the point where it could start having a large impact. What makes you think the same will not happen in other knowledge-based fields after another iteration or two? |
| |
| ▲ | marcosdumay 11 hours ago | parent | next [-] | | > most people thought the practical effects in software engineering were incremental too Hum... Are you saying it's having a clear positive (never mind "transformative") impact somewhere? Can you point to any place where we can see observable, clear positive impact? | | |
| ▲ | Ozzie_osman a minute ago | parent | next [-] | | I know many companies that have replaced Customer Support agents with LLM-based agents. Replacing support with AI isn't new, but what is new is that the LLM-based ones have higher CSAT (customer satisfaction) rates than the humans they are now replacing (i.e., it's not just cost anymore... it's cost and quality). | |
| ▲ | vjvjvjvjghv 7 hours ago | parent | prev [-] | | It doesn’t need to provide “observable clear positive impact”. As long as the bosses think it improves numbers, it will be used. See offshoring or advertising everywhere. |
| |
| ▲ | root_axis 12 hours ago | parent | prev [-] | | Software is more amenable to LLMs because there is a rich source of highly relevant training data that corresponds directly to the building blocks of software, and the "correctness" of software is quasi-self-verifiable. This isn't true for pretty much anything else. | | |
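To make "quasi-self-verifiable" concrete, here is a minimal sketch (the `candidate_sort` implementation and its test cases are hypothetical stand-ins): the candidate is executed against tests and yields a mechanical pass/fail signal, with no human judgment in the loop.

```python
# Minimal sketch of software's quasi-self-verifiability: a candidate
# implementation (e.g. one an LLM just generated) can be run against tests,
# producing an objective accept/regenerate signal. Names here are hypothetical.

def candidate_sort(xs: list[int]) -> list[int]:
    """A candidate implementation, standing in for LLM output."""
    return sorted(xs)

def verify(impl) -> bool:
    """Machine-checkable correctness: run the candidate on fixed test cases."""
    cases = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5], [5, 5])]
    return all(impl(inp) == expected for inp, expected in cases)

print(verify(candidate_sort))  # True -> accept; False -> regenerate
```

The same trick doesn't transfer to, say, legal advice or industrial design, where no cheap executable oracle exists.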
| ▲ | dpflan 11 hours ago | parent | next [-] | | The more verifiable the domain, the better suited it is. We see similar reports of benefits in advanced mathematics research from Terence Tao; granted, some of those reports seem to amount to little more than that few people knew certain data relevant to the proof existed, while the LLM had it in its training corpus. Still, verifiably correct domains are well suited. So the concept of formal verification is as relevant as ever, and when building interconnected programs, the complexity rises and verifiability becomes more difficult. | | |
| ▲ | root_axis 10 hours ago | parent | next [-] | | > The more verifiable the domain, the better suited it is. Absolutely. It's also worth noting that in the case of Tao's work, the LLM was producing Lean and Python code. | |
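For flavor, this is roughly what that machine-checkable signal looks like in Lean (a trivial sketch, not drawn from Tao's actual work): the kernel either accepts the proof term or rejects it, the same binary verdict a test suite gives for code.

```lean
-- The Lean kernel either type-checks this proof or reports an error;
-- that verdict is the external verification signal an LLM can iterate against.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```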
| ▲ | 2001zhaozhao 10 hours ago | parent | prev [-] | | I think the solution in harder-to-verify cases is to give AI (sub-)agents a really good, detailed set of guidelines on what to do and how to think about, explore, and break down problems. Potentially tens of thousands of words of instructions, enough to get the LLM to act as a competent employee in the field. The models then need to be good enough at instruction-following to actually explore the problem in the right way and apply basic intelligence to solve it. Basically, treat the LLM as a competent general knowledge worker that is unfamiliar with the specific field, and give it detailed instructions on how to succeed in that field. For easy-to-verify fields like coding, you can train "domain intuitions" directly into the LLM (and some of that training should generalize to other knowledge work), but for other fields you would need to supply them in the prompt, since those abilities cannot be trained into the LLM directly. This will need better models, but it might become doable in a few generations. | | |
| ▲ | root_axis 6 hours ago | parent [-] | | > I think the solution in harder-to-verify cases is to give AI (sub-)agents a really good, detailed set of guidelines on what to do and how to think about, explore, and break down problems Using LLMs to validate LLMs isn't a solution to this problem. If the system can't self-verify, then there's no signal to tell the LLM that it's wrong. The LLM is fundamentally unreliable; that's why you need a self-verifying system to guide and constrain the token generation. |
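A minimal sketch of the loop root_axis is describing, where `llm_generate` is a hypothetical placeholder for a real model API: the external verifier (here, just executing the candidate and its asserts) supplies the error signal that the LLM cannot supply for itself.

```python
# Sketch of a generate-verify loop: the verifier, not the LLM, decides
# correctness. llm_generate is a hypothetical stand-in, not a real API.
import subprocess
import sys
import tempfile

def llm_generate(prompt: str, feedback: str = "") -> str:
    """Placeholder for a real code-generation call."""
    return "assert sorted([2, 1]) == [1, 2]"  # canned output for the sketch

def run_candidate(code: str) -> tuple[bool, str]:
    """Execute candidate code with its asserts; return (passed, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    result = subprocess.run([sys.executable, f.name],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def solve(prompt: str, max_attempts: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        code = llm_generate(prompt, feedback)
        ok, feedback = run_candidate(code)  # the self-verifying step
        if ok:
            return code
    return None  # without a verifier there is nothing to drive this loop

print(solve("write a sorting check") is not None)  # True
```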
|
| |
| ▲ | fc417fc802 11 hours ago | parent | prev [-] | | Presumably at some point capability will translate to other domains even if the exchange rate is poor. If it can autonomously write software and author CAD files then it can autonomously design robots. I assume everything else follows naturally from that. | | |
| ▲ | root_axis 6 hours ago | parent [-] | | > If it can autonomously write software and author CAD files then it can autonomously design robots. It can't because the LLM can't test its own design. Unlike with code, the LLM can't incrementally crawl its way to a solution guided by unit tests and error messages. In the real world, there are material costs for trial and error, and there is no CLI that allows every aspect of the universe to be directly manipulated with perfect precision. | | |
| ▲ | fc417fc802 3 hours ago | parent [-] | | You don't need perfect precision, just a sufficiently high-fidelity simulation. For example, the ability to carry out hypersonic weapon design computationally was the historical (pre-AI) reason for restricting the export of certain electronics to China. OpenAI demoed training a model for a robotic hand using this approach years ago. |
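As a toy illustration of simulation-as-verifier (hypothetical, and nothing like a real hypersonics or robotics code): once a simulator is trusted, it can score candidate designs the way a test suite scores software, with no material cost per trial.

```python
# Toy sketch: a "simulator" (here, the ideal projectile-range formula) serves
# as the verification oracle for a design search over launch angles.
import math

def simulate_range(angle_deg: float, v: float = 30.0, g: float = 9.81) -> float:
    """Idealized stand-in for a high-fidelity physics simulation."""
    return v ** 2 * math.sin(math.radians(2 * angle_deg)) / g

# Every candidate is evaluated purely in simulation; trial and error is free.
best = max(range(1, 90), key=simulate_range)
print(best, round(simulate_range(best), 1))  # 45 degrees maximizes range
```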
|
|
|
|
|
| ▲ | j33dd 7 hours ago | parent | prev | next [-] |
| Agreed. I also believe the impact on producing software is over-hyped, and that in the long term there will be a pull-back in the usage of these tools as the negative effects are figured out. The unfortunate truth (for Amodei) is that you can't automate true creativity, nor standardise taste. Try as they might. |
|
| ▲ | 12 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | cubefox 8 hours ago | parent | prev [-] |
| > I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental. You clearly didn't read the post. He is talking about AI that is smarter than any human, not today's LLMs. The fact that powerful AI doesn't exist yet doesn't mean there is nothing to worry about. |
| |
| ▲ | root_axis 5 hours ago | parent [-] | | > You clearly didn't read the post This kind of petty remark is like a reverse em dash. Greetings, fellow human. Anyway, I did read it. The author's description of a future AI is basically just a more advanced version of LLMs: > By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties: They then go on to list several properties that meet their definition, but what I'm trying to explain in my comment is that I don't accept them all at face value. I think it's fair to critique from that perspective, since the author explicitly modeled their future on today's LLMs, unlike many AI essays that skip straight to the superintelligence meme as their premise. | | |
| ▲ | cubefox 4 hours ago | parent [-] | | > They then go on to list several properties that meet their definition No, these properties are part of his definition. To say that we have nothing to worry about because today's LLMs don't have these properties misses the point. |
|
|