freedomben a day ago:

> In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application); however, as the cost of using the resource drops, if the price is highly elastic, this results in overall demand increasing, causing total resource consumption to rise. Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox.[1]

I do think there will be some Jevons effect going on with this, but I think it's important to recognize that software development as a resource is different from something like coal. For example, if the average iPhone-only teenager can now suddenly start cranking out apps, that may ultimately increase demand for apps, and there may be more code than ever getting "written," but there won't necessarily be a need for your CS-grad software engineer anymore, so we could still be fucked. Why would you pay a high salary for a SWE when your business teams can just generate whatever app they need without having to know anything about how it actually works?

I think the argument that "AI isn't good enough to replace senior engineers" will hold true for a few years, but not much beyond that. The Jevons paradox will probably hold for software as a resource, but not for SWEs as a resource. In the coal scenario, imagine that coal gets super cheap to procure because we invent robots that can mine it from alpha to omega. Coal demand may go up, but the coal miner's job is toast, and unless that miner has an ownership stake, they will be out on their ass.

[1] https://en.wikipedia.org/wiki/Jevons_paradox
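A toy constant-elasticity model makes the mechanism concrete (this is my own back-of-the-envelope sketch, not something from the article, and the numbers are made up): if demand scales like price^(-eps) and price falls in proportion to the resource needed per unit of output, then total consumption scales as f^(1 - eps) for an efficiency gain f < 1, so it rises exactly when eps > 1.

    # Toy Jevons model: demand has constant price elasticity eps, and
    # price falls in proportion to the resource needed per unit of output.
    def consumption_ratio(efficiency_gain: float, eps: float) -> float:
        """New/old total resource use after an efficiency gain.

        efficiency_gain: fraction of the old resource still needed per
                         unit of output (0.5 = twice as efficient).
        eps: price elasticity of demand (magnitude).
        """
        # Demand Q ~ P**(-eps) and P ~ resource per unit, so total use
        # (Q * resource per unit) scales as efficiency_gain ** (1 - eps).
        return efficiency_gain ** (1 - eps)

    print(consumption_ratio(0.5, 0.8))  # ~0.87: inelastic demand, total use falls
    print(consumption_ratio(0.5, 1.5))  # ~1.41: elastic demand, the Jevons effect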
mxkopy a day ago:

The coal miner would have to pivot to being someone who knows a lot about coal instead of someone who actually obtains it; they'd become more of a coal advisor to the person making decisions about what type of coal to get, how much, and what's even possible with it.

The future I'm seeing with AI is one where software (i.e., as a way to get hardware to do stuff) is basically a non-issue. The example I want to work on soon is telling Siri I want my iPhone to work as a touchpad for my computer and having the necessary drivers built automatically, because that's a reasonable thing I could expect my hardware to do. That's the sort of thing that seems achievable by AI in a couple of turns but would take a single dev a year or two.

And the thing is, I can't imagine a software dev who doesn't have some set of skills that still apply in this future, whether general CS skills (knowing what's within reasonable expectations of hardware, being able to effectively describe more specific behavior, choosing the right abstractions, etc.) or other, more nebulous technical knowledge (e.g., what you'd want to do with hardware in the first place).

One more thing I'll mention: for things like the iPhone example above, there are usually a lot of optimizations and decisions that derive from the user's experience as a human, which the LLM can't really reproduce synthetically. For example, if I turned my phone into a second monitor, the LLM might generate code that sends full-resolution images to the phone even though the phone's screen resolution is much lower; there's no real reason for it to optimize that away if it doesn't know how eyes work and what screens are used for. So at some point it needs to involve a model of a human, at least for examples like these.
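To make the second-monitor point concrete, the human-derived optimization is trivial once you know why it matters; here's a hypothetical sketch (the resolution, codec choice, and streaming setup are all made up, and it assumes Pillow is installed):

    import io

    from PIL import Image

    PHONE_RESOLUTION = (1170, 2532)  # hypothetical phone screen, in pixels

    def frame_for_phone(desktop_frame: Image.Image) -> bytes:
        # Pixels beyond what the phone can display are wasted bandwidth,
        # but nothing in the types or the API forces this step; you only
        # add it if you know what eyes and screens are for.
        scaled = desktop_frame.resize(PHONE_RESOLUTION)
        buf = io.BytesIO()
        scaled.save(buf, format="JPEG", quality=80)
        return buf.getvalue()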
freedomben a day ago:

> The coal miner would have to pivot to being someone who knows a lot about coal instead of someone who actually obtains it; they'd become more of a coal advisor to the person making decisions about what type of coal to get, how much, and what's even possible with it.

I definitely agree that there will be some jobs/roles like that, and it won't be 100% destruction of SWEs (and of the many other gigs that will be affected), but I can't imagine that more than a small percentage of consultants will be needed. The top 10% of engineers will be just fine for the reasons you've said, but at the lower levels it will be a bloodbath (and realistically maybe it should be, as there are plenty of SWEs who probably shouldn't be writing code that matters, but that feels like a separate discussion). Your point about other skills/knowledge is good too, though I suspect most white-collar jobs are on the chopping block as well, just maybe shortly behind.

Your future is one that I'm dreaming about too (although I have a hard time believing Apple would allow it; on Android or some future third option it might be possible). Especially as a Linux user, there have been plenty of times I've thought of cool stuff I'd love to have that would take me months of work to build (time I've accepted I'll never have, at least until my kids are all out of the house, haha). I'm also dreaming of the day when I can just ask the AI to produce more seasons of Star Trek TOS, Have Gun - Will Travel, The Lieutenant, and many other great shows I'm hungry for more of, and have it crank them out. That future would be incredible! But that feels like the smooth side of the sword, and avoiding a deep cut from the sharp side feels increasingly important. Hopefully it will solve itself, but seeing the impacts so far, I'm getting worried.

I appreciate the discussion and the optimism! There is too much AI doomerism out there, and the upsides (like the ones you've mentioned) don't get talked about enough, I think.
thermodynot a day ago:

Computers are not special. They are just heat engines like everything else: we feed them concentrated energy, which they dissipate to do work. They do work on data: we give them data (some of it is called code) and they give us back data. It's all about the information content: how does that data communicate something and relate to the world? "Training" is just upfront work.

Why on Earth do people expect to get, from a machine that processes data, novel information that did not exist before? This whole fantasy hinges on not understanding the sheer amount of data these LLMs are trained on, plus some magical thinking about them producing novel information ex nihilo. I will never understand how intelligent people fall into these patterns of thought. We can only get from computers what we put into them.
|
|
|
|