| ▲ | twodave 2 days ago |
| Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. Besides the fact that just more compute and context size aren’t the right kind of progress. LLMs aren’t coming for your job any more than computer vision is, for a lot of reasons, but I’ll list two more: 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
|
|
| ▲ | reeredfdfdf 2 days ago | parent | next [-] |
| "The only reason to reduce headcount is to remove people who already weren’t providing much value." I wish corporations really acted this rationally. At least where I live hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending significant portion of their time on administrative and bureaucratic tasks that were previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. Cost savings may look good on spreadsheet, but really the overall efficiency of the system suffered. |
| |
| ▲ | ehnto 2 days ago | parent | next [-] | | That's what I see when companies cut juniors as well. AI cannot replace a junior, because a junior has full and complete agency, accountability, and purpose. They retain learning and become a sharper bespoke resource for the business as time goes on. The PM tells them what to do and I give them guidance. If you take away the juniors, you are now asking your seniors to do that work instead, which is more expensive and wasteful. The PM cannot tell the AI junior what to do, for they don't know how. Then you say, hey, we also want you to babysit the LLM to increase productivity; well, I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two types of time. | | |
| ▲ | jack_pp a day ago | parent | next [-] | | > well I can't leave a task with the LLM and come back to it tomorrow You could actually just do that: leave an agent on a problem you would give a junior, go back to your main task, and check the agent's work whenever you feel like it. | | |
| ▲ | ehnto 15 hours ago | parent [-] | | It lacks the ability to self correct and do all the adjacent tasks like client comms etc. So if I come back to it in the afternoon I may have wasted a day in business terms, because I will need to try again tomorrow. What do I tell the client, sorry the LLM failed the simple task so we will have to try again tomorrow? Worse, lie and say sorry this 2 hour task could not be achieved by our developers today. Either way we look incompetent (because realistically, we were not competent, relying on a tool that fails frequently) | | |
| ▲ | jack_pp 14 hours ago | parent [-] | | I'm sorry, but I'm not familiar with the context you mention; I have not worked in a job where I had to communicate with clients, and I find it hard to imagine one where a junior would have to communicate with a client on a 2-hour task. Why would you want a junior to be the public face of your company? |
|
| |
| ▲ | htrp a day ago | parent | prev [-] | | that sounds like a pm problem |
| |
| ▲ | kylinhacker 2 days ago | parent | prev | next [-] | | I'm a full-stack developer. Recently I've found that almost 90% of my work deadlines have been brought forward, and the bosses' scheduling has become stricter. The coworker who is particularly good at pair programming with AI prefers to reduce his/her scheduling (kind of unconsciously). Work is sudden, but salary remains steady. What a bummer. | |
| ▲ | listenallyall 2 days ago | parent | prev [-] | | But wouldn't these spreadsheets be tracking something like total revenue? If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes? I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office: they have tons of assistants and hygienists, and the dentist just goes from room to room performing high-dollar procedures and very little "patient care." If small dentist offices have this all figured out, it seems a little strange that a massive hospital does not. | | |
| ▲ | gwd 2 days ago | parent | next [-] | | First of all, it's not unlikely that the dentist is the owner. And in any case, when you have a small system of fewer than 150 people, it's easy enough for a handful of people to see what's actually going on. Once you get to something in the thousands or tens of thousands, you just have spreadsheets; and anything that doesn't show up in that spreadsheet might as well not exist. Furthermore, you have competing business units, each of which wants to externalize its costs to other business units. Very similar to what GP described -- when I was in a small start-up, we had an admin assistant who did most of the receipt entry and what-not for our expense reports, and we were allowed to tell the company travel agent our travel constraints and have them give us options for flights. When we were acquired by a larger company, we had to do our own expense reports and our own flight searches. That was almost certainly a false economy. And then when we became a major conglomerate, at some point they merged a bunch of IT functions; so the folks in California would make a change and go home, and those of us in Europe or the UK would come in to find all the networks broken, with no way to fix it until the people in California started coming in at 4pm. In all cases, the dollars saved are clearly visible in the spreadsheet, while the "development velocity" lost is noisy, diffuse, and hard to quantify or pin down to any particular cause. I suppose one way to quantify that would be to have the Engineering function track time spent doing admin work and charge that to the Finance function, and time spent idle due to IT outages and charge that to the IT department. But that has its own pitfalls, no doubt. | | |
| ▲ | listenallyall a day ago | parent [-] | | Problem with this analogy is that software development != revenue. The developers and IT are a cost center. So yea in a huge org one of the goals is to reduce costs (admin) spent on supporting a cost center. Doctors generate revenue directly and it can all be traced, so even an extra 20 minutes out of their day doing admin stuff instead of one more patient or procedure is easily noticeable, and affects revenue directly. | | |
| ▲ | gwd 13 hours ago | parent [-] | | You mean, there's a 1-1 correlation between the amount of pointless admin a doctor has to do and the number of patients he sees (and thus the revenue of the clinic). It should be visible on the spreadsheet. Whereas, there's not a 1-1 correlation between the pointless admin a software engineer has to do and the number of paying customers a company gets. But then, why do large orgs try to "save costs" by having doctors do admin work? Somehow the wrong numbers get onto the spreadsheet. Size of the organization -- distance between the person looking at the spreadsheet and the reality of people doing the work -- likely plays a big part in that. |
|
| |
| ▲ | Eisenstein a day ago | parent | prev [-] | | > If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes? I am going to assume that the Doctors are just working longer hours and/or aren't as attentive as they could be and so care quality declines but revenue doesn't. Overworking existing staff in order to make up for less staff is a tried and true play. > I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not. By conflating 'Doctors' and 'Dentists' you are basically saying the equivalent of 'all Doctors' and 'Doctors of a certain specialty'. Dentists are 'Doctors for teeth' like a pediatrician is a 'Doctor for children' or an Ortho is a 'Doctor for bones'. Teeth need maintenance, which is the time consuming part of most visits, and the Dentist has staff to do that part of it. That in itself makes the specialty not really that comparable to a lot of others. | | |
| ▲ | htrp a day ago | parent | next [-] | | I feel like that's how you get Microsoft where each division has a gun pointed at the other division | |
| ▲ | listenallyall a day ago | parent | prev [-] | | It doesn't really matter the type of doctor: spending all their time on revenue-generating activities would seem to be better than spending only 75% generating revenue and 25% on "administrative and bureaucratic tasks" that don't generate revenue and could be accomplished by a much lower-paid employee ("secretaries and assistants"). Perhaps you're correct that the doctors are simply working much longer hours, but doctors are one group of employees among a hospital's staff who generally have a lot of power and aren't easy to make extraordinary demands of. |
|
|
|
|
| ▲ | shaka-bear-tree 2 days ago | parent | prev | next [-] |
Funny, the original post doesn’t mention AI replacing the coding part of his job. There seems to be a running theme of “okay, but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof,” AI is handling it. I want to be optimistic. But it’s hard to ignore what I’m doing and seeing. As far as I can tell, we haven’t hit serious unemployment yet because of momentum and slow adoption. I’m not replying to argue; I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair, waiting for gravity to kick in. |
| |
| ▲ | kace91 a day ago | parent | next [-] | | >There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it. Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull). The most interesting questions are the ones that assume human equivalency. Suppose an AI can produce like a human. Are you ok with merging that code without human review? Are you ok with having a codebase that is effectively a black box? Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes? Are you ok with being dependent on the company providing this code generation? Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them? Will we be ok if the well of public technical discussion LLMs are feeding from dries up? Those are the interesting debates I think. | | |
| ▲ | Symmetry a day ago | parent | next [-] | | > Are you ok with having a codebase that is effectively a black box? When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler, the answer is last Friday, but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we're dependent on the companies that develop our compilers, where we can at least see the output, but also on the companies that make our CPUs, which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be ok with it. | |
| ▲ | kace91 a day ago | parent [-] | | >When was the last time you looked at the machine code your compiler was giving you? You could rephrase that as “when was the last time your compiler didn't work as expected?”. Never in my whole career, in my case. Can we expect that level of reliability? I'm not making the argument of “the LLM is not good enough”; that would bring us back to the boring discussion of “maybe it will be”. The thing is that human language is ambiguous and subject to interpretation, so I think we will have occasionally wrong output even with perfect LLMs. That makes black-box behavior dangerous. | |
| ▲ | Symmetry a day ago | parent [-] | | We certainly can't expect that with LLMs now but neither could compiler users back in the 1970s. I do agree that we probably won't ever have them generating code without more back and forth where the LLM complains that its instructions were ambiguous and then testing afterwards. |
|
| |
| ▲ | etherlord a day ago | parent | prev | next [-] | | I don't think it really matters if you or I or regular people are ok with it if the people with power are. There doesn't seem to be much any of us regular folks can do to stop it, especially as AI eliminates more and more jobs, thus further reducing the economic power of everyday people. | |
| ▲ | kace91 a day ago | parent [-] | | I disagree. There are personal decisions to make: Do you bet on keeping your technical skills sharpened, or stop and focus on product work and AI usage? Do you work for companies that go full AI or try to find one that stays “manual”? What advice do you offer as a technical lead when asked? Leadership ignoring technical advice is nothing new, but there is still value in figuring out those questions. | | |
| ▲ | bluefirebrand 2 hours ago | parent [-] | | > What advice do you offer as a technical lead when asked Learn to shoot a gun and grow your own food, that's my advice as a technical lead right now |
|
| |
| ▲ | listenallyall a day ago | parent | prev [-] | | Have you ever double-checked (in human fashion, not just using another calculator) the output from a calculator? When calculators were first introduced I'm sure some people such as scientists and accountants did exactly that. Calculators were new, people likely had to be slowly convinced that these magic devices could be totally accurate. But you and I were born well after the invention of calculators, our entire lives nobody has doubted that even a $2 calculator can immediately determine the square root of an 8-digit number and be totally accurate. So nobody verifies, and also, a lot of people can't do basic math. |
| |
| ▲ | torginus 2 days ago | parent | prev | next [-] | | I predict by March 2026, AI will be better at writing doomer articles about humans being replaced than top human experts. | | | |
| ▲ | twodave a day ago | parent | prev | next [-] | | Well, I would just say to take into account the fact that we're starting to see LLMs be responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would be price gouging if everyone weren't doing it. Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be. Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it. | |
| ▲ | jakewins 2 days ago | parent | prev | next [-] | | I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations. If anything the quality has gotten worse, because the models are now so good at lying when they don’t know that it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying; it never says “I don’t know”. Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”. The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield “write some code that does x”, much faster without LLMs. | |
| ▲ | NotOscarWilde a day ago | parent | next [-] | | > Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect Fully agree; in fact, this literally happened to me a week ago -- ChatGPT was confidently incorrect about its simple lock structure for my multithreaded C++ program, and wrote paragraphs upon paragraphs about how it works, until I pressed it twice about a (real) possibility of some operations deadlocking, and then it folded. > Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it's the same - super fast code generation, until suddenly staggering hallucinations. As a university assistant professor trying to keep up with AI while doing research/teaching as before, this also happens to me, and I am dismayed by it. I am certain there are models out there that can solve IMO problems and generate research-grade papers, but the ones I can get easy access to as a customer routinely mess up stuff, including: * Adding extra simplifications to a given combinatorial optimization problem so that its dynamic programming approach works. * Claiming some inequality is true when, upon reflection, it derived A >= B from A <= C and C <= B. (This is all ChatGPT 5, thinking mode.) You could fairly counterclaim that I need to get more funding (tough) or invest much more of my time and energy to get access to models closer to what Terence Tao and other top people trying to apply AI in CS theory are currently using. But at least the models cheap enough for me to access as a private person are not on par with what the same companies claim to achieve. | |
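For readers outside multithreaded programming, here is a minimal sketch of the kind of bug being discussed (all names are invented for illustration, not taken from the commenter's program). If two threads take the same pair of locks in opposite order, each can grab one mutex and wait forever on the other -- the classic "ABBA" deadlock that a model will happily write and then declare safe. C++17's `std::scoped_lock` sidesteps it by acquiring both mutexes with a deadlock-avoidance algorithm, so the call order no longer matters:

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <utility>

// Hypothetical example. If transfer() took from.m and then to.m with two
// separate std::lock_guards, a->b and b->a transfers running concurrently
// could each hold one mutex and block forever on the other.
struct Account {
    std::mutex m;
    int balance = 100;
};

void transfer(Account& from, Account& to, int amount) {
    std::scoped_lock lock(from.m, to.m);  // both mutexes acquired atomically
    from.balance -= amount;
    to.balance += amount;
}

// Hammer two accounts from opposite directions; the transfers cancel out,
// so both balances should end where they started, with no deadlock.
std::pair<int, int> run_opposite_transfers() {
    Account a, b;
    std::thread t1([&] { for (int i = 0; i < 10000; ++i) transfer(a, b, 1); });
    std::thread t2([&] { for (int i = 0; i < 10000; ++i) transfer(b, a, 1); });
    t1.join();
    t2.join();
    return {a.balance, b.balance};
}
```

Whether a given lock layout is deadlock-safe is exactly the kind of property the commenters note a model will assert with complete confidence either way, which is what makes the review burden real.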
| ▲ | empiricus a day ago | parent | prev [-] | | I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here? | | |
| ▲ | jakewins a day ago | parent [-] | | I mean, I'm just some guy, but in my mind: - They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as it was 3 years ago or, as I said above, worse - It's clearly possible to solve this, since we humans exist and our brains don't have this problem There's then two possible paths: either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect of the human brain's configuration that they've yet to replicate. Or the hallucinations will go away with better and more training. The latter seems to be the bet everyone is making; that's why there's all these data centers being built, right? So either larger training will solve the problem, and there's enough training data, silicon and electricity on earth to perform that scale of training -- or there isn't. There's 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly-mutating state, memory: short term through RNA and protein presence or lack thereof, long term through chromatin formation, enabling and disabling its own DNA over time, in theory also permanent through DNA rewriting via TEs. Each one has a vast array of input modes - direct electrical stimulation, chemical signaling through a wide array of signaling molecules, and electrical field effects from adjacent cells. Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology. The complexity of the neural networks that run our minds is spectacularly higher than that of the simulated neural networks we're training on silicon. That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run.
So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is the frontier model developers lie incessantly in every press release, just like their LLMs. | | |
| ▲ | xmcqdpt2 9 hours ago | parent | next [-] | | The complexity of actual biological neural networks became clear to me when I learned about the different types of neurons. https://en.wikipedia.org/wiki/Neural_oscillation There are clock neurons, ADC neurons that transform the analog intensity of a signal into counts of digital spikes, neurons that integrate signals over time, neurons that synchronize together, etc. Transformer models have none of this. | |
| ▲ | empiricus a day ago | parent | prev [-] | | Thanks, that's a reasonable argument. Some critique: based on this argument, it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes. |
|
|
| |
| ▲ | botanrice a day ago | parent | prev | next [-] | | idk man, I work at a big consultant company and all I'm hearing is dozens of people coming out of their project teams like, "yea I'm dying to work with AI, all we're doing is talking about it with clients". It's like everyone knows it is super cool, but nobody has really cracked the code for what its economic value truly, truly is yet. | |
| ▲ | zwnow 2 days ago | parent | prev [-] | | > There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it. Any sources on that? Except for some big tech companies I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves... | |
| ▲ | tormeh 2 days ago | parent [-] | | This is what I also see. AI is used sparingly. Mostly for information lookup and autocomplete. It's just not good enough for other things. I could use it to write code if I really babysit it and triple check everything it does? Cool cool, maybe sometime later. | | |
| ▲ | kakacik 2 days ago | parent [-] | | Who actually works like the typical code sweatshop, churning out one smallish app at a time and quickly moving on? Certainly not your typical company-hired permanent dev; they (we) drown in tons of complex legacy code that has kept working for the past 10-20 years and that the company sees no reason to throw away. For the folks who do churn out such apps, it's great short term and horrible long term. For folks like me, development is maybe 10% of my work, and by far the best part - creative, problem-solving, stimulating, actually learning myself. Why would I want to mildly optimize that 10% and lose all the good stuff, while overall speed wouldn't visibly improve? To really improve speed in bigger orgs, the change would have to happen in processes, office politics, management priorities and so on. No help from LLMs there; if anything, trend-chasing managers just introduce more chaos with negative consequences. |
|
|
|
|
| ▲ | frchalli 2 days ago | parent | prev | next [-] |
> The only reason to reduce headcount is to remove people who already weren’t providing much value. There were many secretaries up until the late 20th century who took dictation, either writing notes of what they were told or working from a recording, then typed it out and distributed memos. At first, there were many people typing; then mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for manual copying, then email reduced the need to print something out, and now instant messaging reduces email clutter and keeps messages shorter. All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While they may not have been held in high esteem, these people were critical for getting things done and scaling. I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing. |
| |
| ▲ | belorn a day ago | parent | next [-] | | The thing that replaced the old memos is not email, it's meetings. It's not uncommon to have meetings with hundreds of participants that in the past would have been a simple memo. It would be amazing if LLMs could replace the role that meetings have in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk with your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it. | | |
| ▲ | kalterdev 2 days ago | parent | prev | next [-] | | The crucial observation is that automation has historically been a net creator of jobs, not a destroyer. | |
| ▲ | zarzavat 2 days ago | parent | next [-] | | Sure, if you're content to stack shelves. AI isn't automation. It's thinking. It automates the brain out of human jobs. You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're, say, a surgeon or a plumber, you're in a better place. | |
| ▲ | ben_w a day ago | parent | next [-] | | > Sure, if you're content to stack shelves. Why this example? One of the things automation has done is reduce and replace stevedores, the shipping equivalent of stacking shelves. Amazon warehouses are heavily automated, almost self-stacking-shelves. At least, according to the various videos I see, I've not actually worked there myself. Yet. There's time. > AI isn't automation. It's thinking. It automates the brain out of human jobs.
> You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place. Right up until the AI is good enough to control the robot that can do that job. Which may or may not be humanoid. (Plus side: look how long it's taking for self-driving cars, and how often people think a personal anecdote of "works for me" is a valid response to "doesn't work for me".) Even before the AI gets that good, a nice boring remote-controlled android doing whatever manual labour could outsource the "controller" position to a human anywhere on the planet. Mental image: all the unemployed Americans protesting outside Tesla's factories when they realise the Optimus robots within are controlled remotely by people in 3rd-world countries getting paid $5/day. | |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | ForHackernews 2 days ago | parent | prev | next [-] | | Yes, AI is automation. It automates the implementation. It doesn't (yet?) automate the hard parts around figuring out what work needs to be done and how to do it. The sad thing is that for many software devs, the implementation is the fun bit. | | | |
| ▲ | bigfishrunning a day ago | parent | prev [-] | | Except it isn't thinking. It is applying a model of statistical likelihood. The real issue is that it's been sold as thinking, and laypeople believe that it's thinking, so it is very likely that jobs will be eliminated before it's feasible to replace them. People that actually care about the quality of their output are a dying breed, and that death is being accelerated by this machine that produces somewhat plausible-looking output, because we're optimizing around "plausible-looking" and not "correct" |
| |
| ▲ | OkayPhysicist a day ago | parent | prev | next [-] | | That observation is only useful if you can point at a capability that humans have that we haven't automated. Hunter-gatherers were replaced by the technology of agriculture. Humans were still needed to provide the power to plow the earth and reap the crops. Human power was replaced by work animals pulling plows, but only humans could make decisions about when to harvest. Jump forward a good long time: computers can run algorithms to indicate when best to harvest. Humans are still uniquely flexible and creative in their ability to deal with unanticipated issues. AI is intended to make "flexible and creative" no longer a bastion of human uniqueness. What's left? The only obvious one I can think of is accountability: as long as computers aren't seen as people, you need someone to be responsible for the fully automated farm. | |
| ▲ | _DeadFred_ a day ago | parent | prev [-] | | 'Because thing X happened in past it is guaranteed to happen in the future and we should bet society on it instead of trying to you know, plan for the future. Magic jobs will just appear, trust me' |
| |
| ▲ | jstanley a day ago | parent | prev [-] | | > At first, there were many people typing, then later [...] There were more people typing than ever before? Look around you, we're all typing all day long. | | |
| ▲ | kllamnjro a day ago | parent [-] | | I think they meant that there was a time when people’s jobs were: 1. either reading notes in shorthand, or reading something from a sheet that was already fully typed using a typewriter, or listening to recorded or live dictation 2. then typing that content out into a typewriter. People were essentially human copying machines. |
|
|
|
| ▲ | enduser 2 days ago | parent | prev | next [-] |
| This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect. |
| |
| ▲ | avereveard 2 days ago | parent | next [-] | | Overseeing robots is a time-limited activity. Even building robots has a finite horizon. Current tech can't yet replace everything, but many jobs can already see the horizon or are at sunset. The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement. This time around some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale, but others aren't, as the tech is run by small teams at large scale and has virtually no related industries it depends on, the way, say, cars do. It's energy and GPUs. Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale business. Maybe a few tens of millions can be employed there? Meanwhile I just don't see the designer + AI job role materializing; I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs. | | |
| ▲ | misnome 2 days ago | parent [-] | | > because end of the day people can use this tech to create wealth at unprecedented scale _Where?_ So far the only widespread use of this technology is to shove a chatbot interface into every UI that never needed one. Nothing has been improved, no revelatory tech has come out (tools to let you chatbot faster don't count). | | |
| ▲ | listenallyall a day ago | parent | next [-] | | Honestly, this comment sounds like someone dismissing the internet in 1992 when the web was all text-based and CompuServe was leading-edge. No "revelatory tech" just yet, but it was right around the corner. | |
| ▲ | avereveard a day ago | parent | prev [-] | | In the backend, not directly customer-facing. Coca-Cola is two years into running AI ads. Lovable is cash positive, and many of the builders there are too. A few creators are earning a living with Suno songs. Not millions, mind, but they can live off their AI works. If you don't see it happening around you, you're just not looking. | | |
| ▲ | misnome a day ago | parent [-] | | So, a company cutting costs, a tool to let you chatbot faster, and musical slop at scale. This doesn't sound like "creating wealth at unprecedented scale" |
|
|
| |
| ▲ | vlovich123 2 days ago | parent | prev | next [-] | | I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily. | | |
| ▲ | 9rx 2 days ago | parent | next [-] | | > Computers replaced humans as the best chess players, not computers with human oversight. Oh? I sat down for a game of chess against a computer and it never showed up. I was certain it didn't show up because computers are unable to without human oversight, but tell me why I'm wrong. | | |
| ▲ | p-e-w 2 days ago | parent [-] | | Apparently human chess grandmasters also need “oversight” from airplanes, because without those, essentially none of them would show up at elite tournaments. | | |
| ▲ | 9rx 2 days ago | parent [-] | | Things like trains, boats, and cars exist. Human chess grandmasters can show up to elite tournaments, and perform while there, without airplanes. Computer chess systems, on the other hand, cannot do anything without human oversight. | | |
| ▲ | ben_w a day ago | parent | next [-] | | > Things like trains, boats, and cars exist. Human chess grandmasters can show up to elite tournaments, and perform while there, without airplanes. Those modes of transport are all equivalent to planes for the point being made. I (not that I'm even as good as "mediocre" at chess) cannot legally get from my current location to the USA without some other human being involved. This is because I'm not an American and would need my entry to be OKed by the humans managing the border. I also doubt that I would be able to construct a vessel capable of crossing the Atlantic safely, possibly not even a small river. I don't even know enough to enumerate how hard that would be; I would need help making a list. Even if I knew all that I needed to, it would be much harder to do it from raw materials rather than buying pre-cut timber, steel, cloth (for a sail), etc. Even if I did it that way, I can't generate cloth fibres and wood from my body like plants do. Even if I did extrude and secrete raw materials, plants photosynthesise and I eat; living things don't spontaneously generate these products from their souls. For arguments like this, treat the AI like you treat Stephen Hawking: a lack of motor skills isn't relevant to the rest of what they can do. When AI gets good enough to control the robots needed to automate everything from mining the raw materials all the way up to making more robots to mine the raw materials, then not only are all jobs obsolete, we're also half a human lifetime away from a Dyson swarm. | | |
| ▲ | 9rx a day ago | parent [-] | | > Those modes of transport are all equivalent to planes for the point being made. The point is that even those things require oversight from humans. Everything humans do requires oversight from humans. How you missed it, nobody knows. Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything. Sad state of affairs when not even the HN crowd understands such basic concepts about computing anymore. I guess that's what happens when one comes to tech by way of "Learn to code" movements promising a good job instead of by way of having an interest in technology. | | |
| ▲ | ben_w a day ago | parent [-] | | > Everything humans do requires oversight from humans. How you missed it, nobody knows. 'cause you said: "Computer chess systems, on the other hand, cannot do anything without human oversight."
The words "on the other hand" draw a contrast, suggesting that the subject of the preceding sentence ("chess grandmasters") differs with regard to the task ("show up to elite tournaments"), and thus can manage without the stated limitation ("anything without human oversight"). > Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything. OK, and? Nobody's claiming "today" is that day. Even Musk, despite his implausible denials about Optimus being remote-controlled, isn't claiming that today is that day. The message you replied to was this: https://news.ycombinator.com/item?id=46201604 The chess-playing example there was an existing case of software beating humans in a specific domain, used to demonstrate that human oversight is not a long-term solution. You can tell by the words "end state", and even then it was hypothetical (due to "if"), as in: "If successful, the end state is full automation."
There was a period when a chess AI that was in fact playing a game of chess could beat any human opponent, yet would still lose to a combined human-AI team. That era has ended, and now the humans just hold the AI back; we don't add anything (beyond switching it on). Furthermore, there's nothing at all that says an insufficiently competent AI won't wipe us out. As we can already observe, there's clearly nothing stopping real humans from using insufficiently competent AI, due to some combination of laziness and vendors over-promising what can be delivered. Also, since the peak of the Cold War we've been in a situation where the automation we have can trigger WW3 and kill 90% of the human population, despite the fact that the very same automation would be imminently destroyed along with it, with near-misses on both US and USSR systems. Human oversight stopped it, but like I said, we can already observe lazy humans deferring to AI, so how long will that remain true? And it doesn't even need to be that dramatic. Never mind global defence stuff; just consider correlated risks: all the companies outsourcing their decisions to the same models, even when the models' creators win a Nobel prize for creating them, is a description of the Black-Scholes formula and its involvement in the 2008 financial crisis. Sure, that didn't kill us all, but it illustrates the failure mode rather than the consequences. | | |
| ▲ | 9rx a day ago | parent [-] | | > The words "on the other hand" draws a contrast, suggesting that the subject of the sentence before it I know it can be hard for programmers stuck in a programming-language mindset, especially where one learned about software from "Learn to code" movements, but as this is natural language, technically it only draws what I intended for it to draw. If you wish to interpret it another way, cool. Much as in the Carly Simon song, it makes no difference to me. |
|
|
| |
| ▲ | saberience a day ago | parent | prev | next [-] | | What planet are you on? What relevance does this have at all? Computers don't need to go and fly somewhere, they can just be accessed over a network. Also, the location and traveling is irrelevant to the main point, that is, that computers far exceeded our capacity in Chess and Go many years ago and are now so much better we cannot even really understand their moves or why they do them and have no hope to ever compete. The same will be true of every other intellectual discipline with time. It's already happening with maths and science and coding. | | |
| ▲ | 9rx a day ago | parent [-] | | > What planet are you on? The one where computers don't magically run all by themselves. It's amazing how out of touch HN has become with technology. Thinking that you can throw something up into the cloud, or whatever was imagined, needing no human oversight to operate it... Unfortunately, that's not how things work in this world. "The cloud" isn't heaven, despite religious imagery suggesting otherwise. It requires legions of people to make it work. This is the outcome of that whole "Learn to code" movement from a number of years ago, I suppose. Everyone thinks they're an expert in everything when they reach the mastery of being able to write a "Hello, World" program in their bedroom. But do tell us what planet you are on as it sounds wonderful. | | |
| ▲ | vlovich123 a day ago | parent [-] | | The number of people it takes to maintain a server rack is minimal, and it's low-cost labor. Most of the money is spent on hardware and on paying people to write software for that hardware. Writing that software is becoming automated, and it's not hard to imagine that buying will be as well. So you're left with the equivalent of a plumber running your data center based on what automated systems flag as issues, with other automated systems walking them through the troubleshooting. There might be a specialist they fly in at an insane rate (in the shorter term) if none of that works, but we're talking about a drastic reduction in the workforce needed, and this is for data center maintenance, which not many companies have anymore since the cloud migration. | | |
| ▲ | 9rx a day ago | parent [-] | | > The amount of people it takes to maintain a server rack is minimal and low cost labor. So what you're saying is that it requires human oversight. Got it. Glad you finally caught up to where the rest of us were many comments ago. But why did it take so long? Inquiring minds want to know. | | |
| ▲ | vlovich123 19 hours ago | parent [-] | | Once again missing the forest for the trees about what the article is about. But it’s ok - reading comprehension isn’t for everybody. | | |
| ▲ | 9rx 8 hours ago | parent [-] | | The... article? There is nothing in this subthread about an article. It is an extension on what was asserted in a segment of an earlier comment. At least the source of your confusion is now clear. I suggest you stop doing that stupid HN thing where you read individual comments in complete isolation. This isn't programming where an individual pure function is able to hold significance all on its own. Context setup throughout the thread evolution is necessary to take in. But you do you. We enjoy the hilarity found in watching the stumbling and grasping, so it's all good either way. |
|
|
|
|
| |
| ▲ | fsflover 2 days ago | parent | prev [-] | | Yes, a computer chess system replacing a thousand chess players requires a couple of developers for the oversight. | | |
| ▲ | 9rx 2 days ago | parent [-] | | Computer chess systems don't need developer oversight. They do, however, require oversight from, let's call them, IT people. |
|
|
|
| |
| ▲ | baq 2 days ago | parent | prev | next [-] | | Humans still play chess and horses are still around as a species. (Disclaimer: this is me trying to be optimistic in a very grim and depressing situation) | | |
| ▲ | plufz 2 days ago | parent | next [-] | | I try to be optimistic as well. But obviously horses are almost exclusively a hobby today. The work horse is gone. I think the problem is partly political: if we manage to spread the wealth AI can create, we are fine. If we let it concentrate power even more, it looks very grim. | |
| ▲ | skissane 2 days ago | parent | prev | next [-] | | B2C businesses need consumers. If AIs take all the jobs, then most of the population (minus the small minority who are independently wealthy and can live off their investments) go broke, and can’t afford to buy anything any more. Then all the B2C businesses go broke. Then all the B2B businesses lose all their B2C business customers and go broke. Then the stock market crashes and the independently wealthy lose all their investments and go broke. Then nobody can afford to pay the AI power bills any more, so the AIs get turned off. And that’s why across-the-board AI-induced job losses aren’t going to happen: nobody wants the economic house of cards to collapse. Corporate leaders aren’t stupid enough to blow everything up, because they don’t want to be blown up in the process. And if they actually are stupid enough, politicians will intervene with human-protectionism measures like regulations mandating humans in the loop of major business processes. The horse comparison ultimately doesn’t work because horses don’t vote. | |
| ▲ | 9rx 2 days ago | parent | next [-] | | > B2C businesses need consumers Businesses need consumers when those consumers are necessary to provide something in return (e.g. labor). If I want beef and only have grass, my grass business needs people with cattle wanting my grass so that we can trade grass for beef, certainly. But if technology can provide me beef (and anything else I desire) without involving any other people, I don't need a business anymore. Business is just a tool to facilitate trade. No need for trade, no need for business. | |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | baq 2 days ago | parent | prev | next [-] | | This is the optimistic take, too. There are plenty of countries which don’t care about votes, indeed there are dictators that don’t care about their subjects, they only care about outcomes for themselves. The economic argument only works in capitalism and rule of law - and that’s assuming money is worth anything anymore. | | |
| ▲ | skissane 2 days ago | parent [-] | | The Chinese Communist Party is obsessed with social stability. Do you think they’ll allow AI to take all the jobs, destroying China’s domestic economy in the process? Or will they enact human protectionism regulations? What Would Xi Jinping Do? | | |
| ▲ | ben_w a day ago | parent | next [-] | | > Do you think they’ll allow AI to take all the jobs, destroying China’s domestic economy in the process? If AI can take all the jobs (IMO at least a decade away for the robotics, and that's a minimum, not a best guess), the economy hasn't been destroyed; it's just doing whatever mega-projects the owners (presumably in this case the Chinese government) want it to do. That can be all the social stability stuff they want. Which may be anything from "none at all" to whatever the Chinese equivalent is of the American traditional family in a big detached house with a white picket fence, everyone going to the local church every Sunday, people supporting whichever sports teams they prefer, etc. I don't know Chinese culture at all (well, not beyond OSP and, e.g., their retelling of Journey to the West), so I don't know what their equivalents to any of those things would be. | |
| ▲ | tstrimple a day ago | parent | prev [-] | | Look at what China does to protect its citizens against social media. You see China enacting many of the social media protections that many HN enthusiasts demand, yet Sinophobia makes them reframe it as a negative. "Children shouldn't have access to social media, except when China does it then it's bad!" |
|
| |
| ▲ | jacquesm 2 days ago | parent | prev | next [-] | | The independently wealthy still need the economies of scale provided by a normal society. | |
| ▲ | myth_drannon 2 days ago | parent | prev [-] | | Can the process be similar to a sudden collapse of USSR's economic system? The leaders weren't stupid and tried to keep it afloat but with underlying systemic issues everything just cratered. Can the process be modelled using game theory where the actors are greedy corporate leaders and hungry populace? | | |
| ▲ | twoodfin a day ago | parent [-] | | The USSR’s political system collapsed fairly suddenly. Its economic system had been rotten for decades. |
|
| |
| ▲ | ErroneousBosh 2 days ago | parent | prev [-] | | I am somewhat confident that horses are going to replace cars and tractors pretty soon, possibly within my lifetime and quite likely within my son's. He's going to learn how to drive (and repair) a tractor, but he's also going to learn how to ride a horse. |
| |
| ▲ | ahf8Aithaex7Nai 2 days ago | parent | prev | next [-] | | Perhaps you have missed the essential point. Who drives the cars? It's not the horses, is it? And a chess computer is just as unlikely to start a game of chess on its own as a horse is to put on its harness and pull a plow across a field. I'm not entirely sure what impact all this will have on the job market, but your comparisons are flawed. | | |
| ▲ | Covenant0028 2 days ago | parent [-] | | In the case of horses and cars, you need the same number of people to drive both (exactly one per vehicle). In the case of AI and automation, the entire economic bet is that agents will be able to replace X humans with Y humans. Ideally for employers Y=0, but they'll settle for Y<<X. People seem to think this discussion is a binary where either agents replace everybody or they don't. It's not that simple. In aggregate, what's more likely to happen (if the promises of AI companies hold good) is large scale job losses and the remaining employees becoming the accountability sinks to bear the blame when the agent makes a mistake. AI doesn't have to replace everybody to cause widespread misery. | | |
| ▲ | ahf8Aithaex7Nai a day ago | parent [-] | | Yes, I understand that it's about saving on labor costs. Depending on how successful this is, it could lead to major changes in the labor market in economies where skilled workers have been doing quite well up to now. |
|
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | ForHackernews 2 days ago | parent | prev | next [-] | | Unless the state of the art has advanced, it was the case that grandmasters playing with computer assistance ("centaur chess") played better than either computers or humans alone. | |
| ▲ | ErroneousBosh 2 days ago | parent | prev [-] | | > Computers replaced humans as the best chess players Computers can't play chess. |
| |
| ▲ | impossiblefork 2 days ago | parent | prev [-] | | I think the big problem here though, is that humans go from being mandatory to being optional, and this changes the competitive landscape between employers and workers. In the past a strike mattered. With robots, it may have to go on for years to matter. | | |
| ▲ | baq 2 days ago | parent | next [-] | | A strike going long enough and becoming big enough becomes a political matter. In the limit, if politicians don't find a solution, blood gets spilled. If military and police robots are in place by that time, you can ask yourself what's the point of those unproductive human leeching freeriders at all. | | | |
| ▲ | simgt 2 days ago | parent | prev [-] | | In this scenario wages will have been driven down so much that there will be barely anyone left to buy the products made by these fully automated corps. A strike won't work, but a revolt may and is more likely to happen. | | |
|
|
|
| ▲ | gniv 2 days ago | parent | prev | next [-] |
> most companies will still have more work to do than resources to assign to those tasks This is very important yet rarely talked about. Having worked in a well-run group on a very successful product, I could see that no matter how many people were on a project there was always too much work. And always too many projects. I am no longer with the company but I can see some of the ideas talked about back then being launched now, many years later. For a complex product there is always more to do, and AI would simply accelerate development. |
|
| ▲ | somenameforme 2 days ago | parent | prev | next [-] |
| Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment. Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself. [1] - https://en.wikipedia.org/wiki/Keynesian_economics |
| |
| ▲ | schmichael 2 days ago | parent | next [-] | | Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) than to reduce work hours. At least since the Industrial Revolution, and probably before, the only advances that have led to shorter work weeks are unions and worker protections. Not technology. Technology may create more surplus (food, goods, etc.) but there’s no guarantee what form that surplus will reach workers in, if it does at all. | |
| ▲ | bloppe 2 days ago | parent | next [-] | | Margins require a competitive edge. If productivity gains are spread throughout a competitive industry, margins will not get bigger; prices will go down. | | |
| ▲ | LPisGood 2 days ago | parent [-] | | That feels optimistic. This kind of naive free market ideology seems to rarely manifest in lower prices. | | |
| ▲ | degamad a day ago | parent | next [-] | | That's because free markets don't always result in competitive industries. | |
| ▲ | bloppe a day ago | parent | prev | next [-] | | Then maybe you've never worked in a competitive industry. I have. Margins were very small. | | |
| ▲ | LPisGood a day ago | parent [-] | | I’ve certainly spent time in the marketplace buying or not buying products. |
| |
| ▲ | HDThoreaun a day ago | parent | prev [-] | | Every competitive industry has tiny margins. High-margin businesses exist because of a lack of competition. | |
| ▲ | LPisGood a day ago | parent [-] | | I think there are plenty of counter examples. | | |
| ▲ | HDThoreaun 17 hours ago | parent [-] | | Every rule has exceptions, usually because of some quirk of the market. The most obvious example is adtech, which is able to sustain massive margins because the consumers get the product for free, so they see no reason to switch, and the advertisers are forced to follow the consumers. Tech in general has high margins, but I expect them to fall as the offerings mature. Companies will always try to lock in their users like AWS/Oracle do, but that's just a sign of an uncompetitive market imo. |
|
|
|
| |
| ▲ | anon7000 2 days ago | parent | prev | next [-] | | > Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) then it is to reduce work hours I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service of that goal. | |
| ▲ | timClicks 2 days ago | parent | next [-] | | Sorry if this is somewhat pedantic, but I believe that only US companies (and possibly only Delaware corporations?) are bound by the requirement to maximize shareholder value, and then only by case law rather than statute. Other jurisdictions allow the directors more discretion, or place more weight on the company's constitution/charter. |
| ▲ | tirant 2 days ago | parent | prev [-] | | That’s not a good summary of capitalism at all, because you omit the part where the interests of sellers and buyers align. Which is precisely what has made capitalism successful. Profit growth is based primarily on offering the product that best matches the consumer’s wishes at the lowest price and production cost possible. That benefits both the buyer and the seller. If the buyer does not care about product quality, then you will not have any company producing quality products. The market is just a reflection of that dynamic. And in the real world we can easily observe that: many market niches are dominated by quality products (outdoor and safety gear, professional and industrial tools…) while others tend to be dominated by low quality (low-end fashion, toys). And that result is not imposed by profit growth but by average consumer preference. You can of course disagree with those consumer preferences and not buy low-quality products; that’s why you most probably also find high-quality products in any market niche. But you cannot blame companies for that. What they sell is just the result of aggregated buyer preferences and free market decisions. | |
| |
| ▲ | goatlover 2 days ago | parent | prev [-] | | Failure of politics and the media then. Majority of voters have been fooled into voting against their economic interests. |
| |
| ▲ | thedailymail 2 days ago | parent | prev | next [-] | | In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane. "There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard." | | |
| ▲ | somenameforme 2 days ago | parent | next [-] | | A study [1] I was looking at recently was extremely informative. It's a poll from UCLA given to incoming classes that they've been carrying out since the 60s. In 1967, 86% of students felt it was "essential" or "very important" to "[develop] a meaningful philosophy of life", while only 42% felt the same about "being very well off financially." By 2015 those values had essentially flipped, with only 47% viewing a life philosophy as very important, and 82% viewing being financially well off as very important. It's rather unfortunate it only began in 1967, because I think we would see an even more extreme flip if we were able to just go back a decade or two more, back towards Keynes' time. As productivity and wealth accumulation increased, society seems to have trended in the exact opposite direction he predicted. Or at least there's a contemporary paradox. Because I think many, if not most, younger people hold wealth accumulation in some degree of disdain yet also seek to do the exact same themselves. In any case, in a society where wealth is seen as literally the most important aspect of life, it's not difficult to predict what follows. [1] - https://www.heri.ucla.edu/monographs/50YearTrendsMonograph20... | |
| ▲ | odo1242 2 days ago | parent | next [-] | | Well, keep in mind students at UCLA at 1967 were probably among the most wealthy in the country. A lot more average people at UCLA nowadays. Of course being financially well off wouldn't be the most important thing if you were already financially well off. | | |
| ▲ | somenameforme 2 days ago | parent [-] | | Interesting question that the study can also answer, because it also asked about parental income! --- 1966 Median Household Income = $7400 [1]. 51% of students in the $0-9999 bracket Largest chunk of students (33%) in $6k-$9999 bracket. Percent of students from families earning at least 2x median = 23% --- 2015 Median Household Income = $57k [1]. 65% of students came from families earning more than $60k. Largest chunk of students (18%) in $100k-$150k bracket. Percent of students from families earning at least 2x median = 44% --- So I think it's fairly safe to say that the average student at UCLA today comes from a significantly wealthier family than in 1966. [1] - https://www.census.gov/library/publications/1967/demo/p60-05... [2] - https://fred.stlouisfed.org/series/MEHOINUSA646N | | |
| |
| ▲ | tirant 2 days ago | parent | prev | next [-] | | I wonder what the proportion of answers would be between different economic levels of society. What we know so far, though, is that many of the traditional values were bound to the old society structures, based on the traditional family. The advent of the sexual revolution, brought by the contraception pill, completely obliterated those structures, changing the family paradigm since then. This was only accentuated in the last decade by social media and the change in the sexual marketplace due to dating apps. Probably today many young people would just prioritize reputation (e.g. followers) over wealth and life philosophy, as that seems to be the trend that dictates the sexual marketplace dynamics. |
| ▲ | imtringued a day ago | parent | prev [-] | | The paradox is that the general principles of the market work, but the market is invisibly dysfunctional in its details. It is generally true that higher income jobs are allocated to higher productivity workers, but it does not follow that high incomes imply high productivity and vice versa for low incomes.

If you combine the above with a disequilibrium market where supply of labor exceeds the demand for labor, then from a naive perspective it would appear as if the unemployed deserve their unemployment. After all, the most productive members are all employed and rewarded for their efforts. The unemployed are just lazy (voluntarily unemployed) and incompetent (society is better off without them). Any form of punishment is seen as justified and not some structural failing of the system.

The problem is that if there is a labor market disequilibrium, there will always be unemployed people, and even if you think the productivity ranking is a good thing, it just means that if one of the "lazy" people suddenly becomes "hard working", they will just take the place of someone else. Nothing has changed other than that the standard for laziness has risen.

Even if people notice that the system is fundamentally broken, they realize that individually they are either a beneficiary of the system and therefore don't see a reason to change it, or they don't have the ability to change the system and rather focus on taking someone else's place. This will result in an artificial Darwinian rat race where people see each other as competitors to defeat. This is my explanation for why immigrants make a good scapegoat even though immigration doesn't affect the rules of the game at all.

Here is an analogy via a game of musical chairs. There is the perception that more immigrants means more players competing for chairs. This is a naive interpretation that looks obvious. What is being forgotten is that each player brings a new chair, and the number of missing chairs is a percentage of the number of players. The truth is that having more immigrants means you can take their chair away for yourself. So immigration is not causative here. The problem is that there were never enough chairs to begin with, no matter how many people are playing the game. |
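The chairs arithmetic in that analogy can be sketched in a few lines (the 5% shortage rate is an illustrative assumption, not a claim about any real labor market):

```python
# Musical chairs where each new player also brings a chair, but the
# number of missing chairs is a fixed percentage of the players.
# Adding players changes the absolute count of losers, not the loser rate.

def players_without_chairs(players: int, shortage_rate: float = 0.05) -> int:
    """Players left standing when chairs = players minus a fixed shortage rate."""
    missing_chairs = round(players * shortage_rate)
    return missing_chairs

for n in (100, 200, 1000):
    left_standing = players_without_chairs(n)
    print(n, left_standing, left_standing / n)  # the rate stays at 5% regardless of n
```

However many players join, the fraction left standing is pinned by the shortage rate, which is the comment's point: the game was short of chairs before anyone new arrived.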
| |
| ▲ | refactor_master 2 days ago | parent | prev [-] | | > We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years Still haven’t gotten rid of work for work’s sake being a virtue, which explains everything else. Welfare? You don’t “deserve” it. Until we solve this problem, we’re more or less heading straight for feudalism. |
| |
| ▲ | azan_ a day ago | parent | prev | next [-] | | > We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. Didn’t we also get standards of living much higher than he would ever imagine? I think blaming everything on billionaires is really misguided and shallow. | | |
| ▲ | somenameforme 16 hours ago | parent [-] | | It depends on how you value things. I'd prefer to have a surplus of time and a scarcity of gizmos, rather than a surplus of gizmos and a scarcity of time. Obviously basic needs being met is very important, but we've moved way beyond that as a goal, while somehow also kind of simultaneously missing it. |
| |
| ▲ | machomaster 2 days ago | parent | prev [-] | | > We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. We just instead started doing Bullshit Jobs. https://en.wikipedia.org/wiki/Bullshit_Jobs |
|
|
| ▲ | hn_throwaway_99 2 days ago | parent | prev | next [-] |
| I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) happens very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old line: "How did you go bankrupt?" "Gradually, then suddenly." I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized. Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once". |
| |
| ▲ | pcrh 2 days ago | parent [-] | | For what it's worth, the decline in use of horses was much slower than you might expect. The Model T Ford reached peak production in 1925 [0]. For an inexact comparison (I couldn't find comparable numbers for the US), the horse population of France started to decline in 1935 but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1]. [0] https://en.wikipedia.org/wiki/Ford_Model_T#Mass_production [1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/ | | |
| ▲ | hn_throwaway_99 2 days ago | parent | next [-] | | > For what it's worth, the decline in use of horses was much slower than you might expect. Not really, given that the article goes into detail about this in the first paragraph, with US data and graphs: "Then, between 1930 and 1950, 90% of the horses in the US disappeared." | | |
| ▲ | pcrh 2 days ago | parent [-] | | Eyeballing the chart in the OP and the French data shows them to have a comparable pattern. While OP's data is horses per person, and the French is total number of horses, both show a decline in horse numbers starting about 10 years after widespread adoption of the motor vehicle and falling to 50% of their peak in the mid-to-late 1950s, with the French data being perhaps a bit over 5 years delayed compared to the US data. That is, it took 25 to 30 years after Ford started mass production of automobiles for 50% of "horsepower" to be replaced. The point isn't to claim that motor vehicles did not replace horses (they obviously did), but that the replacement was less "sudden" than claimed. | |
| ▲ | hn_throwaway_99 17 hours ago | parent [-] | | > That is, it took 25 to 30 years after mass production of automobiles was started by Ford for 50% of "horsepower" to be replaced I just googled "average horse lifespan", and the answer that came back was, exactly, "25-30 years". There's a clue in that number for you. | | |
| ▲ | pcrh 17 hours ago | parent [-] | | Yes, I considered that. Someone using a horse-drawn wagon to deliver goods about town would likely not consider buying a truck until the cart horse needed replacing. The working life of a horse may be shorter than the realistic lifespan. Searching for "horse depreciation" gives 7 years for a horse under age 12, the prime years for a horse being between 7 and 12 yrs old, depending on what it is used for. I'm willing to accept the input of someone more knowledgeable about working horses, though! |
|
|
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | throw9384940 2 days ago | parent | prev [-] | | The French eat horse meat. Cattle are still present in the US... |
|
|
|
| ▲ | Den_VR 2 days ago | parent | prev | next [-] |
| If there’s more work than resources, then is that low-value work, or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive, but I'm not sure it will be societally good. |
| |
| ▲ | twodave a day ago | parent [-] | | Not low-value or it just wouldn't be on the board. Lower value? Maybe, but there are many, many reasons things get pushed down the backlog. As many reasons as there are kinds of companies. Most people don't work at one of the big tech companies where work priorities and business value are so stratified. There are businesses that experience seasonality, so many of the R&D activities get put on the backburner until the busy season is over. There are businesses that have high correctness standards, where bigger changes require more scrutiny, are harder to fit into a sprint, and end up getting passed over for smaller tasks. And some businesses just require a lot of contextual knowledge. I wouldn't trust an AI to do a payroll calculation or tabulate votes, for instance, any more than I would trust a brand new employee to dive into the deep end on those tasks. |
|
|
| ▲ | retinaros 2 days ago | parent | prev | next [-] |
| Most corporate people don't provide direct value… |
|
| ▲ | ben_w a day ago | parent | prev | next [-] |
| > 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.

They have more work to do until they don't. The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.

We still need food, farming hasn't stopped being a thing, nevertheless we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating. As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.

And now we have fast-fashion, with clothes so fragile that they might not last one wash, and yet we still spend a lower percentage of our incomes on clothes than people in the pre-industrial age did. Even when demand is boosted by having clothes that don't last, we still make enough to supply demand.

Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests. Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm

> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job. 
This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do. Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric. Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand". But at the same time, you absolutely can use an LLM to be a PM. I bet all the PMs will be able to supply anecdotes about LLMs screwing up just like all the rest of us can, but it's still a job task that this generation of AI is still automating at the same time as all the other bits. |
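The Amdahl's-law analogy mentioned above can be put in numbers (the 30% coding fraction and 10x speedup here are illustrative assumptions, not measurements):

```python
# Amdahl's law applied to a developer's job: only the coding fraction
# gets accelerated, so the total speedup is capped by everything else
# (design, review, meetings, support).

def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Speedup of the whole job when only the coding part gets faster."""
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

# If coding is ~30% of the job and an LLM makes it 10x faster:
print(round(overall_speedup(0.3, 10.0), 2))  # ~1.37x overall, nowhere near 10x
# Even infinitely fast code generation caps out at 1 / (1 - 0.3), about 1.43x.
```

This is the cap the parent comment appeals to; the counterpoint in the thread is that LLMs also eat into the non-coding fraction, which is exactly what raises the ceiling.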
| |
| ▲ | twodave a day ago | parent [-] | | I agree mostly, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do 2 messages ago unless I twist their arm. This is what they are, as a technology. They'll get better. I think there's some impact associated with this, but it's not a doomsday scenario like people are pretending. We are talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength where the scientists are trying to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change. | | |
| ▲ | ben_w 4 hours ago | parent [-] | | Yup, I think that's fair. I'm not sure how many humans know how to be genuinely innovative; nor if it's learnable; and also, assuming that it is learnable, whether or not known ML is sample-efficient enough to learn that skill from however many examples currently exist. As you say, we don't understand what we're trying to build. It's remarkable how far we got without understanding what we build: for all that "cargo cult" is seen as a negative in the 20th century onwards, we didn't understand chemistry for thousands of years but still managed cement, getting metals from ores, explosives, etc. Then we did figure out chemistry and one of the Nobel prizes in it led to both chemical weapons and cheap fertiliser. We're all over the place. |
|
|
|
| ▲ | MLgulabio 2 days ago | parent | prev [-] |
| [dead] |