| ▲ | ible 2 days ago |
| People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things humans can do to provide value for each other. The question is how individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift with it. If the benefits of AI accrue to, or are captured by, a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides. |
|
| ▲ | marcus_holmes 2 days ago | parent | next [-] |
| I'm optimistic. Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people this was a very boring job, and it made bank transactions slow and expensive. In the '50s and '60s we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), and banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs. There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually became "this meeting never happened at all because the person making the decision just told the LLM and it did it". You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true here: if we can create and scale organisations more easily and cheaply without employing all the admin staff that we currently do, then maybe we create more agile, responsive organisations that serve their customers better. |
| |
| ▲ | reeredfdfdf 2 days ago | parent | next [-] | | "I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better." I'm not sure most of those organizations will have many customers left, if every white collar admin job has been automated away, and all those people are sitting unemployed with whatever little income their country's social safety net provides. Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living. | | |
| ▲ | marcus_holmes 19 hours ago | parent | next [-] | | > Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living. Yes, that's what happens. All those people find other jobs, do other work, and that new work is usually much less boring than the old work, because boring work is easier to automate. Historically, economies have changed and grown because of automation, but not collapsed. | |
| ▲ | nopinsight 2 days ago | parent | prev [-] | | AI agents might be able to automate 80% of certain jobs in a few years, but that would make the remaining 20% far more valuable. The challenge is to help people rapidly retrain for new roles. Humans will continue to have desires that far outstrip supply for a long time to come. We still don’t have cures for all diseases, personal robot chefs and maids, or an ideal house for everyone, for example. Not everyone has the time to socialize as much as they wish with their family and friends. There will continue to be work for humans as long as humans provide value and deep connections beyond what automation can. The jobs themselves could become more desirable, with machines automating the boring and dangerous parts, leaving humans to form deeper connections and be creatively human. The transition period can be painful. There should be sufficient preparation and support to minimize the suffering. Workers will need access to affordable and effective ways to retrain for the new roles that emerge. “Soft” skills such as empathetic communication and tact could surge in value. | | |
| ▲ | Covenant0028 a day ago | parent [-] | | > The jobs could themselves become more desirable with machines automating the boring and dangerous parts Or, as Cory Doctorow argues, the machines could become tools to extract "efficiency" by helping the employer make their workers' lives miserable. An example of this is Amazon and the way it treats its drivers and warehouse workers. | | |
| ▲ | nopinsight a day ago | parent [-] | | That depends on the social contract we collectively decide on (in a democracy, at least). Many possibilities will emerge, and people will need to be aware and adapt much faster than at most times in history. |
|
|
| |
| ▲ | visarga 2 days ago | parent | prev | next [-] | | An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and bear no liability for it. AI has no skin in the game and, depending on the application, a much higher upper bound for damage. A digit read wrong in a medical transcript, and a patient dies. > There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. Managing risk can't be automated. Every project and task needs a responsibility sink. | | |
| ▲ | ipython 2 days ago | parent | next [-] | | You can bound risk on AI agents just like on an ATM. You just can’t rely on the AI itself to enforce those limits, of course; you need to place the limits outside the AI’s reach. But this is already documented best practice. The point about AI not having “skin” (I assume “skin in the game”) is well taken. I often say that “if you’ve assigned an AI agent the ‘A’ in a RACI matrix, you’re doing it wrong”. A very important lesson that some company will learn publicly soon enough. | |
| ▲ | marcus_holmes 2 days ago | parent | prev [-] | | > Every project and task needs a responsibility sink. I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions". But we've all been in meetings where there are too many people in the room, and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved. |
| |
| ▲ | sotix a day ago | parent | prev | next [-] | | > Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive.
>
> And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs. I don't mean to pick on your example too much. However, when I worked in financial audit, reviewing journal entries spat out by SAP was mind-numbingly boring. I loved doing double-entry bookkeeping in my college courses. Modern public accounting is much, much more boring and worse work than it was before. Balancing entries is enjoyable to me; interacting with the terrible software tools is horrific. I guess people who would have done accounting are doing other, hopefully more interesting jobs, in the sense that the absolute number of US accountants is in steep decline due to the low pay and the highly boring work. I myself am certainly one of them as a software engineer career switcher. But the actual work of a modern accountant has not become more interesting. It's also become the email + meetings + spreadsheet that you mentioned, because there wasn't much else for it to evolve into. | | |
| ▲ | marcus_holmes 19 hours ago | parent [-] | | I did qualify it with "most people" because of people like you who enjoy that kind of work :). I would hate that work, but luckily we have all sorts of different people in the world who enjoy different things. I hope you find something that you really enjoy doing. |
| |
| ▲ | mrwrong a day ago | parent | prev | next [-] | | > There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. It's interesting how it's never your job that will be automated away in this fantasy; it's always someone else's. | | |
| ▲ | marcus_holmes 19 hours ago | parent [-] | | I have absolutely had that job, and it sucked. I also worked as a farm hand, a warehouse picker, a construction site labourer, and a checkout clerk. Most of that work is either already automated or about to be, thankfully. |
| |
| ▲ | gverrilla a day ago | parent | prev [-] | | "benefits" = shareholder profits ++ |
|
|
| ▲ | jondwillis 2 days ago | parent | prev | next [-] |
| Workshopping this tortured metaphor: AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood. The owners of the tech need to reinvest in the hosts. |
| |
| ▲ | hephaes7us 2 days ago | parent | next [-] | | Realistically, at a certain point the training would likely involve interaction with reality (via sensors and actuators), rather than relying on the secondhand knowledge available in textual form. | | |
| ▲ | kfarr 2 days ago | parent [-] | | Yeah, I feel like the real aha moment is still coming, once there is a GPT-like thing that has been trained on reality, not its shadow. | | |
| ▲ | chongli 2 days ago | parent | next [-] | | Yes and reality is the hard part. Moravec’s Paradox [1] continues to ring true. A billion years of evolution went into our training to be able to cope with the complexity of reality. Our language is a blink of an eye compared to that. [1] https://en.wikipedia.org/wiki/Moravec's_paradox | |
| ▲ | baq 2 days ago | parent | prev | next [-] | | Reality cannot be perceived. A crisp shadow is all you can hope for. The problem for me is the point of the economy in the limit where robots are better, faster and cheaper than any human at any job. If the robots don’t decide we’re worth keeping around, we might end up worse off than horses. | | |
| ▲ | agos a day ago | parent [-] | | but that crisp shadow is exactly what we call perception |
| |
| ▲ | qsera 2 days ago | parent | prev [-] | | Look, I think that is the whole difficulty. In reality, doing the wrong thing results in pain, and the right thing in relief or pleasure. A living thing will learn from that. But machines can experience neither pain nor pleasure. |
|
| |
| ▲ | visarga 2 days ago | parent | prev | next [-] | | > What happens when there are no more hosts to donate more training-blood? LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through every conceivable task, provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world. | |
| ▲ | scotty79 2 days ago | parent | prev [-] | | There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good new human Go players. It played against itself, even discarding human source knowledge, and achieved those levels. |
|
|
| ▲ | ghssds 2 days ago | parent | prev | next [-] |
| People are animals. |
| |
| ▲ | goatlover 2 days ago | parent | prev [-] | | When horses develop technology and create all sorts of jobs for themselves, this will be a good metaphor. | | |
|
|
| ▲ | traverseda 2 days ago | parent | prev | next [-] |
| I'd be more worried about the implicit power imbalance. It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs. |
| |
| ▲ | jordwest 2 days ago | parent | next [-] | | Yeah, from the perspective of the ultra-wealthy, we humans are already pretty worthless, and they'll be glad to be rid of us. But from the perspective of a human being, an animal, and an environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless. I propose we break away and create our own new economy, and the ultra-wealthy can stay in their fully optimised, machine-dominated bunkers. Sure, maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead. | | |
| ▲ | xeonmc 2 days ago | parent | next [-] | | Charlie Chaplin's speech is more relevant now than ever before: https://www.youtube.com/watch?v=J7GY1Xg6X20 | | |
| ▲ | jordwest 2 days ago | parent [-] | | I first saw this about 15 years ago and it had a profound impact on me. It's stuck with me ever since "Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts." Spoken 85 years ago and even more relevant today |
| |
| ▲ | vkou 2 days ago | parent | prev [-] | | The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of those in those bunkers. They sure as shit won't be content to leave the rest of us alone. | | |
| ▲ | jordwest 2 days ago | parent [-] | | Yeah, I know it's an unrealistic ideal, but it's fun to think about. That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is that there's no amount of money/power/status that can satiate that fear, but somehow there's a naive belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death. | | |
| ▲ | faidit 2 days ago | parent [-] | | Going off your earlier comment, what if instead of a revolution, the oligarchs just get hooked up to a simulation where they can pretend to rule over the rest of humanity forever? Or what if this already happened and we're just the peasants in the simulation? | | |
| ▲ | jordwest 2 days ago | parent | next [-] | | I like this future, the Meta-verse has found its target market | |
| ▲ | _DeadFred_ a day ago | parent | prev [-] | | This would make a good Black Mirror episode. The character lives in a totally dystopian world making f'd-up moral choices. Their choices make the world worse. It seems nightmarish to us, the viewers. Then towards the end they pull back: the character unplugs and is living in a utopia. They grab a snack, are greeted by people who love and care about them, then plug back in and go back to being their dystopian tech-bro ideal self in their dream/ideal world. |
|
|
|
| |
| ▲ | visarga 2 days ago | parent | prev [-] | | > It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs. You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you be independent. |
|
|
| ▲ | d--b 2 days ago | parent | prev [-] |
| I was trying to phrase something like this, but you said it a lot better than I ever could. I can’t help but smile at the possibility that you could be a bot. |