| ▲ | cortesoft 15 hours ago |
| I feel like there are (at least) three main critiques of AI, and I wish we could debate them separately, because I think they each have different resolutions. The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income. The other two critiques are trickier. The second is the environmental impact of AI, and the response is difficult. Doing work to make AI more efficient, and continuing to develop cleaner energy sources, is paramount. Taxes and efficiency requirements might be a start. We have the technology to produce energy in sustainable ways, but it is expensive. Using it has to be non-negotiable if massive energy usage for AI is to continue. The last is the REAL conversation, and I don’t know the answer. How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI? I guess there is another issue, related to the last one, which is how we deal with the ability to use AI to mislead and commit fraud at scale. How do we deal with not being able to tell what was actually said or done by a human from what is AI pretending to be human? How do we avoid and mitigate AI’s ability to generate a massive amount of custom content that is used to mislead and defraud people? So much of our current mitigation strategy relies on the assumption that it takes a lot of effort and time to do certain things that can now be done instantly, thousands of times over. |
|
| ▲ | TheScaryOne 7 hours ago | parent | next [-] |
| >The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. This was the argument about robots. It did not pan out. No taxes materialized. Robots and automated machines have not shared their productivity. In fact, things like self-checkout have shifted the labor load onto the customer instead of the company. >We have the technology to produce energy in sustainable ways, but it is expensive AI datacenters should be completely sustainably self-powered. Full stop. We did not spend decades bringing down the cost of power only to have it all hoovered up by robber barons who "need" it to be the first immortal AI God. We did not install water treatment plants to bring down our water usage rates just to feed the machine spirit. >How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI? Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry. |
|
| ▲ | troosevelt 15 hours ago | parent | prev | next [-] |
| If you lost your $60,000 a year job due to this, do you really believe a basic income funded by it will make up that loss? It won't. Basic income in the US is usually proposed at $12k per year, which would add another $3 trillion to the budget. Do you think you can even get that just by taxing these companies? I don't. People who bring up basic income need to get serious about the numbers involved, and I never see that. It's not a realistic solution. |
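The $3 trillion figure can be sanity-checked with back-of-envelope arithmetic. This is just a rough sketch; the adult-population figure is a round-number assumption, and actual proposals differ on who would qualify:

```python
# Rough cost check for a $12k/year basic income in the US.
# The adult-population figure below is a round-number assumption.
ubi_per_person = 12_000        # dollars per year, as commonly proposed
us_adults = 258_000_000        # approx. US adults; eligibility varies by proposal

total_cost = ubi_per_person * us_adults
print(f"roughly ${total_cost / 1e12:.1f} trillion per year")
```

That works out to about $3.1 trillion per year, consistent with the claim above; including children would push it past $4 trillion.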
| |
| ▲ | omikun 10 hours ago | parent | next [-] | | People who complain that UBI doesn’t make mathematical sense don’t realize our current economy doesn’t make mathematical sense either. All this prosperity we in the developed world get comes at the cost of extracting wealth from the rest of the world and of governments taking on ever more debt. | | |
| ▲ | mvdtnz 7 hours ago | parent [-] | | That's an absolutely enormous claim to make with zero evidence. | | |
| ▲ | contingencies 7 hours ago | parent [-] | | The modern (social or economic) histories of China, Europe, Russia, the UK, and the US are all good case studies. In aggregate, I think they underscore the reality of the system. Every year we now have high-profile people coming out of the system screaming about how insane it is: bankers, traders, politicians, military intelligence. If you had to boil it down to a single book debunking late 20th century pax Americana international macro-economics, it's hard to go past Confessions of an Economic Hitman, though it is not a formal work. I've personally had chapter one verified by an Indonesian diplomat. Alternatively, take the quippy summary of a world-recognized capitalist, George Soros: Classical economics is based on a false analogy with Newtonian physics. |
|
| |
| ▲ | ggsp 14 hours ago | parent | prev | next [-] | | Fair warning: I’m quite ignorant in terms of economics, so this is a naïve way of looking at it. The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”? | | |
| ▲ | andriamanitra 12 hours ago | parent [-] | | The cost of living would certainly rise somewhat, but the point is that UBI is redistributive: the same absolute amount to everyone raises low incomes by a larger percentage than high incomes. Long-term effects are hard to predict, but in the short term it would mean the poor doing slightly better while the middle class is slightly worse off. The non-working (owning) class would be mostly unaffected as assets are insulated from inflation. Another factor to consider is that putting more money in the hands of people in need of <thing> means producing <thing> becomes more profitable, and thus more investment and resources are directed towards <thing>. If we assume the economy works the way the proponents of capitalism say it does, this should eventually drive the cost of living back down. But personally I think the biggest benefit of UBI would be the reduction in the number of people who are desperate enough to accept work – both legal and illegal – that is unfairly compensated, inhumane and/or immoral. The existence of that class of people is the driving force behind many societal problems. Exorbitant amounts of resources are wasted treating the symptoms of those problems instead of fixing the root cause. |
| |
| ▲ | 14 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | hashmap 15 hours ago | parent | prev | next [-] | | You never see it how. Like in terms of raw resources or political will? | | |
| ▲ | troosevelt 15 hours ago | parent [-] | | I mean the numbers. 12k per year is peanuts. You cannot live off that, and to do it we'd be nearly doubling the budget (that's old data, it's probably not that portion of the budget anymore). That 12k doesn't include healthcare; it doesn't include a lot of things. It's basically ensuring that people live well below the poverty level, and for what? I just don't get how the numbers work, even if it were politically feasible. I'd much rather have free healthcare and the other amenities other countries have. Here in the US, if you lose your job there is virtually nothing between you and the streets besides family and friends. I'm facing this right now. I cannot get a job in tech, which means restarting my career. Getting a job right now is not easy in any field, especially not one paying anything like a living wage. If I did not have my parents I would be on the streets right now; thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help, it certainly wouldn't pay for housing. It's rough out there. |
| |
| ▲ | animegolem 15 hours ago | parent | prev | next [-] | | And even if you did get the 60k and can never find work again, are you gonna be happy about the next-door neighbor working for 120k and getting his 60k on top? | | |
| ▲ | site-packages1 15 hours ago | parent | next [-] | | Well I can tell you that I work 40+ hours a week and am very unhappy my neighbor has a more expensive house than me. Someone should do something! | |
| ▲ | abakker 14 hours ago | parent | prev | next [-] | | All the proposals I’ve seen would set the marginal tax rate on the 120 so high that his earnings would end up more like 40k from the 120k job and then he gets his 60. So, still some benefit to working, but a very progressive tax rate on higher earnings. Not sure I agree with this, but that is what I’ve seen. | | | |
| ▲ | Aurornis 14 hours ago | parent | prev [-] | | Your neighbor would get $60K UBI but their tax bill would go up by $80K because the government needs tax revenue to pay the UBI. For high levels of UBI it’s not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good unless you do math. |
| |
| ▲ | stale2002 14 hours ago | parent | prev | next [-] | | > do you really believe a basic income funded by it will make up that loss? It won't. Almost definitionally it would. If society is saving a bunch of money on all that saved labor, that extra value is still there; it just needs to be appropriately redistributed. | |
| ▲ | bobsmooth 14 hours ago | parent | prev | next [-] | | >Do you think you can even get that just taxing these companies? If we go back to a 60% corporate tax rate, for sure. | | |
| ▲ | Aurornis 14 hours ago | parent | next [-] | | You could put a 100% tax on revenues (not profit) of AI companies and it would come out to a low couple hundred dollars per person per year right now. A 60% corporate tax rate wouldn’t get to the levels needed for UBI proposals either. | |
| ▲ | what 12 hours ago | parent | prev [-] | | They’ll just find a way to have $0 of profit. You have nothing to tax. |
| |
| ▲ | guzfip 15 hours ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | happytoexplain 15 hours ago | parent | next [-] | | This is one of the most horrifying comments I've ever read on this website. It's practically a dare to engage in civil war or violent revolution. People fundamentally experience life as relative - as changes. You can't "deprogram" intrinsic human nature. You can just wait 80 years for everybody who's not used to the new hell to die. | | | |
| ▲ | troosevelt 15 hours ago | parent | prev | next [-] | | Have you lived on 12k? 24k puts you near poverty level. $1k per month will cover food expenses, it won't cover transport, shelter, and certainly not medical. On 12k per year you have enough money for food and praying that an emergency doesn't happen. It's hard enough living on 40k, and I'm not even in a place where costs are expensive. | | |
| ▲ | krapp 14 hours ago | parent | next [-] | | UBI will never happen in the US so it's a pointless argument. Americans will have plenty of pawn shops and short-term loan services to help them, though. | |
| ▲ | hackable_sand 13 hours ago | parent | prev [-] | | I'm literally doing it right now It is kinda funny to see you guys petrify at the thought of people living in poverty, pretend you care, and then use us as a political foil in your useless debates. Where's the money you owe us? | | |
| ▲ | happytoexplain 12 hours ago | parent [-] | | How is not wanting to live in poverty using the poor as a foil? How is it hypocritical/fake to care about people who are in situations that I don't want to be in? Isn't that just logical? |
|
| |
| ▲ | bobthepanda 15 hours ago | parent | prev | next [-] | | “Let them eat cake,” or whatever. Telling a bunch of people they should accept being poorer has always worked out historically. | | |
| ▲ | infamouscow 13 hours ago | parent [-] | | I've only been slightly joking about starting a company that sells rope and guillotines. |
| |
| ▲ | JumpCrisscross 15 hours ago | parent | prev | next [-] | | > $12k a year is plenty. You’ve just been raised above your natural standard I get where you're coming from. But this is politically unworkable, and for good reason. If AI increases productivity, that means more wealth, which means living standards should go up. | | |
| ▲ | AshleyGrant 14 hours ago | parent [-] | | > $12k a year is plenty. You’ve just been raised above your natural standard > I get where you're coming from. You do? Have you priced out health insurance lately? I have. Insurance on HealthCare.gov for my partner and me would be $1700/month for what amounts to catastrophic coverage. It had around a $20k deductible and covered nothing other than an annual physical prior to hitting the deductible. With $2k/month to work with between us, I guess we have to somehow find a place to live and eat on the remaining $300 as we pay for our functionally worthless health insurance, since there is no way in hell we could afford to pay the deductible. | |
| ▲ | JumpCrisscross 14 hours ago | parent [-] | | Their numbers are wrong. But their fundamental argument, I believe, is degrowth. That we are living beyond our means and need to lower our expectations of living standards to live sustainably. It's a philosophically appealing argument. It's also wrong, unless you're comfortable with the inevitable violence and likely population destruction that would need to ensue from an honest degrowth agenda. |
|
| |
| ▲ | smeej 15 hours ago | parent | prev | next [-] | | It didn't even occur to me that this might not be sarcasm until I read the other comments. Still fighting to hold onto that assumption. | |
| ▲ | omikun 10 hours ago | parent | prev | next [-] | | You mean 12k a year with free housing and free health insurance? | |
| ▲ | 15 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Eupolemos 15 hours ago | parent | prev | next [-] | | These years, knowing what is tongue-in-cheek can be very difficult. Many of us see the current US administration as being either real life modern nazis or heavily influenced by such. So I was wondering; are you being serious? | |
| ▲ | CodeCompost 15 hours ago | parent | prev | next [-] | | Your basic income is 12k? Congratulations, your rent just went up 12k a year. | | |
| ▲ | jazz9k 14 hours ago | parent [-] | | This is the part most people don't understand or intentionally ignore. It will accelerate inflation and 12K will be worth even less than it is now. The natural progression of this is always government price fixing, which always ends up in complete destruction of the economy. |
| |
| ▲ | jazz9k 15 hours ago | parent | prev [-] | | "lifestyle expectations" $12k might be nice in parts of Asia, but when the average rent is $1200/month, it doesn't go very far anywhere in the US. |
| |
| ▲ | pydry 15 hours ago | parent | prev | next [-] | | Just as hyperloop was designed as a techbro pie-in-the-sky notion to kill high-speed rail, basic income as an idea is designed to kill more realistic attempts to shore up welfare, e.g. * A job guarantee like we had during the Great Depression * Lowering the retirement age * Raising the minimum wage * Expanding Medicare to everyone It's worth remembering that if AI really can do everyone's jobs then it'll be wildly deflationary so there's no need to worry about pesky government spending on this stuff or paying people more. Spend spend spend, baby! Ah, you're worried it can't do that? Maybe it is mostly smoke and mirrors then. | |
| ▲ | WorldMaker 9 hours ago | parent | next [-] | | The historic origins of UBI are from political parties that wanted most of those same things, too, especially raising the minimum wage and expanding medicare to everyone. A strong minimum wage makes UBI more attractive. More people will want jobs in addition to UBI. UBI is also seen as a market force to naturally drive the minimum wage up, because UBI offers workers more choices: more opportunities to build a startup or take a sabbatical instead of working 40 hours. The labor market has to compete with that "opportunity cost" in ways it doesn't need to care about today. It would increase liquidity in the labor market and, in terms going all the way back to Adam Smith, make the market more free. Wages would better reflect demand for the work if laborers had more choices at more times in their lives about where and how much to work. Medicare for Everyone and Universal Health Care make UBI simpler. Health risk is always going to be variable, and insurance-like risk pooling will always be a good idea for society to defray costs in bad years from surpluses in good ones, and to defray costs from unhealthy people by considering how many people are kept healthy. UBI could be designed to try to cover much of health care, but it is never going to be as efficient as a pooled single payer. If a country already has Universal Health Care, the conversations about UBI get a lot simpler. It is a lot easier to sell as a flat universal grant. Your health care can be provided by a complex risk pool and smart accountants doing a lot of smart math on your behalf. Your UBI can be just a flat number. Simpler: you can think about how you spend your UBI without having to consider your predicted health outcomes in that period of time. UBI's flat universal value can be set on benchmarks that don't need complex amortization schedules and risk analysis. 
The Canadian Social Credit Party, formed to espouse UBI, was one of the keys to building Canada's Universal Health Care, and their priority was that first, then UBI. That still seems the best priority order to me. | |
| ▲ | fluoridation 15 hours ago | parent | prev | next [-] | | Job guarantees and higher minimum wages are just UBI with extra steps, while lowering retirement age is just conditional UBI by another name. If you're giving people more money in exchange for nothing (or nothing of any value to anyone, as in the case of a job guarantee), it's effectively indistinguishable from UBI. | | |
| ▲ | JumpCrisscross 15 hours ago | parent | next [-] | | > Job guarantees and higher minimum wages are just UBI with extra steps, while lowering retirement age is just conditional UBI by another name The extra steps reduce costs and encourage offsetting production. Those are important steps! | |
| ▲ | pydry 15 hours ago | parent | prev [-] | | "When our grandparents built the hoover dam, the lincoln tunnel and the triborough bridge with a job guarantee that was just money for nothing - UBI with extra steps." ^ this would be an accurate representation of your opinion then? | | |
| ▲ | fluoridation 14 hours ago | parent [-] | | That job guarantees exceptionally produce useful things doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth. | | |
| ▲ | JumpCrisscross 14 hours ago | parent [-] | | > doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth One could say the same thing about all the little art projects a hypothetical society on UBI might busy itself making. The pertinent difference seems to be one about scale and co-ordination. Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption. | | |
| ▲ | fluoridation 13 hours ago | parent [-] | | >Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption. Creating busywork doesn't strike me as a particularly worthwhile endeavor, compared to idleness. | | |
| ▲ | JumpCrisscross 10 hours ago | parent [-] | | > Creating busywork doesn't strike me as a particularly worthwhile endeavor Make work isn’t the same as busywork. As another comment mentioned, the Hoover Dam isn’t useless busywork. | | |
| ▲ | fluoridation 9 hours ago | parent [-] | | And as I mentioned, the Hoover dam is also not the typical example of the kinds of projects guaranteed job programs generate. |
|
|
|
|
|
| |
| ▲ | spwa4 14 hours ago | parent | prev [-] | | So the problem with 3 out of 4 of your challenges is that, right now, they mean young people need to work more to achieve them. Money is an issue, but money by itself cannot solve it; it really needs to be backed with more people working. That's not going to happen; in fact, fewer people will work. So without AI, the path forward is obvious: those 3 will become worse. Lowering the retirement age, raising the minimum wage, and expanding medicare won't happen without AI. They can't. We already are reasonably close to a job guarantee. If unemployed people would accept any job, unemployment would drop by a lot. Not to zero, obviously, but a lot. Unemployment is also pretty low by historical standards, so fixing unemployment with a job guarantee can't fix much. We'll need something else. > It's worth remembering that if AI really can do everyone's jobs then it'll be hyperdeflationary so no need to worry about pesky government spending on this stuff. So yeah, I disagree. If you're going to assume AI will just jump to how capable it'll be 100 years from now, then you need to think a bit deeper. What AI effectively does is provide capital-based labor. You buy a robot. The robot costs a lot, but operational expenses are marginal: energy and (maybe) "tokens". Add solar power, and let's say local AI becomes a thing, at least for normal robots, and you need nothing other than the initial cost of the robot. Okay, so this will mean everything can be staffed with tens of thousands of these robots. Remote mine? No problem. 500 robots in your house? Why not. Cleaning very large facilities? Not a problem. Farm hundreds of square kilometers? Fine. Dig a canal to avoid the Strait of Hormuz and just do it with shovels? Let's get to it. AI can be a universal machine that can do anything labor can achieve. 
Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out. | | |
| ▲ | mschuster91 14 hours ago | parent [-] | | > Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out. Historically, that "we'll figure something out" has usually meant the economical wipeout of large parts of the population, sooner or later followed either by some epidemic event or other "act of god" (like fires) that was a consequence of squalor and poverty, or by some sort of war to thin out the herd. I'd prefer if history would not repeat itself for once. | | |
| ▲ | spwa4 12 hours ago | parent [-] | | > Historically, that "we'll figure something out" has usually meant the economical wipeout of ... Uh, historically everything has usually meant the economical wipeout of large parts of the population. It still means that in most third world countries. Economic power is not the huge differentiator here. |
|
|
| |
| ▲ | ip26 14 hours ago | parent | prev | next [-] | | If companies are faced with the choice between: - employ you at 60k/yr - replace you with a machine that costs a lot of money, and also send you UBI of 60k/yr It should be obvious the latter is not an option that is ever going to happen. | | |
| ▲ | xboxnolifes 13 hours ago | parent | next [-] | | What if the machine in this context is 3x as productive as you? | | |
| ▲ | hettygreen 4 hours ago | parent [-] | | Then it replaces 3 people's jobs, requiring paying 3 UBI's in this thought experiment. |
| |
| ▲ | JeremyNT 10 hours ago | parent | prev | next [-] | | The solution to the subsequent devaluation of labor, and ability for tech oligarchs to pocket the cash instead, will not be found in capitalism. Unless we are all to become serfs, a new way to distribute resources needs to be on the table. UBI is a salve, offered to keep victims of the system out of abject poverty. It is too little, too late. | | |
| ▲ | fireflash38 28 minutes ago | parent [-] | | We are returning to feudalism, with a cyberpunk spin on it. You will not own anything. You don't even really own your tech now. The writing is on the wall: you will not be allowed to modify what you own. |
| |
| ▲ | mschuster91 14 hours ago | parent | prev [-] | | The problem is, companies will go for the third route: hire a company in India to launder AI. It has already worked out once with the offshoring wave. | | |
| |
| ▲ | Lerc 14 hours ago | parent | prev [-] | | Like the post above says, there are multiple issues at play with AI. The same can be said about universal income. The pay levels are not comparable because you are also recompensed with time. You may choose to spend your time in a number of ways that you find rewarding that also reduce your expenses. Making your own meals, clothes, furniture, beer, wine, etc. There are a lot of people who would enjoy doing these things but are too time-poor to do so. Your expenses also reduce by the amount you must spend in order to make yourself available to work. Travel, work clothes, medical certificates when sick. You can spend a lot in order to be paid. If you want a world with a reasonable distribution of income levels, it stands to reason that those receiving more right now should receive less. Certainly, the absolute wealthiest should reduce the most, but on a global scale, it is hard to defend that those in the top 10% of incomes should retain their position. How much a universal income should pay is itself a variable to be argued. I can certainly see it being argued for at a lower level than ultimately desired, since something is better than none. In a sense, the end state of a universal income in an equitable world would be remarkably simple: the income available divided by the world's population. Those receiving more than their share now may not be happy about it, but I'm not sure they have a right to their larger portion either. |
|
|
| ▲ | guyomes 2 hours ago | parent | prev | next [-] |
| > How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI? Just as food for thought, looking back into history: during the late 1920s, mass production had a critical impact on Art Deco [1]. Artists were divided on the question of whether mass-produced art (using new industrial methods) could have a quality similar to hand-crafted art. It is clear that different people will have different opinions on the subject. The technology is not there yet, but one example of mass production from AI would be book adaptations into movies. I'm sure that there are many other examples, hard to predict, that might empower people, degrade art quality, improve art quality, divide people, or maybe bring people together. [1]: https://en.wikipedia.org/wiki/Art_Deco#Late_Art_Deco |
|
| ▲ | JumpCrisscross 15 hours ago | parent | prev | next [-] |
There is also a likeability problem. Altman and, shockingly to a lesser degree, Musk have terrible brands. When folks see those people at the top of these companies, folks who have been publicly saying they're going to cause massive job losses and cause human extinction or whatnot, they're going to hate the companies irrespective of the actual risk of job losses or environmental impacts. |
| |
| ▲ | throwatdem12311 14 hours ago | parent | next [-] | | Why does Dario get off the hook here? He also comes off like a greasy asshole 99% of the time. | | |
| ▲ | happytoexplain 13 hours ago | parent | next [-] | | Virtually no "normal people" know who he is. I don't think most programmers I know even know who he is. They just know "Altman" and "Anthropic". | |
| ▲ | JumpCrisscross 14 hours ago | parent | prev [-] | | > Why does Dario get off the hook here? I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.) |
| |
| ▲ | keybored 5 hours ago | parent | prev [-] | | What a blessing that they don’t have an Obama frontman for their schemes. |
|
|
| ▲ | frm88 8 hours ago | parent | prev | next [-] |
| The fourth aspect to discuss is how do we want to restrict the influence of AI companies on politics? Will we allow the CEOs to implement Thiel's vision of a world run as a company with CEOs at the top via massive monetary influence on political decision making, effectively abolishing democracy? If they really manage to replace 50% of the workforce with AI, their influence over everything from regulation to elections to social security networks as well as foreign policy will be enormous. |
|
| ▲ | Aurornis 14 hours ago | parent | prev | next [-] |
| > The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income Every call for UBI should be qualified with two estimates: 1) How much money you think UBI will pay out 2) How much money you think the tax will generate Creating a UBI program with AI taxes sounds like a clean solution until you do the math. If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person. So a 100% tax on AI plans would allow us to give a UBI of about 80 cents per day. Even 10X the revenues wouldn’t bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services for half the price to everyone else in the world. |
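The per-person arithmetic above can be sketched out directly. The $100B revenue estimate is the comment's own; the 340 million population figure is a round-number assumption:

```python
# Back-of-envelope: a 100% tax on ~$100B of annual AI revenue,
# divided evenly across the US population.
annual_ai_revenue = 100e9      # dollars; the comment's estimate, "a little high"
us_population = 340e6          # round-number assumption for the US

per_month = annual_ai_revenue / us_population / 12
per_day = annual_ai_revenue / us_population / 365
print(f"${per_month:.2f} per person per month, ${per_day:.2f} per day")
```

This lands at roughly $24.50 a month, or about 80 cents a day, matching the figures in the comment.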
| |
| ▲ | cortesoft 14 hours ago | parent [-] | | This is based on the assumption that AI is going to take all our jobs. If this is true, then as more jobs are absorbed by AI, the revenue would increase. | |
| ▲ | Aurornis 14 hours ago | parent [-] | | You’re assuming that this AI will be in the same taxable jurisdiction as the people whose jobs were replaced. The work that is most replaceable by AI is work that is mostly digital. That work most easily moves to another country. When the work is replaced by AI you can relocate it to another country much more easily than when you have to relocate workers. |
|
|
|
| ▲ | TrevorFSmith 13 hours ago | parent | prev | next [-] |
| I think you're missing one of the major reasons people are against "AI": the jerks at the top. When obviously nefarious people are lining their pockets and not bothering to even pretend to care about the people around them, it's no surprise they're hated. |
|
| ▲ | zenethian 4 hours ago | parent | prev | next [-] |
You forgot the part where they stole literally everyone’s copyrighted works, trained on them, and have not been sanctioned at all for it. |
|
| ▲ | pj_mukh 14 hours ago | parent | prev | next [-] |
I don't think the last two critiques are good critiques at all. The environmental impact is a function of our energy sources, not our energy uses. Complaining about energy and water when we have effectively unlimited energy beamed down to us on a planet that is 70% water seems silly. And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a truly unique piece, you need to go to a real artist. Always. The real main critique concerns AI jobs that are a one-to-one replacement: your taxi driver, your dock worker, etc. I don't think UBI is a viable solution (I used to), but nothing replaces the community and status that a real job gives you. This is going to be a tough one. |
|
| ▲ | ashley95 14 hours ago | parent | prev | next [-] |
| > The first is the fear of job loss, and I feel like this is the most straightforward to deal with. In the same way that it was straightforward to deal with job loss from the industrial revolution, or when the US shipped away all its manufacturing capability? |
| |
▲ | cortesoft 13 hours ago | parent [-] | | I mean, kind of? It was fairly straightforward, and unemployment and poverty continued to decrease as those events occurred.
|
|
| ▲ | zozbot234 11 hours ago | parent | prev | next [-] |
| The main critique of AI is that it's a dumb hallucinating parrot. It can't do genuine human quality work at all, outside of extremely narrow domains like basic translation and copyediting. Even for Q&A, while it can be useful by quickly accessing a huge storehouse of learned knowledge, the vulnerability to hallucinations means that human expert verification will always be required. |
| |
▲ | card_zero 6 hours ago | parent [-] | | I'll note that there can be multiple main critiques coming from an incoherent set of viewpoints, since this is public opinion we're talking about. Between "AI doing creative work", if you believe, and "fraud", there's all the low-key filler material that's sub-creative and sub-fraudulent. There's a similarity between the phrase "it was made with AI" and phrases like "I didn't bake your cake myself, it came from a store" or "sorry, it's just a cheap plastic one". So part of AI's image is that it's a flourishing new source of disappointment.
|
|
| ▲ | oytis 14 hours ago | parent | prev | next [-] |
| Universal basic income is not an adequate replacement for a good career. Universal unconditional prosperity might be one, but it's not clear whether AI can really do that. |
|
| ▲ | foogazi 14 hours ago | parent | prev | next [-] |
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. How much UBI do you want from this AI tax? I don’t think they’d give me what I want
|
| ▲ | ambicapter 15 hours ago | parent | prev | next [-] |
> Doing work to make it more efficient Making it more efficient will probably >>increase<< the total energy devoted to AI, not reduce it. See Jevons paradox.
|
| ▲ | operatingthetan 14 hours ago | parent | prev | next [-] |
| I think you may be going too far, as in your critiques assume the tech is further along than it actually is. There are three fundamental problems for mass AI adoption/AGI: 1. Lack of memory/continuity 2. Lack of agency 3. Lack of self-awareness Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem: 4. Lack of compute To get anywhere near AGI we need massive context windows. The whole thing is a mess. |
| |
▲ | neonstatic 13 hours ago | parent | next [-] | | I think people really confuse their imagination and expectations with reality. There's so much talk about AGI and mass layoffs. Then there is my experience. I was talking to Claude and ChatGPT, trying to fix an issue with a simple function in Rust that returns a boolean depending on day of week and time of day. The logic looked ok to me, but tests were failing. Notably, my real-world-data-derived tests were succeeding, while the brute-force/comprehensive tests written by Claude were failing. I wanted those "just to be sure". Both Claude and ChatGPT were spinning their wheels, introducing fixes, then undoing prior fixes, and so on and so forth. They also updated tests. We were going from one failure to another, while they confidently reassured me that "this is the fix", that they had found the "crucial bug", etc. Turned out my logic was correct from the beginning. My tests were correct. Claude's tests were broken. I realized this by writing my own brute-force test. Just a simple loop with asserts and printlns to see what was failing. I did what the machine was supposed to do for me. In less than 5 minutes I fine-tuned the test to actually check what it was supposed to be checking and voila. The "fast" thinking machine episode took me 2 hours and only produced frustration. Sorry, I should learn to speak the language - AI reduced my development velocity :) The only poverty I see coming is from the collapse of quality after these dumb machines are used to replace people who actually know what they are doing. | | |
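The "simple loop with asserts" the commenter describes can be sketched roughly like this (in Python rather than Rust for brevity; the `is_open` rule is hypothetical, standing in for the story's unspecified day-of-week/time-of-day function):

```python
# Hypothetical stand-in for the function in the story: "true" on
# weekdays (Mon-Fri) between 9:00 and 17:00. Not the commenter's real code.
def is_open(weekday: int, hour: int) -> bool:  # weekday: 0=Mon .. 6=Sun
    return weekday < 5 and 9 <= hour < 17

# Brute-force test: enumerate all 7*24 (weekday, hour) pairs and compare
# against an independently spelled-out expectation, printing any mismatch.
def brute_force_check() -> list:
    weekend = {5, 6}                 # Sat, Sun
    open_hours = set(range(9, 17))   # 9:00 up to but not including 17:00
    failures = []
    for weekday in range(7):
        for hour in range(24):
            expected = weekday not in weekend and hour in open_hours
            actual = is_open(weekday, hour)
            if actual != expected:
                failures.append((weekday, hour, expected, actual))
                print(f"FAIL weekday={weekday} hour={hour}: "
                      f"expected {expected}, got {actual}")
    return failures

assert brute_force_check() == []  # all 168 cases agree
```

A few minutes of exhaustive enumeration over a tiny input space like this is exactly the kind of check the commenter ended up writing by hand.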
▲ | operatingthetan 10 hours ago | parent [-] | | And if the current models really are so great, why do we need a massive hype-train each time the number goes up by 0.1?
| |
| ▲ | SpicyLemonZest 14 hours ago | parent | prev | next [-] | | All three of these problems are thoroughly solved by widely available tools. | | |
| ▲ | operatingthetan 14 hours ago | parent [-] | | They are? Is your LLM ready to run your organization without further input from you or anyone? Do you realize that "memory" requires eating your hilariously small context window? Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns? | | |
| ▲ | 14 hours ago | parent | next [-] | | [deleted] | |
▲ | SpicyLemonZest 14 hours ago | parent | prev [-] | | That seems like an unreasonably high standard. I like to think that I have memory, agency, and self awareness, but I'm not ready to run my organization without further input from anyone. > Do you realize that "memory" requires eating your hilariously small context window? I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can load it into context on demand when it's needed for some problem or another, and commercially available systems do. | |
▲ | operatingthetan 14 hours ago | parent [-] | | >memory, agency, and self awareness The LLM currently has only the illusion of these things. Hence the bubble. I know that you (or anyone) as a human being don't have a mere illusion of these things. This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world. Your LLM does not actively engage with the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say, "oh hey SpicyLemonZest, I was thinking and had an idea the other day", because it has nothing between each query.
|
|
| |
| ▲ | sumeno 14 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | operatingthetan 14 hours ago | parent [-] | | A personal attack is not necessary. You don't seem to understand my perspective at all, please read some of my other comments. |
|
|
|
| ▲ | schoen 15 hours ago | parent | prev | next [-] |
| The concern I hear the most (which I don't think is common among the general public) is the existential risk one (that an AI may be created that drastically exceeds human intelligence, and that it may accidentally be incentivized to take actions that destroy most or all of human civilization). |
| |
| ▲ | JumpCrisscross 15 hours ago | parent | next [-] | | > concern I hear the most (which I don't think is common among the general public) is the existential risk one Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff. | | |
| ▲ | salawat 8 hours ago | parent | next [-] | | The "Alignment Problem" is already here. We just call it Corporate Governance. We happen to be failing at it massively right now. | |
| ▲ | SpicyLemonZest 14 hours ago | parent | prev [-] | | The "alignment problem" as traditionally understood assumed a different path to AI development, where the best AIs wouldn't primarily operate on a substrate of human language. If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem, and it seems no less likely today than it did in 2023. | | |
| ▲ | JumpCrisscross 14 hours ago | parent [-] | | > If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument. And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively. | | |
| ▲ | SpicyLemonZest 13 hours ago | parent [-] | | I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs? |
|
|
| |
| ▲ | keybored 5 hours ago | parent | prev [-] | | In that sense the general public is less superstitious than many technologists. Some of the general public might anthropomorphize too hard. Which is pretty tame compared to the belief of the alien AI intelligence sprouting and killing us accidentally or intentionally. As far as the paperclip problem is concerned, we’ve already had that problem for a long time now in the form of good old fashioned human institutions. |
|
|
| ▲ | richardw 13 hours ago | parent | prev | next [-] |
| > The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income Problem for jobs is that there are 200 countries and all the earnings will go to a few. Universal basic income for everyone? Or just the US? Who gets to keep their house locations in a new fair world? The person whose parents bought in the right place 50 years ago? Who pays the money these models earn, if nobody clicks ads or does a job? What is income for if we don’t work and can just ask the AI for everything we want? What happens when the super smart AI comes up with “better” (more fair, consistent, etc) answers than you think you have to questions like the above? What if they end up socialist? Do we force it (and invite risk it escapes and fights us for the greater good) or give in to the presumably more thorough reasoning? |
|
| ▲ | retired 15 hours ago | parent | prev | next [-] |
Needing fewer offices, fewer people driving to those offices, less A/C and heating for those offices, and fewer resources building those offices could offset the energy usage of AI.
| |
| ▲ | calgoo 14 hours ago | parent | next [-] | | We can just turn all the office buildings into datacenters, they already look like heating vents! cover them in solar panels on the outside to cover the windows, and done! | |
| ▲ | cortesoft 14 hours ago | parent | prev | next [-] | | The people still need to be somewhere, so while commuting could be reduced I am not sure about heating/cooling usage. | | | |
▲ | neonstatic 13 hours ago | parent | prev [-] | | Remote work accomplishes all that, as the Covid days proved.
|
|
| ▲ | insane_dreamer 5 hours ago | parent | prev | next [-] |
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income. Nice, but completely unrealistic. The whole reason AI is/will be adopted by companies like wildfire is to cut costs and increase profits. If they have to pay taxes equivalent to what they were paying in labor (or anywhere close to that), then adopting AI gains them nothing. Business will never agree to it. So this will never happen unless there is some sort of social revolution that completely remakes the system.
|
| ▲ | danaris 6 hours ago | parent | prev | next [-] |
Your first critique has a massive hole in it, because it assumes that every person whose job is replaced by "AI" will actually have that job done as well by the "AI" as it was by the human. To the extent this is going to happen at all, that assumption is laughably false. Yes, LLMs can answer some questions well (but unreliably) and, with the right setups, can be rigged to perform some tasks well (but unreliably). There is no way they are ready to take over a single full-time job. If any employer tried, the number of errors in the performance of that job would jump by a huge amount, because LLMs are not reliable and cannot be made so.
|
| ▲ | Tyrubias 15 hours ago | parent | prev | next [-] |
I understand your points, but I think what scares people is that the solutions you propose are disregarded by our politicians. At least in the US, both politicians and the large donors funding them seem to be more and more allergic to anything resembling a universal basic income, and they do their best to scare people away with fearmongering about “communism”. The US is also doing a hard U-turn away from environmental protection and is trying to frame environmental conservation as radical and harmful. Other countries might be doing better on these fronts, but it’s definitely not a good sign that the US doesn’t seem to be on board with your first two solutions. In the more immediate run, I think the concern is that AI will reduce the ability of workers to collectively bargain and thereby grant the wealthy oligarchs even more control over their workers’ lives.
| |
▲ | cortesoft 15 hours ago | parent | next [-] | | I completely agree that governments and power brokers will disregard these solutions unless forced. However, they will also disregard any attempt to slow down or halt AI progress in general, so it isn't like the people wanting to end AI in general are any more likely to succeed than those wanting to do what I propose. I personally feel my suggestions would be slightly more feasible to gain support for than trying to stop AI completely. The power brokers in control of AI currently certainly aren't going to stop developing and pushing AI, but they might be convinced that sharing the wealth is the only way to avoid massive revolt in the long run. While it is conceivable that the wealthy wouldn't need the masses for labor like they do now in the AI future, they still need to not be killed in a massive uprising when 90% of the population is unemployed and starving. While I know a lot of people think the plan is just to kill off that part of the population, that is not that easy to do even with an army of AI robots, and it would likely be cheaper and easier to just share a bit of the productivity. I don't think it will be trivial, but I don't think it is impossible. | |
| ▲ | JumpCrisscross 15 hours ago | parent | prev [-] | | > politicians and the large donors funding them seem to be more and more allergic to anything resembling an universal basic income UBI has been a major donor priority, at least on the left. |
|
|
| ▲ | BrenBarn 7 hours ago | parent | prev | next [-] |
There is another critique that is not specific to AI but I think is bigger than all of these: that a relatively small number of large companies, and the small number of very wealthy people who control those companies, have an outsize influence on many aspects of society. AI is the poster child for this right now, but tech companies in general are also reviled, and more generally all kinds of companies (media, fossil fuels, etc.) are targets of opprobrium. From this perspective, the main irritation of AI is that it is the biggest, most intrusive case of "some rich guy is messing with my life". This is driven largely by the willingness of a small number of rich people to lose large amounts of money shoving AI down everyone's throats in the hope that that will eventually lead to them recouping those losses. I believe a significant amount of AI criticism is really about this, and that means we need to resolve the overall issues of wealth inequality and economic skewing. People would be much less angry about AI if its development and ownership were more diffuse, and if the patterns of its use were more directly connected to its current observable abilities, rather than based on what some group of insiders thinks about how much its stocks may go up in the future.
|
| ▲ | synecdoche 14 hours ago | parent | prev | next [-] |
| UBI drives inflation. All other effects follow from that. |
| |
| ▲ | cortesoft 13 hours ago | parent [-] | | I am not sure if inflation will work exactly the same in a world where AI/robots do all the work. Inflation is driven by scarcity. More demand for a fixed/limited resource drives up the price. Historically, every good and service humans bought followed this pattern, so we didn’t even have to consider an alternative. Already in our current economy, however, we have seen a good portion of our economy shift to things that do not have this characteristic. For example, take something like a video streaming service. The marginal cost for additional demand is small enough to be almost negligible; if everyone in the world decided they wanted a Netflix subscription, there wouldn’t suddenly be a shortage of streams or a run on episodes of The Great British Bake Off. They would have to build more datacenters, but the cost per additional user is tiny compared to almost every other traditional good that came before. If AI and Robots start doing all work, then this would spread to more of the economy. The increase in productive capacity would severely reduce the limitations that have historically driven inflation. We obviously have to invest in building robots and AI, but once we have enough robots they would be making more of themselves and we would be limited by natural resources, but we could use robots to get more of those, too… and we could focus on clean energy, since we would have plenty of robots to do that work, too. |
|
|
| ▲ | bdangubic 13 hours ago | parent | prev | next [-] |
| USA will never have UBI, period. So any idea that includes any mention of is an absolute non-starter. Outside of the USA, perhaps, but for us that is never happening. |
|
| ▲ | spwa4 15 hours ago | parent | prev | next [-] |
| In my opinion the main, and really only, issue: AI is a necessity. Everything from war (including defense departments), to jobs, to rental advertisements, to food packaging, to restaurant reviews, to news, to education, to programming, to architecture, to politics ... will have to change due to AI. Not changing them is not really an option. Everything needs to be figured out here. A lot of this will both cost money AND require people to change their jobs, their investments, their equipment, ... And they hate it. Everyone, including governments will have to adapt. And to add insult to injury, everything comes from the US and it's really expensive. |
|
| ▲ | paganel 15 hours ago | parent | prev | next [-] |
| > How do we treat AI creative work? We erase it and call out the ghouls “creating” that shit, simple. They deserve being called out for creating shit and poisoning our minds. |
|
| ▲ | happytoexplain 14 hours ago | parent | prev | next [-] |
| No, you're backwards. The first point is definitely the most important and most tricky. UBI is a dangerous distraction in this context. It's a mammoth cost to achieve an impoverished quality of life. It may be worth implementing in general, but it absolutely must stay out of the conversation about AI. It's like if the ruling class started announcing that they would like to imprison us all, and your "discussion" about the problem revolved around how we can make our future jail cells feel as nice as possible. We are allowed to regulate businesses. We simply don't. |
| |
| ▲ | cortesoft 14 hours ago | parent | next [-] | | What sort of regulation do you think is needed for this? | | |
| ▲ | SpicyLemonZest 13 hours ago | parent [-] | | I think frontier AI research should be outlawed until such time as there's a broad consensus on how society ought to deal with it. This would have to be coordinated internationally to be effective, but I think that would be achievable if the US sent a credible signal by forcibly shutting down any one of the major labs. | | |
| ▲ | cortesoft 12 hours ago | parent | next [-] | | Even supposing we could somehow get the political will to do this, how would you write such a law? What counts as “AI frontier research”? How would you write a regulation around that that isn’t trivial to bypass without banning general computing itself? | | |
| ▲ | SpicyLemonZest 12 hours ago | parent [-] | | As I said in a sibling comment, we're fortunate that training modern AIs requires large quantities of specialized compute. We just have to restrict GPU sales and outlaw GPU farms. I don't deny that it would be a seismic, controversial change, but I don't think it's terribly hard to implement if we can reach a consensus that we want to implement it. |
| |
▲ | neonstatic 13 hours ago | parent | prev [-] | | This is never going to happen. If something can be done, it will be done. | |
| ▲ | happytoexplain 12 hours ago | parent | next [-] | | >If something can be done, it will be done. What does this mean? It's obviously false on its face. | | |
▲ | neonstatic 4 hours ago | parent [-] | | It means that if something is physically possible, someone will be doing it, regardless of legal, moral, or social barriers. False on its face? Not that long ago, global public opinion was mortified at the news that newborn twins in China had been genetically modified. I am old enough to remember the outrage in the late 90s as the world watched the first cloned sheep grow up, get sick, and die. It was possible to do, so someone had done it. The point is that with the use of law, morality, and social pressure, we can moderate the frequency and scale of some phenomena, but we cannot stop them. I think this idea is what prevents some bans. "If the Chinese can do it, and we stop ourselves from doing it, they will gain an advantage and we would lose". Substitute "the Chinese" with whoever is the opponent at any given point in time and you have a rather plausible explanation for why things were the way they were.
| |
| ▲ | SpicyLemonZest 13 hours ago | parent | prev [-] | | There were historical worries about whether a ban would be feasible, but frontier AI research as we understand it today requires large amounts of specialized compute. Even if we couldn't or wouldn't destroy the chips, we could imprison anyone who tries to start a large training run, the same way we imprison anyone who tries to buy enriched uranium. | | |
▲ | neonstatic an hour ago | parent [-] | | Yes, that is true, but it's not my point. I am not saying it'd be impossible to find the people who are doing it. My point is that there will always be a group of people who'd be willing to do potentially dangerous things as long as those things are possible and are believed to provide some sort of advantage. For that reason, those people will either be in decision-making positions themselves or have a good enough offer for decision makers. Speaking of uranium - I don't think AI is anything like it (although the AI industry propaganda really wants us to believe that), but even there we have examples of countries that pursued nuclear weapons both successfully and unsuccessfully, as well as countries that could have them but choose not to. So the ban itself isn't necessarily the main point here.
|
|
|
| |
| ▲ | keybored 4 hours ago | parent | prev [-] | | It is quite telling that so many comments here are about UBI as a solution. UBI is a billionaire proposed solution, or distraction. Yeah, of course they want to keep control of the surplus and just have a sustenance spigot for the former workers. > We are allowed to regulate businesses. We simply don't. If workers are defunct, what are businesses? Also defunct. Business owners can’t gloat about not needing workers while at the same time claiming that their businesses have a right to life. What is a business owner sitting on a completely automated set of assets? Smaug sitting on his cache of gold. |
|
|
| ▲ | keybored 15 hours ago | parent | prev | next [-] |
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. This is straightforward? This is a colossal task. Monumental. Billionaires own it. That’s the political status quo. You could build something to counter those centers of power. But from what base? Well-paid software developers have scoffed at or been ignorant of worker organizing for, maybe, forever? But I have a good paycheck and equity... Now what?
| |
▲ | cortesoft 15 hours ago | parent [-] | | 'Straightforward' as in there is a clear way to solve the issue, not that it will be easy to enact it. | |
| ▲ | happytoexplain 13 hours ago | parent | next [-] | | But it's not a clear way to solve the issue. UBI, even if enacted tomorrow, doesn't stop the enormous crash of the middle-class, and the fallout of that. Maybe it will stop some people from literally dying - that's "solved"? It's a small buffer at the very worst end of a gigantic problem. The word "solve" is totally ridiculous. | |
| ▲ | keybored 6 hours ago | parent | prev [-] | | Okay you’re right. In some sense of the word it is straightforward. But I still think it is not straightforward compared to most things. I can get more muscle mass at the gym. That is straightforward. Only a few things makes it not easy. But “share the productivity with society at large”... you have to collapse so many more variables. - How to organize political resistance against AI tech billionaires - How to not get co-opted by counter-measures by AI tech billionaires - How to resist false promises (backed by nothing) that some AI tech billionaire will enact UBI for everyone so everything will be fine (those with all the power can withdraw whatever they want at any point) - How to deal with white collar competition in the interim period before automation: everyone using AI and nodding along with it[1] just to not “fall behind” - How to potentially fight against a small minority (AI tech billionaires) but that now might have enough megawatts to turn their stochastic parrots against any dissenters And maybe more. [1] https://news.ycombinator.com/item?id=47904777 |
|
|
|
| ▲ | jiggawatts 15 hours ago | parent | prev | next [-] |
> mislead and commit fraud at scale This is the "safety" messaging that OpenAI and Anthropic keep harping on and on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of $billions/year already. The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also read, analysed, and cross-correlated by unfeeling heartless machines tasked with keeping us in line. I'm not being paranoid; President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down. Here in Australia, the cyberpunk dystopia is already starting to be rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new AI system tasked with making recommendations for hiring, promotion, and demotion. Welcome to 1984, citizen.
| |
| ▲ | dualvariable 13 hours ago | parent [-] | | Yeah, AI-enabled surveillance capitalism is likely to be every bit as bad as what people imagine China is doing with their social credit scores. And the scary thing is that you can probably easily sell it to Democratic voters if you track racism scores for people, so you can filter people out of your dating pool or job/rental applications. Most people don't care about privacy as a fundamental right, and they'll roll over and compromise if you give them a way to track what they hate. You just need to make sure it is "bipartisan" and it'll be wildly popular. |
|
|
| ▲ | sofixa 15 hours ago | parent | prev | next [-] |
| There are a few other issues. Like copyright. All modern LLMs are built on troves of copyrighted material that was used in their training. AI companies are claiming this is fair use, while pretty much all of the copyright holders would strongly disagree. This is going to get litigated for years, but regardless of what various legal systems decide, morally, people can be against this. And people are already sick and tired of AI-generated content being used to replace human made content, be it on Spotify or TikTok. This is part "AI replacing humans", part "I'm being scammed by lower quality content". |
| |
| ▲ | MBCook 15 hours ago | parent | next [-] | | And we’ve seen the cases of people trying to use the AIs to train new AIs! OpenAI: We’re allowed to steal everything to train our AI and you can’t complain Developer: Ok, I’ll use your AI to train mine OpenAI: NO NOT LIKE THAT, UNFAIR | |
| ▲ | cortesoft 15 hours ago | parent | prev [-] | | I feel like this is covered by the last question about how we deal with AI and creative works. |
|
|
| ▲ | watwut 14 hours ago | parent | prev | next [-] |
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. The very same CEOs are extremely against social support, against any taxes on themselves, and against any governmental agencies that help or protect people. How can this possibly be the easiest path in the world of Thiel, Musk, Trump, Vance, and Palantir, with the Overton window moving toward economic conservatism for years?
|
| ▲ | contingencies 14 hours ago | parent | prev | next [-] |
Picasso famously said "Computers are useless, they can only give you answers." You can't put things back in the bag. Perhaps the true underlying social problems are: 1. There are too many humans and not enough jobs. 2. The capitalist system only rewards profit seeking and cost externalization. 3. Our democratic representation myth is dead and buried. 4. Even in the developed world, middle-class security is gone. So here's my question: given the current global system has failed and is clearly in its death throes, as a pan-national species how can we transition to a less mono-focal, economic-rationalism-driven means of governance and self-organization without turning into an autocracy or reinforcing negative nationalist bloc-level thinking that will tie us into the same old human-thump-human stone age ape-ism and environmental cost externalization? Perhaps AI can help in areas like improved education, improved media, and proposals for improved government process or process transition for enhanced efficiency. Enforce transparency and accountability in the halls of power by reducing human process and corruption. Public auditable decision making and public auditable oversight. It's at least potential grounds for partial optimism. The best I can summon under present conditions. Of course, we want to avoid a dystopian global AI autocracy, the technocratic basis for which we have already well established, but if you view the present system as a dystopian human autocracy with the same technocratic basis (an increasingly rational perspective given recent events), then it starts to look more rosy.
|
| ▲ | Rekindle8090 15 hours ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | Devasta 15 hours ago | parent | prev | next [-] |
| > The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. If the Epstein class wouldn't go for something like this in a world where they needed workers to produce, the idea that they will when we are surplus to requirement is inconceivable. |
| |
| ▲ | cortesoft 15 hours ago | parent [-] | | I should have said 'most straightforward', rather than easiest, because I agree it will not be easy to make it happen. |
|
|
| ▲ | Dwedit 14 hours ago | parent | prev [-] |
| I think you left out the part about AI being a plagiarism machine. |