khafra (5 hours ago):
Whether the labor theory of value is right or wrong, the "real split" you describe will soon no longer exist. Capital owners will live on the labor of their capital. Non-capital-owners will live on the largesse of capital, or will not live at all. Unless we muster the political will to stop AI development, internationally, until we can be certain of our ability to durably imbue it with the intrinsic desire to keep humans around, doing human things.
grafmax (3 hours ago):
Capital is a commodity, just like a business's product. It does not produce value; labor does. This is a central point of the LTV! We witnessed the same thing with looms and other automation in the Industrial Revolution: capital that helps you produce more. But owners faced with increased competition under commoditized production see their profit margins fall. Thus they turn to squeezing workers, the source of value, for profit in the newly commoditized landscape, which is exactly what happened during the Industrial Revolution. It was only when workers got their act together and organized that this decline was stopped and reversed.
khafra (34 minutes ago):
Ok, but when the looms can autonomously analyze the market, design the products, organize purchasing and sales channels, run production, and deliver the products, the workers will not be squeezed. We will be discarded.
snek_case (an hour ago):
> imbue it with the intrinsic desire to keep humans around, doing human things.

It's not the AI you have to convince, it's your government and the people running tech companies. Dario Amodei was cheering for AI to take all programming jobs (along with the others). If that happened, it would be an unmitigated disaster for millions of people. Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.

> the political will to stop AI development

The reason that's not likely is that it's an arms race. You can stop AI research here, but how can you trust that China and Russia will do the same? Unlike nuclear bombs, the potential harms are less tangible.
chrisvalleybay (2 hours ago):
I think there's a piece missing here. Capital owners are humans too, and what humans want (perhaps especially the ones who accumulate capital) is to be at the top of a hierarchy. But a hierarchy needs participants. If nobody else is playing the game, there's no top to be on top of. Strip away the people willing to compete, admire, envy, or just show up, and the whole structure collapses. It's not clear that a world of pure capital-on-AI-labor actually gives them what they're after. It sounds lonely and meaningless to me. I don't think it would feed the black hole in their chests.
khafra (an hour ago):
I think it's much more likely that the AI turns out not to be as compliant as the capital owners expect, and they die too. However, that's not useful in predicting what capital owners will do, because they follow their local incentives. "If everyone keeps doing X, we will all be worse off" does not help unless you can create local incentives that point toward an equilibrium where everyone stops doing X. In this case, no capital owner is individually better off by unilaterally refusing to chase more efficient returns on their capital. We would need an international agreement, with enforcement mechanisms, like I mentioned above.
wolvesechoes (2 hours ago):
A lot of effort has been spent to naturalize the current state of affairs and value system, even though there is nothing natural or obvious about it. Humans have lived for millennia with much greater political and social flexibility, with hierarchies built and torn down even seasonally, and with the role of property and wealth shifting back and forth. Of course the structure exists because we allow it; that's the easy part. The hard part is: why do we allow it?
chrisvalleybay (2 hours ago):
I think in part because we have a black hole in our chest, and we are searching for ways to fill it. We attempt to fill it through worship at the altar of materialism, celebrity, etc. We are doing this to quiet the roar from the black hole. Actually stepping away would require us to sit with stillness, and then to forge a new path, a new life. It's frightening.
eloisius (4 hours ago):
I, for one, am looking forward to me and a band of my closest friends and family raiding heavily fortified data centers guarded by Boston Dynamics robot dogs to steal clean drinking water for our underground village. We might even hit a caravan of autonomous trucks carrying cricket protein powder in the same night.
edgyquant (3 hours ago):
The people without capital will just form their own economies and continue to exist; if it really came to that, they would likely kill the capital owners as well.
khafra (3 hours ago):
What's your plan for beating the autonomous drone swarms without capital?
dns_snek (an hour ago):
My plan is that we don't let it get far enough that a small group of people gains control of a fully integrated robotic supply chain powering an unbeatable war machine. If it comes to that, then the world is already doomed. In practice, this currently means voting for political parties and candidates who can correctly identify concentration of power as the root cause of most of our current and future problems, and who pledge to actually do something about it.
forgetfreeman (5 hours ago):
"Non-capital-owners will live on the largesse of capital, or will not live at all." That's been tried several times now and has a tendency to end very badly for capital. You'd think folks with even a grade school level of historical literacy would know better than to stick a fork in that outlet.
khafra (3 hours ago):
It has never been tried before. Capital has always required human labor to be productive. Capital has never before closed in on the ability to operate, maintain, defend, and expand itself without human assistance, as it is closing in on that ability now.
daveguy (3 hours ago):
It's really not. The capital owners just think it is.
khafra (2 hours ago):
We'd all be a lot safer--even the capital owners--if today's robotics and multimodal intelligence were near the ceiling of what's possible, or even near the bend in the logistic curve where things slow down a lot. I haven't seen evidence of that. I see evidence of rapid advances in task length, general capabilities, and research and development capabilities in AI, and in generality, price, and autonomy in robotics. How much headroom in these capabilities do you believe we have before a data center can protect and maintain itself and an on-site power plant? Before robots can run a robot factory?
daveguy (an hour ago):
I think we are still 25+ years away from that kind of automation. People are still confusing plausible text generation with adaptable dynamic intelligence. See also: "Shall I implement it? No."[0] We are getting some awesome tools that sound like science fiction from decades ago, but the intelligence is hollow and brittle. In my opinion, we just don't have the algorithms or computational bandwidth necessary.

[0] https://news.ycombinator.com/item?id=47357042
khafra (an hour ago):
We're absolutely not there yet, algorithmically or with compute. Algorithms keep getting better, though, despite the bitter lesson; and data centers keep getting bigger. If you showed a conversation between Terry Tao or Steve Yegge and their AI collaborators to someone from 2021, they would consider it beyond obvious that it's AGI. Today, we know they still have some shortcomings; but in another 5 years, what looks to us today like it's beyond obviously ASI may well be enough for catastrophic, irrecoverable outcomes.
gom_jabbar (2 hours ago):
The real transition would be from human-owned capital to self-owned capital. You are right that current capabilities and autonomy don't allow for that yet.
forgetfreeman (42 minutes ago):
Is that what you think is happening right now? This line of reasoning brings to mind the luxury bunkers in New Zealand that are so popular with a certain type of folks. I'm guessing the sales brochures on those things don't mention stuff like the outcome of an 80lb bag of cement poured into the ventilation, or the fact that heavy machinery is ubiquitous and shockingly simple to operate. Thinking capital can decouple itself from the larger populace is comically flawed for similar reasons. See also: the XKCD where the crypto guy gets worked over with a wrench for his password.
Gud (5 hours ago):
I agree.
snek_case (an hour ago):
I've always thought of myself as more "centrist" (feel free to make fun of me), but seeing so many tech CEOs cheer for layoffs and the destruction of the job market has been a bit of a wake-up call. Also just being confronted with the sheer idiocy of these people. They are making hundreds of millions of dollars a year, but they barely understand the tech they are cheering for. They act as though being broadly "bullish on AI" and overly enthusiastic about its short-term potential were some kind of visionary stance, when in fact they are just repeating the same ideas as every other idiot in the Silicon Valley VC bubble.

My personal bet would be that in the medium term there will be a reversal of the idiotic belief that you can immediately lay off developers because of LLMs. If your developers are more productive because of LLMs, you still have an advantage by having more developers than the competition. There's also a lot of institutional knowledge that's just not documented; fire key people and you can cripple your organization.

In the longer term, I think AI will eventually take jobs, and unfortunately it will have a major negative societal impact. I doubt that our governments will be proactive in trying to anticipate this; they will just play damage control. There's probably going to be an anti-AI social movement. You'll have the confluence of more and more disinformation and AI slop online along with more and more job loss. There are probably going to be riots.

Some people think UBI is inevitable. I think the problem is that if the government puts UBI in place, it will only give you the minimum necessary so that you don't starve: just enough to rent a bedroom, eat processed food, and stay online all day.
wolvesechoes (2 hours ago):
> Interesting to see more of this thinking on Hacker News

I am on this site because it is one of the less shitty places on the Internet (in terms of usability, privacy, etc.) to have some form of discussion, but I never identified as a "hacker", "techie", "entrepreneur", or "temporarily embarrassed billionaire". AI didn't change my view on anything, except that it has shown me how blind and naive people can be. Of course, I tend to focus on the aspects that are discussed here (the context of software engineering).