| ▲ | wolvesechoes 4 hours ago |
| I am a bit tired of such discussions. I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do these as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to suck even more wealth from those that have little to those that have a lot. Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought this machine for, and that their owner expects a return on such an investment. |
|
| ▲ | tptacek 11 minutes ago | parent | next [-] |
| This argument can be made, and has been made, against every innovation in automation since the dawn of the Industrial Revolution. |
|
| ▲ | slibhb 2 hours ago | parent | prev | next [-] |
| > What matters in the end is that this tech is not to empower a common person (although it could). How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person". It lowers barriers! A lot of the anti-AI sentiment on HN concerns people losing their jobs. I don't think this will happen: programmers who know what they're doing are going to be way, way more effective at using AIs to generate code than others. But even if it is true and we do see job losses in tech: are software devs really "in a precarious position"? Do they really qualify as "those that have little"? Seems like a fantasy to me. Computer programmers have done great over the past 30 years. More broadly, anti-AI sentiment comes from people who dislike change. It's hard to argue someone out of that position. You're allowed to prefer stasis. But the world moves on and I think it's best to remain optimistic, keep an open mind, and make the most of it. |
| |
| ▲ | bunderbunder 10 minutes ago | parent | next [-] | | It's also, for example, the studies finding that when companies adopt AI, employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay. $20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose." Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket. That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience. | |
| ▲ | vips7L an hour ago | parent | prev [-] | | > I don't think this will happen Block just laid off 40% of their company citing AI. | | |
| ▲ | slibhb an hour ago | parent | next [-] | | Tech companies have been laying off employees for a while now. I think it's mostly due to pandemic overhiring and higher interest rates but I suppose we'll see. | | |
| ▲ | vips7L 25 minutes ago | parent [-] | | I agree that AI was not the _actual_ reason, however, it did allow them to do massive layoffs without admitting they are doing poorly and not taking a massive hit to their stock price. |
| |
| ▲ | CPLX an hour ago | parent | prev [-] | | > Block just laid off 40% of their company Because the company was being horribly run and over hired and "pivoted to blockchain" for no fucking reason. > citing AI. Because it's 2026 and they thought that would work to bullshit a few people about point one, which apparently it did. |
|
|
|
| ▲ | wepple 4 hours ago | parent | prev | next [-] |
| > It is there to reduce our agency, to make it easier to fire us, to put us in even more precarious position Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with. It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something” |
| |
| ▲ | wolvesechoes 3 hours ago | parent | next [-] | | > It could also end up freeing us from every commercial dependency we have Yeah, companies that develop and push this tech definitely have this in mind. > I don’t know where you’re going with “I’m unhappy this is happening and someone should do something” I am not surprised, because I didn't write anything like it. | | |
| ▲ | margalabargala an hour ago | parent [-] | | > > I don’t know where you’re going with “I’m unhappy this is happening and someone should do something” > I am not surprised, because I didn't write anything like it. You're right, there was no "someone should do something" call to action in your original comment. |
| |
| ▲ | idopmstuff 2 hours ago | parent | prev | next [-] | | It's also worth noting that the "our" in that sentence is just SWEs, who are a pretty small group in the grand scheme of things. I recognize that's a lot of HN, but it still bears considering in terms of the broader impact outside of that group. I'm a small business owner, and AI has drastically increased my agency. I can do so much more - I've built so many internal tools and automated so many processes that allow me to spend my time on things I care about (both within the business but also spending time with my kids). It is, fortunately and unfortunately, the nature of a lot of technology to disempower some people while making lives better for others. The internet disempowered librarians. | |
| ▲ | wolvesechoes 2 hours ago | parent [-] | | > It's also worth noting that the "our" in that sentence is just SWEs It isn't; it's just a matter of seeing ahead of the curve. Delegating stuff to AI and agents by necessity leads to atrophy of the skills being delegated. Using AI to write code reduces people's capability to write code. Using AI for decision-making reduces the capability for making decisions. Using AI for math reduces the capability for doing math. Using AI to formulate opinions reduces the capability to formulate opinions. Using AI to write summaries reduces the capability to summarize. And so on. And, by nature, less capability means less agency. "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." Not to mention utilizing AI for control, spying, surveillance, and coercion. Do I need to explain how control is opposed to agency? |
| |
| ▲ | LetsGetTechnicl 2 hours ago | parent | prev | next [-] | | > It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with. Lmfao, LLMs can barely count rows in a spreadsheet accurately; this is just batshit crazy. Edit: also, the solution here isn't that everyone writes their own software (based on open source code available on the internet, no doubt). We just use that open source software, and people learn to code and improve it themselves instead of off-loading it to a machine. | | |
| ▲ | margalabargala an hour ago | parent [-] | | This is one of those things where people who don't know how to use tools think they're bad, like people who would write whole sentences into search engines in the 90s. LLMs are bad at counting the number of rows in a spreadsheet. LLMs are great at "write a Python script that counts the number of rows in this spreadsheet". | | |
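For illustration, the kind of script being described here is genuinely trivial to produce: a minimal sketch using Python's stdlib `csv` module (the filename, function name, and `has_header` parameter are all illustrative, not from any comment above):

```python
import csv

def count_rows(path, has_header=True):
    """Count the data rows in a CSV file, optionally skipping a header row."""
    with open(path, newline="") as f:
        total = sum(1 for _ in csv.reader(f))
    # Subtract the header row if one is present and the file is non-empty.
    return total - 1 if (has_header and total > 0) else total
```

A script like this counts rows deterministically, which is the point of the comment: the LLM writes the tool instead of doing the counting itself.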
| ▲ | teolandon an hour ago | parent [-] | | Do you think asking any LLM in the next 100 years to "write a Python script that generates an OS" will work? | | |
| ▲ | antonyh 39 minutes ago | parent | next [-] | | Yes, for some definition of OS. It could build a DOS-like or other TUI, or a list of installed apps that you pick from. Devices are built on specifications, so that's all possible. The system API it could define and refine as it goes. General utilities like file management are basically a list of objects with actions attached. And so on... the more that is rigidly specified, the better it will do. It'll fail miserably at making it human-friendly, though, and attempt to pilfer existing popular designs. If it builds a GUI, it'd be a horrible mashup of Windows 7/8/10/11, various versions of OSX / MacOS, iOS, and Android. It won't 'get' the difference between desktop, laptop, mobile, or tablet. It might apply HIG rules, but that would end up with a clone at best. In short, it would most likely make something technically passable but nightmarish to use. | | |
| ▲ | margalabargala 4 minutes ago | parent [-] | | Given 100 years, though? 100 years ago we barely had vacuum tubes and airplanes. Given a century, the only unreasonable part is oneshotting with no details, context, or follow-up questions. If you tell Linus Torvalds "write a Python script that generates an OS", his response won't be the script, it'll be "who are you and how did you get into my house". |
| |
| ▲ | margalabargala 20 minutes ago | parent | prev [-] | | Considering how simple "an OS" can be, yes, and in the 2020s. If you're expecting OSX, AI will certainly be able to make that and better "in the next 100 years". Though perhaps not oneshotting off something as vague as "make an OS" without followup questions about target architecture and desired features. |
|
|
| |
| ▲ | ModernMech an hour ago | parent | prev [-] | | What happens when they decide it's a national security threat and an act of domestic terrorism to use AI to undermine commercial dependencies? We're all acting like AI isn't being invented within the context of and used by a fascist regime. |
|
|
| ▲ | phyzix5761 2 hours ago | parent | prev | next [-] |
| At some point, if most people lose their jobs, you have no market to sell your services to. So, either, new jobs have to be created in order to keep the capitalism machine running, or you have to provide for the needs of every human being from whatever you're doing with your AI. Otherwise, a lot of hungry people revolt and you have violence against these businesses. I think new jobs will be created because AI is always limited by hardware and its current capabilities. Businesses, in order to compete, want to do things their competitors aren't currently doing. Those business needs always go beyond the current technological capabilities until the tech catches up and then they lather, rinse, repeat. |
| |
|
| ▲ | spacecadet 4 hours ago | parent | prev | next [-] |
| Demand full automation. Demand universal basic income. Notice how the latter is nearly absent from the conversation. Another distraction is the idea that AGI is a danger to humanity - the only danger is people... |
| |
| ▲ | pixl97 2 hours ago | parent [-] | | > the only danger is people... Simply put, no, it is not. But on the reverse, the first danger with AI is people. Over the longer term it will look like this: the rich 'win' the world by using AI to enslave the rest of mankind and claim ownership over everything. This will suck, and a lot of us will die. The problem is that this doesn't solve the greed that caused the problem in the first place. The world will still be limited in resources, which will end with the rich in a dick-measuring contest, and to win that contest they will put more and more power into AI as they connive and fight each other. Eventually the AI has enough power that it kills us all, intentionally or not. We'll achieve nearly unlimited capability long before we solve the problem of unlimited greed, and that will spell our end. |
|
|
| ▲ | simmerup 4 hours ago | parent | prev [-] |
| I guess you didn't read the article? |
| |