| ▲ | f154hfds 3 days ago |
| The postscript was pretty sobering. It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise. This is a pretty depressing place to be, because most emerging technologies provide us with exciting new possibilities, whereas this technology seems exciting only for management stressed about payroll. It's true that the technology currently works as an excellent information-gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point. The promise is about replacing human creativity with artificial creativity, which is certainly new, and unwelcome. |
|
| ▲ | stack_framer 3 days ago | parent | next [-] |
| > It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise.

Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it! I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code. I never knew there was an entire subclass of people in my field who don't want to write code. I want to write code. |
| |
▲ | zparky 3 days ago | parent | next [-] | | It's been blowing my mind reading HN the past year or so and seeing so many comments from programmers who are excited not to have to write code. It's depressing. | | |
▲ | IanCal 3 days ago | parent | next [-] | | There are three takes that I think are not depressing:

* Being excited to be able to write the pieces of code they want, and not others. When you sit down to write code, you don't do everything from scratch; you lean on libraries, compilers, etc. Take the most annoying boilerplate bit of code you have to write now - would you be happy if a new language/framework popped up that eliminated it?

* Being excited to be able to solve more problems, because the code is at times a means to an end. I don't find writing CSS particularly fun, but I threw together a tool for making checklists for my kids in very little time using LLMs, and it handled all of the CSS for printing vs. on-screen. I'm interested in solving an optimisation issue with testing right now, but not that interested in writing code to analyse test-case perf changes, so I got the latter written for me in very little time, and it's great. It wasn't really a choice of me or the machine; I don't really have the time to focus on those tasks.

* Being excited that others can get the outcomes I've been able to get, for at least some problems, without having to learn how to code.

As is tradition, to torture a car analogy: I could be excited for a car that autonomously drives me to the shops despite loving racing rally cars. | | |
▲ | wakawaka28 3 days ago | parent | next [-] | | Those are all good outcomes, up to a point. But if this stuff works TOO well, most or maybe all of us will have to start looking at other career options. Whatever autonomy you think you have in deciding what the AI does, that can ultimately be trained as well, and it will be, the more people use it. I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated, and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before AI I was skeptical that we would find work for all the people trying to get into the software industry. | | |
▲ | badsectoracula 2 days ago | parent [-] | | > I personally don't like it when others who don't know how to code are able to get results using AI.

Sounds like for many programmers AI is the new Visual Basic 6 :-P | |
| ▲ | wakawaka28 2 days ago | parent [-] | | It's worse than that lol. At least with VB 6 and similar scripting languages, there is still code getting written. Now we have complete morons who think they're software developers because they got some AI to shit out an app for them. This is going to affect how people view the profession of software engineering all around. |
|
| |
▲ | ares623 3 days ago | parent | prev [-] | | Except in this case you won't be able to afford going to the shops anymore - if the shops are even still around. What use is an autonomous car if you can't use it? |
| |
▲ | zahlman 3 days ago | parent | prev | next [-] | | I suspect, rather strongly, that what really wears programmers down is boilerplate. AI is addressing that problem extremely well, but by putting up with it rather than actually solving it. I don't want the boilerplate to be necessary in the first place. | |
| ▲ | projektfu 3 days ago | parent | next [-] | | Or, for me, yak shaving. I start a project with enthusiasm and then 8 hours later I'm debugging an nginx config file or something rather than working on the core project. AI gets a lot of that out of the way if you let it, and you can at least let it grind on that stuff while you think about other things. | | |
| ▲ | zahlman 3 days ago | parent [-] | | For me, the yak shaving is the part where I get the next project idea... |
| |
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
| |
▲ | seanmcdirmid 3 days ago | parent | prev | next [-] | | It is fun. It takes some skill to organize a pipeline to generate code that would be tedious to write and maintain otherwise. You are still writing stuff to instruct the computer, but now you have something taking natural language instructions and generating code and code test assets. There might have been people who were happy writing assembly and got bummed about compilers. This AI stuff just feels like a new way to write code. | |
▲ | johnnyaardvark a day ago | parent [-] | | I've heard this take a few times, but I'm not convinced using general language is the new way to write code (beyond small projects). Inevitably AI will write things in ways you don't intend. So now you have to prompt it to change and hope it gets it right. Oh, it didn't. Prompt it again and maybe this time it will work. Will it get it right this time? And so on. It's so good at a lot of things, but writing out whole features or apps, in my experience, seems good at first and then turns out to be a time sink of praying it will figure it out on the next prompt. Maybe it's a skill issue for me, but I've gotten the most efficiency out of having it review code, pairing with it on ideas and problems, etc., rather than having it actually write the majority of the code. | |
▲ | seanmcdirmid 19 hours ago | parent [-] | | Until you've actually done it yourself, it will probably sound like vaporware. The only question is how much energy you are willing to spend, in terms of actual energy (because you are making more calls to the AI) and, yes, in setting up your development pipeline with N LLM calls. It is really like micro-managing a very junior, very forgetful dev, but one who can read really fast (and mostly remembers what they read, for a few minutes at least; they actually know more about something than you do if they have a manual about it on hand). Of course, if it's just writing the code once, you don't bother with the junior dev and write the code yourself. But if you want long-term efficiency, you put the time into your team (and the team here is the AI). |
|
| |
▲ | youoy 2 days ago | parent | prev | next [-] | | I think that the main misunderstanding is that we used to think programming=coding, but this is not the case. LLMs allow people to use natural language as a programming language, but you still need to program. As with every programming language, it requires you to learn how to use it. Not everyone needs to be excited about LLMs, in the same way that C++ developers don't need to be excited about Python. |
| ▲ | xyzwave 2 days ago | parent | prev | next [-] | | I hate writing code, but love debugging. LLMs have been a godsend for banging out boilerplate and getting things 95% of the way there. Now I spend most of my time on the hard stuff (debugging, refactoring), while building things that would have taken weeks in days. It’s honestly made the act of building software more enjoyable and rewarding. | |
| ▲ | xnx 3 days ago | parent | prev | next [-] | | Some carpenters like to make cabinets. Some just like to hammer nails. | |
▲ | solumunus 2 days ago | parent | prev | next [-] | | Do you really think the creative or intellectual element of programming is the tapping of keys? I don't understand this at all. I enjoy solving problems and creating elegant solutions. I'm spending less time tapping keys and more time engineering solutions. If tapping keys is the most fun part for you, then that's fine! But let's not pretend THAT is the critical part of software engineering. Not to mention, it's not all or nothing. The options aren't writing code or not writing code. You can selectively not write any boring code and write 100% of the bits you find interesting or care about. If an LLM is failing to deliver what is in my mind's eye, then I simply step in and make sure the code is up to quality... I'm doing more and better software engineering; that's why I'm happy. That's the bit that scratches my itch. |
▲ | DevDesmond 3 days ago | parent | prev [-] | | Perhaps consider that I still think coding by prompting is just another layer of abstraction on top of coding. In my mind, writing the prompt that generates the code is somewhat analogous to writing the code that generates the assembly (albeit more stochastically, the way psychology research might be analogous to biochemistry research). Different experts are still required at different layers of abstraction, though. I don't find it depressing when people show a preference for working at different levels of complexity / tooling, nor excitement about the emergence of new tools that can enable your creativity to build, automate, and research. I think scorn in any direction is vapid. | |
▲ | layer8 3 days ago | parent [-] | | One important reason people like to write code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that. | |
▲ | youoy 2 days ago | parent | next [-] | | It completely depends on the way you prompt the model. Nothing prevents you from telling it exactly what you want, to the level of specifying the files and lines to focus on. In my experience anything other than that is a recipe for failure in sufficiently complex projects. | |
▲ | layer8 2 days ago | parent [-] | | Several comments can be made here: (1) You only control what the LLM generates to the extent that you specify precisely what it should generate. You cannot reason about what it will generate for what you don't specify. (2) Even for what you specify precisely, you don't actually have full control, because the LLM is not reliable in a way you can reason about. (3) The more you (have to) specify precisely what it should generate, the less benefit using the LLM has. After all, regular coding is just specifying everything precisely. The upshot is, you have to review everything the LLM generates, because you can't predict the qualities or failures of its output. (You cannot reason in advance about what qualities and failures it definitely will or will not exhibit.) This is different from, say, using a compiler, whose output you generally don't have to review, and whose input-to-output relation you can reason about with precision. Note: I'm not saying that using an LLM for coding is not workable. I'm saying that it lacks what people generally like about regular coding, namely the ability to reason with absolute precision about the relation between the input and the behavior of the output. |
| |
| ▲ | yunwal 3 days ago | parent | prev [-] | | You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself! | | |
▲ | palmotea 2 days ago | parent [-] | | >> One important reason people like to write code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

> You're still allowed to reason about the generated output. If it's not what you want you can even reject it and write it yourself!

You missed the key point. You can't predict an LLM's "outcome with high precision." Looking at the output and evaluating it after the fact (like you describe) is an entirely different thing. | |
▲ | yunwal 2 days ago | parent [-] | | For many things you can, though. If I ask an LLM to create an alert in Terraform that triggers when 10% of requests fail over a 5-minute period and sends an email to some address, with the HTML in the email looking a certain way, it will do exactly the same as if I had looked at the documentation and figured out all of the fields one by one. That's just how it works when there's one obvious way to do things. I know software devs love to romanticize our jobs, but I don't know a single dev who writes 90% meaningful code. There's always boilerplate. There's always fussing with syntax you're not quite familiar with. And I'm happy to have an AI do it. | |
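For a concrete picture, the kind of thing I mean looks roughly like this - a from-memory sketch assuming AWS CloudWatch metrics for an API Gateway and an SNS email subscription, where the resource names, namespace, and address are illustrative rather than exact:

    # Sketch: alarm when >10% of requests fail over a 5-minute window,
    # then notify an email address via SNS. Illustrative values throughout.
    resource "aws_sns_topic" "alerts" {
      name = "error-rate-alerts"
    }

    resource "aws_sns_topic_subscription" "oncall_email" {
      topic_arn = aws_sns_topic.alerts.arn
      protocol  = "email"               # recipient must confirm the subscription
      endpoint  = "oncall@example.com"  # illustrative address
    }

    resource "aws_cloudwatch_metric_alarm" "high_error_rate" {
      alarm_name          = "high-error-rate"
      comparison_operator = "GreaterThanThreshold"
      evaluation_periods  = 1
      threshold           = 10          # percent
      alarm_actions       = [aws_sns_topic.alerts.arn]

      # Computed metric: errors as a percentage of all requests.
      metric_query {
        id          = "error_rate"
        expression  = "100 * errors / requests"
        label       = "Error rate (%)"
        return_data = true
      }

      metric_query {
        id = "errors"
        metric {
          metric_name = "5XXError"      # failed requests
          namespace   = "AWS/ApiGateway"
          period      = 300             # 5 minutes
          stat        = "Sum"
        }
      }

      metric_query {
        id = "requests"
        metric {
          metric_name = "Count"         # all requests
          namespace   = "AWS/ApiGateway"
          period      = 300
          stat        = "Sum"
        }
      }
    }

(The one part plain SNS email won't give you is custom HTML formatting; that would take something extra, like a Lambda subscriber that renders the message.)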
|
|
|
|
| |
▲ | rester324 3 days ago | parent | prev | next [-] | | I love to write code too. But what usually happens is that I run the gauntlet of proving what brilliant code I can write in a job interview, and then I'm paid to listen to really dumb stakeholder conversations and to sit in project planning and other meetings, just so that finally everybody can harass me into implementing something that a million programmers have implemented before me a million times - at which point the only metric that matters to my fellow developers, my managers, or the stakeholders is the speed of churning the code out, quality or design be damned. So for this reason, in most cases in my work I use LLMs. How any of that comes across to an investment portfolio manager as LLMs writing "world class code" is a mystery to me. |
▲ | doug_durham 3 days ago | parent | prev | next [-] | | Writing code is my passion, and like you I'm amazed I get paid to do it. That said, in any new project there is a large swath of code that needs to be written that I've written many times before. I'm happy to let the LLM write the low-value code so I can work on the interesting parts. Examples of this type of code are argument parsers and interfaces to REST APIs. I add no value there. |
▲ | citrin_ru 2 days ago | parent | prev | next [-] | | > I never knew there was an entire subclass of people in my field who don't want to write code.

Some people don't enjoy writing code and went into software development only because it's a well-paid and stable job. Now this trade is under threat, and they are happy to switch to prompting LLMs. I do like to code, so I use LLMs less than many of my colleagues. Though I don't expect to see many of this crowd on HN; instead I expect to see entrepreneurs here who need a product to sell and don't care whether it is written by humans or by LLMs. |
▲ | averageRoyalty 3 days ago | parent | prev | next [-] | | So write code. Maybe post-Renaissance many artists no longer had patrons, but nothing was stopping them from painting. If your industry truly is going in the direction where there's no paid work for you to code (which is unlikely, in my opinion), nobody is stopping you. It's easier than ever; you have decades of personal computing at your fingertips. Most people with a thing they love do it as a hobby, not a job. Maybe you've had it good for a long time? | |
| ▲ | tjr 3 days ago | parent | next [-] | | From the GNU Manifesto: I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else. https://www.gnu.org/gnu/manifesto.en.html | |
| ▲ | harimau777 3 days ago | parent | prev [-] | | That's tough to do without time and money. Which is something we certainly won't have if the decent jobs get automated out of existence. |
| |
| ▲ | marcosdumay 3 days ago | parent | prev | next [-] | | I'm quite ok with only writing code in my personal time. In fact, if I could solve the problems there faster, I'd be delighted. Instead, I've reacted to the article from the opposite direction. All those grand claims about stuff this tech doesn't do and can't do. All that trying to validate the investment as rational when it's absolutely obvious it's at least 2 orders of magnitude larger than any arguably rational value. | |
▲ | georgeecollins 3 days ago | parent | prev | next [-] | | I also love to code, though it's not what people pay me to do anymore. You should never hope for a technology to not deliver on its promise. Sooner or later it usually does. The question is, does it happen in two years or a hundred years? My motto: don't predict, prepare. | |
▲ | djeastm 2 days ago | parent | next [-] | | > You should never hope for a technology to not deliver on its promise. Sooner or later it usually does.

Lots of wiggle room between "never" and "usually". We're not all riding Segways or wearing VR goggles. Seems wiser to work on a case-by-case basis here. |
▲ | gspr 3 days ago | parent | prev [-] | | > You should never hope for a technology to not deliver on its promise. Sooner or later it usually does.

Really? Are you sure there isn't a lot of confirmation bias in this? Do you really have a good handle on 100-year-old tech hypes that didn't deliver? All I can think of is "flying everything". | |
| |
▲ | kace91 2 days ago | parent | prev | next [-] | | > I never knew there was an entire subclass of people in my field who don't want to write code.

Regardless of AI, this has been years in the making. "Learn to code" has been the standard grinder-cryptobro advice for "follow the money" for a while; there's a whole generation of people getting into the industry for financial reasons (which is not wrong, just a big cultural shift). |
▲ | thendrill a day ago | parent | prev [-] | | Coding isn't creative, it isn't sexy, and almost nobody outside this bubble cares. Most of the world doesn't care about "good code."
They care about "does it work, is it fast enough, is it cheap enough, and can we ship it before the competitor does?" Beautiful architecture, perfect tests, elegant abstractions — those things feel deeply rewarding to the person who wrote them, but they're invisible to users, to executives, and, let's be honest, to the dating market. Being able to refactor a monolith into pristine microservices will not make you more attractive on a date. What might is the salary that comes with the title "Senior Engineer at FAANG."
In that sense, many women (not all, but enough) relate to programmers the same way middle managers and VCs do: they're perfectly happy to extract the economic value you produce while remaining indifferent to the craft itself. The code isn't the turn-on; the direct deposit is. That's brutal to hear if you've spent years telling yourself that your intellectual passion is inherently admirable or sexy. It's not. Outside our tribe it's just a means to an end — same as accounting, law, or plumbing, just with a worse dress code and better catering. So when AI starts eating the parts of the job we insisted were "creative" and "irreplaceable," the threat feels existential, because the last remaining moat — the romantic story we told ourselves about why this profession is special — collapses. Turns out the scarcity was mostly the paycheck, not the poetry. I'm not saying the work is meaningless or that system design and taste don't matter. I'm saying we should stop pretending the act of writing software is inherently sexier or more artistically noble than any other high-paying skilled trade. It never was. |
|
|
| ▲ | stego-tech 3 days ago | parent | prev | next [-] |
| I'm right there with you, and it's been my core gripe since ChatGPT burst onto the stage. Believe it or not, my environmental concerns came about a year later, once we had data on how datacenters were being built and their resource consumption rates; I had no idea how big things had gotten, or how suddenly and violently they had exploded, and that alone gave me serious pause about where things are going.

In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems. It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively.

Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question: is all of that worth the harm I'm inflicting on others? |
| |
| ▲ | simianwords 3 days ago | parent [-] | | Some person asked this same question about computers back in the day. | | |
▲ | stego-tech 3 days ago | parent [-] | | The fact that the question has been asked before does not make it any less valuable or worthwhile to ask now, and history is full of pithy replies like yours masquerading as profound philosophical insights. I'd like to think the question is asked at every invention, every revolution, because we must doubt our own creations lest we blind ourselves to the consequences of our actions. Nothing is inevitable. Systems can be changed if we decide to do so, and AI is no different. To believe in inevitability is to embrace fatalism. |
|
|
|
| ▲ | some-guy 3 days ago | parent | prev | next [-] |
| There are a few areas where I have found LLMs to be useful (anything related to writing code, or as a search engine), and then just downright evil and upsetting in every other instance of using them, especially as a replacement for human creativity and personal expression. |
|
| ▲ | Night_Thastus 3 days ago | parent | prev | next [-] |
| Don't worry that much about 'AI' specifically. LLMs are an impressive piece of technology, but at the end of the day they're just language predictors - and bad ones a lot of the time. They can reassemble and remix what's already been written, but with no understanding of it. They can be an accelerator - they get extremely common boilerplate text work out of the way. But they can't replace any job that requires a functioning brain, since LLMs do not have one - nor ever will. But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else. |
| |
▲ | DevDesmond 3 days ago | parent [-] | | Text is an LLM's input and output, but, under the hood, the transformer network is capable of far more than mere reassembly and remixing of text. Transformers can approximate Turing completeness as their size scales, and they can encode entire algorithms in their weights. Therefore, I'd argue they can do far more than reassemble and remix. These aren't just Markov models anymore. (I'd also argue that "understanding" and "functional brain" are unfalsifiable comparisons. What exactly distinguishes a functional brain from a Turing machine? Chess once required a functional brain to play, but has now been surpassed by computation. Saying "jobs that require a human brain" is tautological without any further distinction.) Of course, LLMs are definitely missing plenty of brain skills, like working in continuous time, with persistent state, with agency, in physical space, etc. But to say that an LLM "never will" is either semantic (you might call it something other than an LLM when next-generation capabilities are integrated), tautological (once it can do a human job, it's no longer a job that requires a human), or anthropocentric hubris. That said, who knows what the time scale looks like for realizing such improvements (decades, centuries, millennia). |
|
|
| ▲ | mrdependable 3 days ago | parent | prev | next [-] |
| What I don't understand is: will every company really want to be beholden to some AI provider? If they get rid of the workers, all of a sudden they are on the losing side of the bargaining table. They have incredible leverage as things stand. |
| |
▲ | spjt a day ago | parent [-] | | Yeah, if they thought unions were bad, they really won't like dealing with another company larger than they are. |
|
|
| ▲ | oytis 2 days ago | parent | prev | next [-] |
| I dunno, I might be getting old, but I think the idea that people absolutely need a job to stay sane betrays a lack of imagination. Of course getting paid just enough for survival is pretty depressing, but if I can have healthy food, a spacious place to live, the ability to travel, and all the free time I want, I'd be absolutely happy without a job. Maybe I'd even be writing code, just not commercially useful code. |
| |
| ▲ | rurp 2 days ago | parent [-] | | I don't think this is the scenario most people are worried about. Having basic needs met while also having a lot of freedom and time probably sounds great to the majority of people. But there's roughly 0% chance we end up in that kind of world if current AI leads to massive job elimination. Just look at who is building, funding, and promoting these models! I can't think of a group of people less interested in helping millions of plebs lead higher quality lives if it costs them a penny to do it. | | |
▲ | oytis a day ago | parent [-] | | Yeah, I get it, but I still hear the argument a lot, including in this article, that even if our needs are covered, we still need jobs for MEANING. Not sure where all those people work; I should probably envy this guy for finding work at an investment fund so satisfying. |
|
|
|
| ▲ | Joel_Mckay 3 days ago | parent | prev | next [-] |
| LLM slop doesn't have aspirations at all; it's just clickbait nonsense. https://www.youtube.com/watch?v=_zfN9wnPvU0

Drives people insane: https://www.youtube.com/watch?v=yftBiNu0ZNU

And LLMs are economically and technologically unsustainable: https://www.youtube.com/watch?v=t-8TDOFqkQA

These have already proven it will be unconstrained if AGI ever emerges. https://www.youtube.com/watch?v=Xx4Tpsk_fnM

The LLM bubble will pass, as it is already losing money with every new user. =3 |
|
| ▲ | asdff 3 days ago | parent | prev | next [-] |
| I think it just reflects the sort of businesses these companies are, versus others. Of course we worry about this in the context of companies that dehumanize us, reduce us to line-item costs, and seek to eliminate us. Now imagine a different sort of company: a little shop where the owner's first priority is actually to create good jobs for their employees that afford a high quality of life. A shop like that needn't worry about AI. It is too bad that we put so much stock as a society in businesses operating in this dehumanizing capacity instead of ones that are much more like a family unit trying to provide for each other. |
|
| ▲ | classified a day ago | parent | prev | next [-] |
| > artificial creativity This artificial creativity will only go so far, because it's a simulated semblance of human creativity, as much as could be gathered from training data. If not continually refueled by new training data, it will run out sooner or later. And then it will get boring really quickly. |
| |
▲ | spjt a day ago | parent [-] | | But it is being continually refueled. The output of an LLM, at least in the process of generating code, is a combined product of human creativity and the LLM: I have told it what to do, fixed what it got wrong, and verified the solution was correct through testing. |
|
|
| ▲ | 0manrho 3 days ago | parent | prev [-] |
| Regarding that PS:

> This strikes me as paradoxical given my sense that one of AI's main impacts will be to increase productivity and thus eliminate jobs.

The allegation that an "increase of productivity will reduce jobs" has been proven false by history over and over again; it's so well known it has a name, "Jevons Paradox" or "Jevons Effect"[0].

> In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use [...] results in overall demand increasing, causing total resource consumption to rise.

An "increase in productivity" does not inherently result in fewer jobs; that's a false equivalence. It's likely just as false in 2025 with AI and ChatGPT as it was in 1915 with the assembly line and the Model T. This notion persists because, as we go through inflection points where something new changes market dynamics, there is often a GROSS loss (as in economics) of jobs that precedes a NET gain overall as the market adapts - but that's not much comfort to people who lost, or are worried about losing, their jobs due to that inflection point changing the market. The two important questions in that context for individuals in the job market during those inflection points (like today) are: "How difficult is it to adapt (to either not lose a job, or to benefit from or be a part of that net gain)?" and "Should you adapt?" After all, the skillsets the market demands and the skillsets it supplies are not objectively quantifiable things; the presence of speculative markets is proof that this is subjective, not objective. Anyone who's ever been involved in the hiring process knows just how subjective this is. Which leads me to:

> the promise is about replacing human creativity with artificial creativity, which is certainly new, and unwelcome.

I disagree that that's what the promise is about. That IS happening, I don't disagree there, but it's not the promise that corporate is so hyped about. If we're being honest and not trying to blow smoke up people's asses to artificially inflate "value," AI is fundamentally about being more OBJECTIVE than SUBJECTIVE with regard to the costs and resources of labor and its outputs. Anyone who knows what OKRs are and has been subject to a "performance review" in a self-professed "data-driven company" knows how much modern corporate America, especially the tech market, loves its "quantifiables." It's less about how much better AI can allegedly do something than about the promise of how much "better" it can be quantified versus human labor. As long as AI has at least SOME proven utility (which it does), this promise of quantifiables, combined with its other inherent potential benefits (doesn't need time off, doesn't sleep, doesn't need retirement/health benefits, no overtime pay, no regulatory limitations on hours worked, no "minimum wage"), means that so long as the monied interests perceive it as continuing to improve, they can dismiss its inefficiencies or ineffectiveness at X or Y by pointing to its potential to overcome them eventually. That's the fundamental reason people are so concerned about AI replacing humans. Especially when you consider that one of the things AI excels at is quickly delivering an answer with confidence (people are impressed with speed and are suckers for confidence), and another big strength is its ability to deal with repetitive minutiae in known and solved problem spaces (a mainstay of many office jobs).

It can also bullshit with the best of them, fluff your ego as much as you want (and even when you don't), and almost never says "No" or "You're wrong" unless you ask it to. In other words, it excels at the performative and repetitive bullshit, at blowing smoke up your boss' ass, and at empowering them to do the same for their boss further up the chain, all while never once ruffling HR's feathers. Again, it has other, much more practical and pragmatic utility too; it's not JUST a bullshit oracle, but it IS a good bullshit oracle if you want it to be.

0: https://en.wikipedia.org/wiki/Jevons_paradox |
| |
▲ | harimau777 3 days ago | parent [-] | | If that's the case, then why do we live in this late-capitalist hell hole? Any technology that gets developed will be used for its worst, most dehumanizing purpose possible. That's just the reality of the shitty society we live in. | |
▲ | munksbeer a day ago | parent | next [-] | | Do you know that there are groups of people around the world who feel the way you do and choose to go and live in smaller communities, abstaining from the trappings of the modern world? They live in self-built houses, have wells/streams for their own water, grow their own food. I don't believe they're entirely self-sufficient or insulated from the outside world, but they're close. I don't understand why people who seem to hate the modern world so much continue to live in it, and complain on the internet, when they have the option to live differently. |
▲ | 0manrho 2 days ago | parent | prev [-] | | You're a cheerful one, aren't you? All it takes for evil to prevail is for good people to sit by and do nothing. Don't like the situation you're in? Do something about it. Preferably something other than doomscrolling, but hey, you do you. |
|
|