JimmaDaRustla 12 hours ago

The author seems to imply that the "framing" of an argument is done in bad faith in order to win, but he only provides one-line quotes stripped of any surrounding context.

This tactic by the author is itself a straw-man argument - he frames the position of tech leaders, and our acceptance of it, as the reason AI exists, instead of admitting that they were simply right in their predictions: AI was inevitable.

The IT industry is full of pride and arrogance. We deny the power of AI and LLMs. I think that's fair, and I welcome the pushback. But the real word the IT crowd needs to learn is "denialism" - if you still don't see how LLMs are changing our entire industry, you haven't been paying attention.

Edit: Lots of denialists using false dichotomy arguments that my opinion is invalid because I'm not producing examples and proof. I guess I'll just leave this: https://tools.simonwillison.net/

jdiff 12 hours ago | parent | next [-]

The IT industry is also full of salesmen and con men, both enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility. AI could solve a lengthy list of outstanding architectural and technical problems and go full AGI next year.

If we're talking about changing the industry, we should see some clear evidence of that. But despite extensive searching of my own and after asking many proponents (feel free to jump in here), I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI. If this is so foundationally groundbreaking, that should be a clear signal. Personally, I would expect to see an explosion of this even if the hype is taken extremely conservatively. But I can't even track down a few solid examples. So far my searching only reveals one-off pull requests that had to be laboriously massaged into acceptability.

bsenftner 12 hours ago | parent | next [-]

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI.

That's because using AI to write code is a poor application of LLM AIs. LLMs are better suited to summary, advice, and reflection than forced into a Rube Goldberg Machine. Use your favorite LLM as a Socratic advisor, but not as a coder, and certainly not as an unreliable worker.

eddythompson80 12 hours ago | parent | next [-]

The entire hype for LLMs is that they can *do* anything. Even if only writing code, that could justify their hype. If LLMs mean Grammarly is now a lot better (and offered by big tech) then it’ll be very disappointing (economically speaking)

bsenftner 8 hours ago | parent [-]

I believe it was Flavor Flav who said "Don't believe the hype", and that pretty much applies to everything humans do.

robwwilliams 12 hours ago | parent | prev | next [-]

I support this comment. AI for coding still involves much prodding and redirecting in my limited experience. Getting Claude to produce even a simple SVG for a paper is a struggle.

But as a partner in neurophilosophy conversations, Claude is unrivaled, even compared to my neurophilosophy colleagues - the speed and responsiveness are impossible to beat. LLMs are good at pushing me to think harder. They provide the wall against which to bounce ideas, and those bounces often come from surprising and helpful angles.

Arch-TK 12 hours ago | parent | prev | next [-]

It's absolutely hilarious reading all these "you're holding it wrong" arguments because every time I find one it contradicts the previous ones.

usrbinbash 11 hours ago | parent | prev | next [-]

> That's because using AI to write code is a poor application of LLM AIs

Then why is that exact usecase being talked about ad nauseam by many many many "influencers", including "big names" in the industry? Why is that exact usecase then advertised by leading companies in the industry?

bsenftner 8 hours ago | parent [-]

It probably has a lot to do with the fact that those writing the code are not those marketing the finished software, and the two groups do not communicate very well. Once marketing gets momentum going on an idea, they will run with it, because they can sell it, and then make engineering pay that bill. Not their problem.

JimmaDaRustla 11 hours ago | parent | prev | next [-]

Agreed. That argument they made was a straw-man which doesn't really pertain to where LLMs are being leveraged today.

jdiff 10 hours ago | parent [-]

The claim that you made was "if you still don't see how LLMs is changing [the IT] industry, you haven't been paying attention." Pointing out that there is no visible evidence of that change in the industry you mentioned and inviting you to provide some where others have repeatedly failed is not attacking a strawman.

komali2 12 hours ago | parent | prev [-]

> Use your favorite LLM as a Socratic advisor

Can you give an example of what you mean by this?

hyperadvanced 12 hours ago | parent [-]

You read the book and have the LLM ask you questions to help deepen your understanding, for example.
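The pattern is simple enough to sketch as a prompt scaffold. This is only an illustration of the idea - the function name, system-prompt wording, and message shape are my own, not any prescribed recipe; you'd hand the resulting list to whatever chat-completion API you use:

```python
# A minimal sketch of the "Socratic advisor" pattern: instead of asking the
# LLM to summarize the book, you ask it to question *you* about it.
# All names and wording here are illustrative, not a prescribed API.

def socratic_messages(book_title, chapter, reader_note):
    """Build a chat-style message list for any chat-completion API."""
    return [
        {"role": "system",
         "content": ("You are a Socratic tutor. Do not summarize or lecture. "
                     "Ask one probing question at a time about the reader's "
                     "understanding, and push back on vague answers.")},
        {"role": "user",
         "content": (f"I just finished chapter {chapter} of '{book_title}'. "
                     f"My takeaway: {reader_note} "
                     "Quiz me on what I might have missed.")},
    ]

msgs = socratic_messages("How to Win Friends and Influence People", 3,
                         "admit your own mistakes quickly and emphatically.")
print(msgs[0]["role"])  # -> system
```

The point of the design is that the human stays the one producing answers; the model only interrogates.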

aydyn 11 hours ago | parent [-]

Or you don't read the book at all and ask the LLM to give you the salient points?

noah_buddy 11 hours ago | parent | next [-]

Socratic method usually refers to a questioning process, which is what the poster above is getting at in their terminology. Imo

teg4n_ 11 hours ago | parent | prev [-]

And cross your fingers it didn’t make them up?

aydyn 11 hours ago | parent [-]

Yes. If it can get me through 10 books in the same time it takes you to get through 1 I am fine with an extra 1% error rate or whatever.

jdiff 10 hours ago | parent [-]

If I spend an afternoon on CliffsNotes, I haven't read a hundred books in a day. This isn't one weird trick to accelerate your reading; it's entirely missing the point. If any book could be summarized in a few points, there would be no point in writing anything more than a BuzzFeed listicle.

aydyn 10 hours ago | parent [-]

Lots of books CAN be summarized in a few points.

komali2 2 hours ago | parent [-]

Yes, that's true, but bullet point ideology isn't very useful imo.

For example you can easily sum up "How to Win Friends and Influence People" into a few short recommendations. "Empathize with people, listen closely to people, let them do most of the talking, be quick to be upfront and honest with your faults." But it gets tenuous and some of the exact meaning is lost, and the bullet points aren't very convincing on their own of the effectiveness of the recommended methods.

The book fleshes out the advice with clarifications as well as stories from Carnegie's life of times he used the techniques and their results.

I think humans are good at remembering stories. For example I always remember the story about him letting his dog off the leash in a dog park and how he leveraged his technique for getting off the hook when a cop confronted him about it. I think I remember that better than the single bullet point advice of "readily and quickly admit when you're wrong." In fact I think the end of each chapter succinctly states the exact tip and I'm sure I'm misstating it here.

I could use Anki to memorize a summary, but I don't think I'd be able to incorporate the techniques into my behavior as effectively without the working examples, stories, and evidence he provides.

And that's just non fiction. I can't fathom the point of summarizing a fiction book, whose entire enjoyment comes from the reading of it.

pama 11 hours ago | parent | prev | next [-]

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI

This popular repo (35.6k stars) documents the fraction of code written by LLM for each release since about a year ago. The vast majority of releases since version 0.47 (now at 0.85) had the majority of their code written by LLM (average code written by aider per release since then is about 65%.)

https://github.com/Aider-AI/aider

https://github.com/Aider-AI/aider/releases

KerrAvon 11 hours ago | parent [-]

I think we need to move the goalposts to "unrelated to/not in service of AI tooling" to escape easy mode. Replace some core unix command-line tool with something entirely vibecoded. Nightmare level: do a Linux graphics or networking driver (in either Rust or C).

hmry 10 hours ago | parent [-]

Yes, I agree. The same way that when you ask "Are there any production codebases written in Language X", you typically mean "excluding the Language X compiler & tooling itself." Because of course everyone writing a tool loves bootstrapping and dogfooding, but it doesn't tell you anything about production-readiness or usefulness / fitness-for-purpose.

belter 12 hours ago | parent | prev | next [-]

> If we're talking about changing the industry, we should see some clear evidence of that.

That’s a great point...and completely incompatible with my pitch deck. I’m trying to raise a $2B seed round on vibes, buzzwords, and a slightly fine-tuned GPT-3.5.. You are seriously jeopardizing my path to an absurdly oversized yacht :-))

BoiledCabbage 11 hours ago | parent | prev | next [-]

> The IT industry is also full of salesmen and con men, both enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility.

The iPhone and the subsequent growth of mobile (and the associated growth of social media, which is really only possible in its current form with ubiquitous mobile computing) are evidence it did change everything. Society has been reshaped by mobile/iPhone and its consequences.

NFTs were never anything, and there was never an argument that they were. They were a speculative financial item, and it was clear all the hype was due to greater fools and FOMO. Equating the two is silly. That's like arguing some blockbuster movie like Avengers Endgame was going to "change everything" because it was talked about and advertised. It was always just a single piece of entertainment.

Finally, for LLMs, a better comparison would have been the '80s AI winter. The question should be: "Why will this time not be like then?" And the answer is simple: if LLMs and generative AI models never improve an ounce - if they never solve another problem, nor get more efficient, nor get cheaper - they will still drastically change society, because they are already good enough today. They are doing so now.

Advertising, software engineering, video making. The tech is already far enough along that it is changing all of these fields. The only thing happening now is the time it takes for idea diffusion. People learning new things and applying them are the slow part of the loop.

You could have made your argument pre-ChatGPT, and possibly in the window of the following year or two, but at this point the tech to change society exists; it just needs time to spread. All it needs are two things: the tech stays the same, and prices roughly stay the same. (No improvements required.)

Now there still is a perfectly valid argument to make against the more extreme claims we hear of: all work being replaced..., and that stuff. And I'm as poorly equipped to predict that future as you (or anyone else) so won't weigh in - but that's not the bar for huge societal change.

The tech is already bigger than the iPhone. I think it is equivalent to social media (mainly because I think most people still really underestimate how enormous the long-term impact of social media on society will be: politics, mental health, extremism, addiction - all things that existed before but are now "frictionless" to obtain. But that's for some other post...).

The question in my mind is will it be as impactful as the internet? But it doesn't have to be. Anything between social media and internet level of impact is society changing. And the tech today is already there, it just needs time to diffuse into society.

You're looking at Facebook after introducing the algorithm for engagement. It doesn't matter that society wasn't different overnight, the groundwork had been laid.

hobs 12 hours ago | parent | prev | next [-]

One problem with the code generation tools is that they devalue code at the same time as they produce low-quality code (without a lot of human intervention).

So it would be weird, at this stage of the game, for a project mostly maintained by people who care about their problem/code (OSS) to be "primarily maintained by AI" in a group setting.

jdiff 10 hours ago | parent [-]

Exactly the problem. It doesn't need to be good enough to work unsupervised in order to gain real adoption. It just needs to be a performance or productivity boost while supervised. It just needs to be able to take an AI-friendly FOSS dev (there are many), and speed them along their way. If we don't have even that, then where is the value (to this use case) that everyone claims it has? How is this going to shake the foundations of the IT industry?

hobs 6 hours ago | parent [-]

Because convincing the owners of the house that they have a shaky foundation and you have a cheap fix matters more than the actual integrity of the house or the quality of the fix.

There's no question that the predictions around LLMs are shaking up the industry - see the mass layoffs and the eight-figure offers to individual contributors. The question is: will it materially change things for the better? No idea.

jdiff 5 hours ago | parent [-]

Do you have any more info on the 8 figures? I hadn't come across that, but that's quite interesting to hear.

For the mass layoffs, I was under the belief that those were largely driven by the tax code alterations in the US.

hobs 5 hours ago | parent [-]

https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re... Nah, big companies don't even care about that very much; they have a million tax dodges. It's the smaller startups that are deeply impacted by that type of change.

runarberg 12 hours ago | parent | prev | next [-]

This has been researched, and while the existing research is young and inconclusive, the outlook is not so good for the AI industry, or rather for the utility of their product, and the negative effects it has on their users.

https://news.ycombinator.com/item?id=44522772

JimmaDaRustla 12 hours ago | parent | prev | next [-]

> LLMs could hit an unsolvably hard wall next year and settle into a niche of utility

LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI

Straw-man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?

> If this is so foundationally groundbreaking, that should be a clear signal.

As I said, you haven't been paying attention.

Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid

NilMostChill 11 hours ago | parent | next [-]

> LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.

That is an exaggeration; it is integrated into some workflows, usually in a provisional manner while the full implications of such integrations are assessed for mid- to long-term viability.

At least in the fields of which I have first-hand knowledge.

> Straw-man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?

Straw-man rebuttal: presenting an imaginary position in which this statement doesn't apply doesn't invalidate the statement as a whole.

> As I said, you haven't been paying attention.

Or alternatively you've been paying attention to a selective subset of your specific industry and have made wide extrapolations based on that.

> Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid

What's the one where you claim strong proof or evidence while only providing anecdotal "trust me bro" ?

jdiff 10 hours ago | parent | prev | next [-]

> LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.

Having a niche is different from being niche. I also strongly believe you overstate how integrated they are.

> Straw-man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?

As mentioned, I have looked. I told you what I found when I looked. And I've invited others to look. I also invited you. This is not a straw man argument, it's making a prediction to test a hypothesis and collecting evidence. I know I am not all seeing, which is why I welcome you to direct my eyes. With how strong your claims and convictions are, it should be easy.

Again: You claim that AI is such a productivity boost that it will rock the IT industry to its foundations. We cannot cast our gaze on closed source code, but there are many open source devs who are AI-friendly. If AI truly is a productivity boost, some of them should be maintaining widely-used production code in order to take advantage of that.

If you're too busy to do anything but discuss, I would instead invite you to point out where my reasoning goes so horrendously off track that such examples are apparently so difficult to locate, not just for me, but for others. If one existed, I would additionally expect it to be held up as an example and become widely known, given how often this question gets asked. But the world's full of unexpected complexities; if there's something holding AI back from seeing adoption reflected in the way I predict, that's also interesting and worth discussing.

dingnuts 11 hours ago | parent | prev [-]

> Can you not find open source codebases that leverage LLMs because you haven't looked, or because you can't tell the difference between human and LLM code?

The money and the burden of proof are on the side of the pushers. If LLM code is as good as you say it is, we won't be able to tell that it's merged. So you need to show us lots of examples of real-world LLM code that we know, a priori, is generated, to compare.

So far most of us have seen ONE example, and it was that OAuth experiment from Cloudflare. Do you have more examples? Who pays your bills?

cdelsolar 11 hours ago | parent [-]

What are you talking about? I have multiple open-source projects where I've generated multiple PRs with 90+% AI tools. I don't care that the code isn't as good, because I have people using these features and the features work.

1) https://github.com/domino14/Webolith/pull/523/files (Yes, the CSS file sucks. I tried multiple times to add dark mode to this legacy app and I wasn't able to. This works, and is fine, and people are using it, and I'm not going to touch it again for a while)

2) https://github.com/domino14/macondo/pull/399 - A neural net for playing Scrabble. Has not been done before, in at least an open-source way, and this is a full-fledged CNN built using techniques from Alpha Zero, and almost entirely generated by ChatGPT o3. I have no idea how to do it myself. I've gotten the net to win 52.6% of its games against a purely static bot, which is a big edge (trust me) and it will continue to increase as I train it on better data. And that is before I use it as an actual evaluator for a Monte Carlo bot.

I would _never_ have been able to put this together in 1-2 weeks when I am still working during the day. I would have had to take NN classes / read books / try many different network topologies and probably fail and give up. Would have taken months of full-time work.

3) https://github.com/woogles-io/liwords/pull/1498/files - simple, but one of many bug fixes that was diagnosed and fixed largely by an AI model.

ModernMech 9 hours ago | parent [-]

I think this is what the original poster means. The value proposition isn't "As a developer, AI will allow you to unlock powers you didn't have before and make your life easier". They're selling it as "AI can do you job."

We are being sold this idea that AI is able to replace developers, wholesale. But where are the examples? Seemingly, every example proffered is "Here's my personal project that I've been building with AI code assistants". But where are the projects built by AI developers (i.e. not people developers)? If AI was as good as they say, there should be some evidence of AI being able to build projects like this.

ToucanLoucan 11 hours ago | parent | prev | next [-]

> The IT industry is also full of salesmen and con men, both enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility.

Not only is that a could, I'd argue they already are. The huge new "premier" models are barely any better than the big ticket ones that really kicked the hype into overdrive.

* Using them as a rubber duck that provides suggestions back for IT problems and coding is huge, I will fully cosign that, but it is not even remotely worth what OpenAI is valued at or would need to charge to make it profitable, let alone pay off its catastrophic debt. Meanwhile every other application is a hard meh.

* The AI generated video ads just look like shit and I'm sorry, call me a luddite if you will, but I just think objectively less of companies that leverage AI video/voices/writing in their advertisements. It looks cheap, in the same way dollar store products have generic, crappy packaging, and makes me less willing to open my wallet. That said I won't be shocked at all if that sticks around and bolsters valuations, because tons of companies worldwide have been racing to the bottom for decades now.

* My employer has had a hard NO AI policy for both vetting candidates and communicating with them for our human resources contracting and we've fired one who wouldn't comply. It just doesn't work, we can tell when they're using bots to review resumes because applicants get notably, measurably worse.

LLMs are powerful tools that have a place, but there is no fucking UNIVERSE where they are the next iPhone that silicon valley is utterly desperate for. They just aren't.

JimmaDaRustla 12 hours ago | parent | prev | next [-]

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI.

As I stated, you haven't been paying attention.

mcherm 12 hours ago | parent | next [-]

A better-faith response would be to point out an example of such an open source codebase OR tell why that specific set of restrictions (open-source, active production, primarily AI) is unrealistic.

For instance, one might point out that the tools for really GOOD AI code authoring have only been available for about 6 months so it is unreasonable to expect that a new project built primarily using such tools has already reached the level of maturity to be relied on in production.

JimmaDaRustla 11 hours ago | parent | next [-]

I don't have time to handhold the ignorant.

I do however have time to put forth my arguments now that I use LLMs to make my job easier - if it weren't for them, I wouldn't be here right now.

dminik 11 hours ago | parent | next [-]

One would think that with all of the time AI is clearly saving you, you could spare some of it for us uneducated peasants.

eddythompson80 2 hours ago | parent [-]

They are clearly busy dunking on us all.

eddythompson80 11 hours ago | parent | prev [-]

You don’t have time to post a link with an example. You have time to post a wall of text instead.

JimmaDaRustla 11 hours ago | parent [-]

My code isn't open source.

eddythompson80 11 hours ago | parent [-]

Checkmate

JimmaDaRustla 11 hours ago | parent [-]

You didn't checkmate anything.

You're perfectly capable of looking at the world around you. You're arguing in bad faith using a false dichotomy that I must be able to produce examples or my argument is not valid. You're trying to suck all the air out of the room and waste time.

https://tools.simonwillison.net/

ChECk MaTee

slacktivism123 11 hours ago | parent [-]

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

https://news.ycombinator.com/newsguidelines.html

Aside from that, I don't see how the collection of simple one-shot JavaScript wrappers (like "Extract URLs", "Word Counter", and "Pomodoro Timer") that you keep bringing up is related to your argument.

eddythompson80 11 hours ago | parent | prev [-]

“you haven’t been listening. It’s inevitable to happen”

mashygpig 12 hours ago | parent | prev | next [-]

I don’t find it fair that you point out straw man in your parent comment and then use ad hominem in this comment. I would love to see you post some examples. I think you’d have a chance of persuading several readers to at least be more open minded.

hinkley 12 hours ago | parent | prev | next [-]

> But despite extensive searching myself and after asking many proponents

12 hours ago | parent | prev | next [-]
[deleted]
nkrisc 12 hours ago | parent | prev | next [-]

So… which ones?

JimmaDaRustla 11 hours ago | parent [-]

Mine, it's how I have time to argue with the denialists right now.

eddythompson80 11 hours ago | parent [-]

Nice sales pitch.

belter 12 hours ago | parent | prev [-]

At least we know you are human, since you are gaslighting us instead of citing a random link that leads to a 404 page. An LLM would have confidently hallucinated a broken reference by now.

Cthulhu_ 11 hours ago | parent [-]

Not necessarily, one aspect of the LLM arms race is to have the most up-to-date records or to use a search engine to find stuff.

jdiff 10 hours ago | parent [-]

If the LLM thinks to consult a search engine. If the next token predicted is the start of a link rather than the start of a tool call, it's going to be spitting out a link. Getting them to reliably use tools rather than freeball seems to be quite a difficult problem to solve.
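That branch point can be made concrete with a toy sketch. This is not any real inference stack - the token names and scores below are invented for illustration - but it shows why nothing structurally forces the tool-call path: the model simply emits whichever continuation it scores highest at that step.

```python
# Toy illustration (not a real inference stack): at each decoding step the
# model emits whichever continuation it scores highest. Nothing structurally
# forces a special tool-call token to outscore a plausible-looking URL.

def next_action(scores):
    """Pick the highest-scoring continuation from a dict mapping
    continuation -> model score (e.g. a log-probability)."""
    return max(scores, key=scores.get)

# Hypothetical scores: a model trained on millions of citations may rate a
# URL-shaped continuation above the tool-call token, and vice versa for a
# model heavily tuned for tool use.
hallucination_prone = {"<tool:search>": -1.9, "https://": -0.7}
tool_tuned = {"<tool:search>": -0.4, "https://": -2.1}

print(next_action(hallucination_prone))  # -> https://
print(next_action(tool_tuned))           # -> <tool:search>
```

In the first case the model freeballs a link; reliable tool use means training or constraining decoding so the second case always wins, which is exactly the hard part.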

4star3star 12 hours ago | parent | prev [-]

At the heart of it all is language. Logic gates to assembly to high level programming languages are a progression of turning human language into computed processes. LLMs need to be tuned to recognize ambiguity of intention in human language instructions, following up with clarifying questions. Perhaps quantum computing will facilitate the process, the AI holding many fuzzy possibilities simultaneously, seeking to "collapse" them into discrete pathways by asking for more input from a human.

philipwhiuk 12 hours ago | parent | prev | next [-]

> AI was inevitable.

This is post hoc ergo propter hoc. AI exists thus it must have been inevitable.

You have no proof it was inevitable.

(Also AI means something wildly different than it meant a few years ago - I remember when AI meant AGI, the salesmen have persuaded you the emperor has clothes because they solved a single compelling test).

srveale 12 hours ago | parent | next [-]

I keep seeing the "AI of the gaps" argument, where AI is whatever computers currently can't do. I wonder when I'll stop seeing it.

dinfinity 11 hours ago | parent | prev | next [-]

> This is post hoc ergo propter hoc. AI exists thus it must have been inevitable.

The problem with that statement is that it doesn't say anything about why AI will take over pretty much everything.

The actual answer to that is that AI is not limited by a biological substrate and can thus:

1. Harness (close to) the speed of light for internal signals; biology is limited to about 200 m/s, six orders of magnitude slower. AI has no such limitation.

2. Scale very easily. Human brains are limited in how big they can get by silly things such as the width of the birth canal and sitting atop a (bipedal) body that uses organic mass to inefficiently generate power. Scaling a human brain beyond its current size and the ~20 watts it draws is an incredibly hard engineering challenge. For AI, scaling is trivial by comparison.

I'm not saying it's going to be LLMs, but longterm we can say that the intelligent entities that will surpass us will not have the same biological and physical limitations as we do. That means they very, very probably have to be 'artificial' and thus, that AI taking over everything is 'inevitable'.

Miraltar 12 hours ago | parent | prev | next [-]

Well, a few years ago I was a CS student and my program had the AI label stamped on it. We talked about machine learning, neural networks, and stuff like that, and we called that AI. There was never a mention of AGI. I don't know if it's a translation thing, but AI = AGI was never a thing for me. As long as there is no clear definition, people will keep arguing, because we each have our own blurry picture.

charles_f 10 hours ago | parent | prev | next [-]

> I remember when AI meant AGI

Interestingly I had the same definition, and at the same time there have always been multiple definitions. I have always called whatever animates NPCs in games "AI", even when the thing is hard-coded and not very intelligent at all. I guess that calling AI a category of tools that are artificial and somewhat intelligent is fair.

I also anticipate that what we call AGI will be fluid, and that, marketing being marketing, we'll start calling actual products AGI before any of them genuinely are.

SketchySeaBeast 12 hours ago | parent | prev [-]

Good aside - this is really the hoverboard rebranding of the 2020s.

conartist6 12 hours ago | parent | prev | next [-]

It is probably both inevitable that the LLM technology we have now would be invented and inevitable that there would be a major pushback against it. In any world, this would be a technology that takes from some to give to others.

Given that, nothing about the future seems inevitable to me. The law isn't settled. Public opinion isn't settled. Even a great deal of the hype keeping the bubble from popping is still founded on talk of AGI that I now consider absurd...

tkiolp4 12 hours ago | parent | prev | next [-]

The problem is that such "tech leaders" get their mouths full of AI with one goal only: to reduce their workforce to the minimum and maximize their profits. Sure, they are companies and yada yada, but I would like to see a better argument for why we should all embrace AI. So far, as intrinsically amazing as AI is, it's getting a bad rep because its main and lousiest supporters are tech billionaires.

spyckie2 10 hours ago | parent | prev | next [-]

I think the author is encouraging out of the box thinking. The framing of "inevitable" is a box (an assumption) that we can include in our analysis or judgment, rather than assume it to be true.

But to your point, his debate analogy does imply that tech enthusiasts are arguing in bad faith in order to win an argument, because the goal of winning a debate has no good faith aspect to it (good faith in debate terms is seeking the truth, bad faith is winning an argument).

But just because he is wrong doesn't mean he isn't useful.

perching_aix 11 hours ago | parent | prev | next [-]

> The author seems to imply that the "framing" of an argument is done so in bad faith in order to win an argument (...) This tactic by the author is a straw-man argument

This is what I was expecting from the title, but not really what I found in the content in the end. To me it read as being more about argumentation and inevitabilism in general than about LLMs specifically. From my perspective, claiming otherwise and riding that claim rings of mischaracterization.

... Which is also an acknowledgement I missed from the article. The use of inevitability as a framing device is just one of many forms of loaded language, and of encoding shared assumptions without pre-establishing that the other person actually shares them. Notice how I didn't say that you're mischaracterizing the article outright - we clearly read what was written differently. To assert my interpretation as correct by encoding it as framing would be pretty nasty. Sadly it's not uncommon, though, and it's one of those things that, if you actually try to control for it, makes writing in a digestible way very hard to impossible.

overgard 11 hours ago | parent | prev | next [-]

So far what I've seen from LLMs writing code is insecure, bug-ridden slop. They are changing the industry in that I now have to code review messes from developers and non-developers being careless. AI image and video generation isn't much better.

CEOs and investors love to talk about how "scary" AI is and publicly advocate for regulation (while privately shutting it down) because they NEED the money to keep flowing, because these things aren't profitable. Inevitabilism is a very accurate description of their PR campaign, and it's sadly working, for the moment.

bwfan123 11 hours ago | parent | prev | next [-]

The "framing" is a tactic called "assuming the sale" - where statements are made as if they are already true and the burden is placed on the other side to negate them. Combine that with other tactics like fomo, scarcity, and authority, and you will have people practically begging for these tools. As an example:

"Edison of our times Elon Musk (authority) believes that the AI agents are the future (assuming the sale), and most developers are already using it to improve productivity (fomo, social-proof). MCP agents are in short supply due to tariff driven bottlenecks, so buy them while supplies last (scarcity)".

This sort of influencing is accelerated by social media; it is all around us and short-circuits critical thinking in most of us.

lupusreal 11 hours ago | parent | prev | next [-]

Reading between the lines of the OP, the author seems to think that the future of LLMs will be determined by debate and that he can win that debate by choosing the framing of the debate.

The whole meat of his article is about this debate technique, ostensibly saying that's what the other guys are doing, but really he's only described what he himself is doing.

charles_f 10 hours ago | parent [-]

I didn't read it that way. I understood it as saying that tech companies are currently framing the narrative as "inevitable", and that you should ask yourself the other questions, such as "do I want it?"

lupusreal 9 hours ago | parent [-]

The question of whether any individual wants it ultimately won't matter. Now that the technology exists and has found traction, it continuing to exist and have traction until eventually being superseded by something even more useful is inevitable.

The author seems to think that the existence of the technology can be decided by debate to sway people one way or the other, but that's not how it works. Real life doesn't work like a debate club. The people who are saying that the technology is inevitable aren't trying to do a debate persuasion thing to make it inevitable, that's just the way the author wants to see it because that framing makes it negotiable. But there's no negotiating with the course of technological development.

raincole 11 hours ago | parent | prev | next [-]

The usual argument is that the tech leaders are saying that only because they've invested in AI.

...Like, you don't say? If one truly believes AI is inevitable, then of course they're going to put money in AI.

I wonder how many people who claim we're in an AI bubble actually short AI stocks and $NVDA. Or they'd just stutter "uh you know the market can stay irrational longer than we can stay solvent..." when asked.

tartoran 10 hours ago | parent [-]

There's no doubt that LLMs are useful and will generate some productivity. The hype elevates them to silver-bullet tech, though. This inevitably creates a bubble that will pop at some point. People who see signs of a bubble don't short because they lack the details of when exactly this will happen.

belter 12 hours ago | parent | prev | next [-]

> LLMs is changing our entire industry,

- So far, the only ones making real money are the "shovel sellers": Nvidia, AWS, and the GPU resale hustlers. Everyone else is still trying to figure out how to turn the parrots into profit.

- Probabilistic code generators are not the dawn of a new scientific era that will propel us to the Stars. Just autocomplete on steroids, impressive, but not what will launch humanity into post-scarcity.

- So far what you have is a glorified compression algorithm. A remix of Reddit, StackOverflow, and Wikipedia...With the confidence of a TED speaker and the understanding of a parrot.

- If LLMs are truly the road to AGI, try sending one to MIT. No internet, no textbook ingestion, no Leetcode prep. Just cold start intelligence. If it graduates...we might have something....

Right now, this is just confusing correlation for cognition. Compression for comprehension. And mimicry for mastery. The revolution may come, but not in the next four quarters. Which is bad news if you are a VC... or Mark Zuckerberg...

JimmaDaRustla 12 hours ago | parent [-]

> you haven't been paying attention.

belter 12 hours ago | parent [-]

Still no falsifiable claim in sight....

qotgalaxy 12 hours ago | parent | prev | next [-]

[dead]

motorest 12 hours ago | parent | prev [-]

> But the real word the IT crowd needs to learn is "denialism" - if you still don't see how LLMs is changing our entire industry, you haven't been paying attention.

The best part about this issue is that it's a self-correcting problem. Those who don't are risking being pushed out of the job market, whereas those who do will face better odds.

I'm sure the Luddites also argued that no one needed a damn machine to weave a rug, and that machine-woven rugs didn't have any soul.

sensanaty 10 hours ago | parent | next [-]

Every time pro-AI people bring up the Luddites I have to laugh, because they've clearly not had their magic little boxes actually tell them anything about the Luddites.

They argued the exact opposite: they wanted proper training on how to use the "damn machines", as people were literally dying because of being untrained in their usage. They were also set upon by hired thugs and mercenaries who proceeded to beat and even kill Luddites for daring to speak out against horrible worker conditions in the factories.

It's pretty funny, today's anti-Luddites being exactly like the anti-Luddites of yore.

motorest 9 hours ago | parent [-]

> They argued the exact opposite, they wanted proper training on how to use the "damn machines" as people were literally dying because of being untrained in their usage.

That's very interesting to hear, and also very unfortunate, given the loneliness your personal belief reflects. For example, your personal belief contrasts with what's clearly stated and supported in Wikipedia's article on Luddites. Is that because the whole world around you is wrong and you are the only lonely chap burdened by the truth?

https://en.wikipedia.org/wiki/Luddite

The interesting detail you are either unaware of or chose to omit is that "training" only registered as a concern as industrialization completely eliminated the competitiveness of, and consequently the need for, what at the time was high-skilled albeit manual labor. The Luddites' argument regarding training was not that industrial mills didn't have training, but that they "produced textiles faster and cheaper because they could be operated by less-skilled, low-wage labourers." This is a direct citation, not something that "magic little boxes" spit out. That's what motivated uprisings against these "magic little boxes": the threat that automaton posed to their livelihood for their once irreplaceable skillet being suddenly rendered useless overnight.

So, people like you who are uninformed and ignorant of history should spend some time gathering insights into the problem, to have a chance of understanding what's right in front of your nose. As Mark Twain said, history doesn't repeat itself but it often rhymes. Luddites represent those who failed to understand the impact that automation had on humanity, refused to understand the changes placed upon them, and misdirected their energy, and ultimately their frustration and anger, onto futile targets. The key factor is ignorance and unpreparedness. Fooling yourself with creative exercises covering up militant levels of ignorance does not change this one bit.

But you do you. The universe has this tendency to self correct.

sensanaty 6 hours ago | parent [-]

> Luddites represent those who failed to understand the impact that automation had on humanity

You contradict yourself in your prior paragraph here. Were they failing to understand the impact of automation, or did they realize "the threat that automaton[sic] posed to their livelihood for their once irreplaceable skillet[sic] being suddenly rendered useless overnight"?

From your same Wiki article, since we're pulling quotes here:

> 12,000 government troops, most of them belonging to militia or yeomanry units... which historian Eric Hobsbawm wrote was a larger number than the British army which the Duke of Wellington led during the Peninsular War.

> Four Luddites, led by a man named George Mellor, ambushed and assassinated mill owner William Horsfall of Ottiwells Mill in Marsden, West Yorkshire, at Crosland Moor in Huddersfield. Horsfall had remarked that he would "Ride up to his saddle in Luddite blood".

> I have been in some of the most oppressed provinces of Turkey; but never, under the most despotic of infidel governments, did I behold such squalid wretchedness as I have seen since my return, in the very heart of a Christian country.

Does this sound like the response you'd have against ignorant rabble who don't know what's coming for them? Or is it exactly because the luddites were proven 100% correct that the government felt it was needed to send out an army against them in order to quash their movement?

And if history rhymes as Mark Twain so succinctly put it, then it follows that we're at the same exact stanza as the luddites found themselves in back then, where the industrialists are seeking to automate away the livelihoods of large swathes of skilled people overnight. Except this time, the automation we're talking about will, at least in theory if you believe the AI hypesters, take away everyone's jobs, other than those "lucky" enough to be doing physical labor of some sort that we for now can't seem to get the robots to do. Oh, and of course conveniently the C-levels that are pushing this whole AI movement are irreplaceable as well.

> The universe has this tendency to self correct

Is the universe here going to self-correct back into, and I quote again from the same Wiki article:

> In the 19th century, occupations that arose from the growth of trade and shipping in ports, also as "domestic" manufacturers, were notorious for precarious employment prospects. Underemployment was chronic during this period,[40] and it was common practice to retain a larger workforce than was typically necessary for insurance against labour shortages in boom times.

Also, none of what you said goes against what I said in my original comment. The pro-AI people are exactly the same as the anti-Luddites back then, except this time, for some inexplicable reason, there's a subsection of the populace who are excited about being replaced overnight and forced back into squalor and horrific working conditions. Hell, they're the ones building the damn replacement machines in the first place! They're practically chomping at the bit to let AI eat the entire world, I guess because of some vague notion of "progress" that doesn't materialize for anyone other than the likes of Musk and Bezos, who are ultimately the ones who reap the benefits of everyone else's sacrifices. All in the name of "progress", of course.

anon84873628 11 hours ago | parent | prev [-]

That's actually not what the luddites argued at all; they were very explicitly trying to protect their own economic interests.

An AI could have told you that in 2 seconds.