| ▲ | chis a day ago |
| Yeah I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology. It seems much more reasonable to think that they really believe the things they're saying than that it's some kind of 4D chess. Additionally, Dario has been really accurate with his predictions so far: for instance, in early 2025 he predicted that nearly 100% of code would be written with AI in 2026.
|
| ▲ | alecbz a day ago | parent | next [-] |
| I think if you just look at what people like Sam Altman are doing, it's clear that they don't believe everything they're saying regarding AI safety.
| > nearly 100% of code would be written with AI in 2026
| I feel like this is kind of a meaningless metric, or at least very difficult to measure. There's a spectrum of "let AI write the code", from "don't ever even look at the code produced" to "carefully review all the output and have AI iterate on it". Also, it seems possible that as time goes on people will _stop_ using AI to write code as much, or at least shift more to the right side of that spectrum, as we start to discover all kinds of problems caused by AI-authored code with little to no human oversight.
|
| ▲ | roxolotl a day ago | parent | prev | next [-] |
| It helps with sales because they position it as “we can give you the power to end the world.” There’s plenty of people who want to wield that sort of power. It doesn’t have to be 4D chess. Maybe they are being genuine. But it is helping sales. |
| |
| ▲ | DennisP a day ago | parent | next [-] | | They're not saying today's AI has that kind of power, and they're not saying future superintelligent AI will give you that power. They're saying it will take all power from you, and possibly end you. If this is some kind of twisted marketing, it's unprecedented in history. Oil companies don't brag about climate change. Tobacco companies don't talk about giving people cancer. If AI companies wanted to talk about how powerful their AI will be, they could easily brag about ending cancer, curing aging, or solving climate change. They're doing a bit of that, but also warning it might get out of control and kill us all. They're getting legislators riled up about things like limiting data centers. People saying this aren't just company CEOs. It's researchers who've been studying AI alignment for decades, writing peer reviewed papers and doing experiments. It's people like Geoffrey Hinton, who basically invented deep learning and quit his high-paying job at Google so he could talk freely about how dangerous this is. This idea that it's a marketing stunt is a giant pile of cope, because people don't want to believe that humanity could possibly be this stupid. | | |
| ▲ | otabdeveloper4 a day ago | parent [-] | |
| > If this is some kind of twisted marketing, it's unprecedented in history.
| They're marketing AI to investors, not to end-user plebs. This is a pump-and-dump scheme. | | |
| ▲ | DennisP a day ago | parent [-] | | Exxon has never bragged to investors that they'd burn so much oil, civilization would collapse from climate change. They've always talked about how great fossil fuels are for the economy and our living standards. It makes no sense to sell apocalypse to investors either. | | |
| ▲ | otabdeveloper4 20 hours ago | parent [-] | | They're selling FOMO to investors. "Last chance to jump on the AI train, invest into your future robot overlord or be turned into biodiesel for datacenters in the future." | | |
| ▲ | DennisP 14 hours ago | parent [-] | | There's no reason to think an out-of-control ASI would spare its investors. | | |
| ▲ | otabdeveloper4 7 hours ago | parent [-] | | There's no reason to think it wouldn't. Shouldn't you hedge your bets? Also, you can probably make a shitton of money as an out-of-control-AI-investor while the world is in the process of being destroyed. | | |
| ▲ | DennisP 2 hours ago | parent [-] | | There are all sorts of things you could do that might make an AI like you, and none of them have more justification than any other. This is not an argument AI firms are making. I agree that short-term greed is driving investment, but it would drive just as much investment if AI companies were not warning of apocalypse. Probably it would drive even more, because there'd be less risk of regulatory interference, and more future profit to discount into the present. So why are they making those warnings? It doesn't benefit them. The simplest explanation is that this stuff actually is dangerous, and people who know that are worried. |
|
| |
| ▲ | cyanydeez a day ago | parent | prev [-] | | Isn't it more: "We can give you the power to eliminate the people in your organization you don't like", which expands into basically dismantling all government & business for the benefit of the guy with the largest wallet? It's hard to see it as anything but a button anyone with enough money can press to suddenly replace the people that annoy them (first digitally, then likely in the flesh).
|
|
| ▲ | edbaskerville a day ago | parent | prev | next [-] |
| Does anyone have good estimates of what percent of real production code is currently being written by LLMs? (& presumably this is rather different for your typical SaaS backend vs. frontend vs. device drivers vs. kernel schedulers...) |
| |
| ▲ | mbesto a day ago | parent | next [-] | | By all companies? I'd say less than 10% of all LOC today are generated by LLMs. | | |
| ▲ | scottyah a day ago | parent [-] | | Really? In my bubble of internet news it seems the number of companies that have formed and shipped LLM code to production has already surpassed the number of existing companies. I've personally shipped dozens of (mediocre) human-months' or years' worth of code to "production", almost certainly more than I've ever written for companies I've worked at (to be fair, I've been a lot more on the SRE side for a few years now).
| |
| ▲ | SpicyLemonZest a day ago | parent | prev [-] | | Depends on your reference class. There's a lot of companies and teams where it's literally 100%, and I would be surprised if there were any top company where it's below 75%. I wouldn't be terribly surprised if the industry-wide percentage were a lot lower, although I also have no idea how you'd measure that. | | |
| ▲ | otabdeveloper4 a day ago | parent [-] | |
| > I would be surprised if there were any top company where it's below 75%
| I would be surprised if there were any top company where it's above 5%. The slop Claude generates isn't going anywhere near production without being edited by hand. | | |
| ▲ | SpicyLemonZest a day ago | parent [-] | | Perhaps it depends on what you mean by "edited by hand"? It's definitely still common for human beings to review generated code and tell Claude "no you need to do it this way". But most developers at Google, Meta, etc. no longer open up an IDE and type in code themselves. | | |
| ▲ | otabdeveloper4 20 hours ago | parent [-] | | I don't give a bleep what the bleeps at Google and Meta are doing. (Judging by the quality of ""software"" they put out - probably nothing all day.) In reality it's extremely rare that AI generated code isn't combed through line-by-line and refactored. (For real software, that is, not VC scams like OpenClaw or litellm or whatever.) |
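
[Editor's note] The measurement question in this subthread can at least be approximated in repos where the tooling leaves a trace. A minimal sketch, assuming AI-assisted commits carry a `Co-Authored-By: Claude` trailer (some coding agents add one; the trailer text, helper name, and toy numbers below are illustrative assumptions, not a standard):

```python
# Hypothetical sketch: estimate the fraction of added lines that came from
# AI-assisted commits, assuming such commits are marked with a trailer.
# Input stands in for parsed `git log --numstat` output: (message, lines_added).

AI_TRAILER = "Co-Authored-By: Claude"  # assumed marker; varies by tool

def ai_line_share(commits):
    """Return AI-attributed added lines as a fraction of all added lines."""
    total = sum(n for _, n in commits)
    ai = sum(n for msg, n in commits if AI_TRAILER.lower() in msg.lower())
    return ai / total if total else 0.0

# Toy data: one hand-written commit, one assistant-attributed commit.
sample = [
    ("Fix auth bug", 40),
    ("Add billing endpoint\n\nCo-Authored-By: Claude <noreply@anthropic.com>", 160),
]
print(round(ai_line_share(sample), 2))  # -> 0.8
```

Even granting the assumption, this only counts commits where the tool left a trailer; squash merges and hand edits after generation make any such number a lower bound, which is part of why the commenters above can't agree on a figure.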
|
| ▲ | b00ty4breakfast a day ago | parent | prev | next [-] |
| it pushes the idea that these programs are super amazing and powerful to people who are non-technical. It also allows them to control the narrative of how exactly AI is dangerous to society. Rather than worry about the energy consumption of all these new datacenters, they can redirect attention to some far-off concern about SHODAN taking over Citadel Station and turning the inhabitants into cyber-mutants or whatever. |
|
| ▲ | rootusrootus a day ago | parent | prev | next [-] |
| > nearly 100% of code would be written with AI in 2026
| HN is the only place I have heard it seriously suggested that anything like this is happening or likely to happen. We certainly get a lot of cheerleading here; my guess is that in the trenches the fraction is way lower.
|
| ▲ | Terr_ a day ago | parent | prev | next [-] |
| > Yeah I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology.
| It makes more sense if one breaks that "everyone" into subgroups. A good first-pass split would be "investors" versus "everyone else." From their perspective: Rich Investor Alice rushing over with bags of money because of FOMO >>> Random Person Bob suffering anxiety reading the news. One can hone it a bit more by thinking about how it helps them gain access to politicians, media that's always willing to spread their quotes, and even just getting CEO Carol's name out there.
|
| ▲ | haritha-j a day ago | parent | prev | next [-] |
| When your statements directly influence millions of dollars in revenue, it's always 4D chess. If Sam Altman believes half the stuff he's peddling, I'd be very shocked.
|
| ▲ | autoexec a day ago | parent | prev | next [-] |
| > It seems much more reasonable to think that they really believe the things they're saying
| It seems more reasonable to me to think that they know it's bullshit and it's just marketing. Not necessarily marketing to end users so much as to investors. It's very hard to take "AGI in 3 years" seriously.
| |
| ▲ | mghackerlady a day ago | parent [-] | | AGI in 3 years is literally not possible as it stands. Our current idea of "AI" as an LLM will fundamentally never be able to reach that goal without some absolutely massive changes. | | |
| ▲ | autoexec a day ago | parent [-] | | At least Dario Amodei kept the window short. When AGI fails to magically appear in 3 years he will be discredited and we can all agree that he's full of shit and treat everything he says accordingly. This is a huge improvement over the "just 10 years away" prophesying we usually get. |
|
| ▲ | goatlover a day ago | parent | prev [-] |
| I'd argue if they really believed AI was an existential threat, they would shut down research and encourage everyone else to halt R&D. But then again, the Cold War happened, even over the objections of physicists like Einstein & Oppenheimer. |