| ▲ | The rise of AI as a threat to the S&P 500 [pdf](autonomy.work) |
| 111 points by seangrvs 9 hours ago | 88 comments |
| Dataset to accompany the report: https://adu.autonomy.work/posts/2025_05_21_ai_risk/ |
|
| ▲ | mansoor_ 8 hours ago | parent | next [-] |
| If the contents of the PDF were any more vague, it could be replaced by whitespace. |
| |
| ▲ | jschveibinz 6 hours ago | parent | next [-] | | To add to your comment: I think you could probably post an article of pure white space with the title "AI is going to kill [fill in the blank]" and it would at least initiate a discussion these days. | | |
| ▲ | timacles 4 hours ago | parent [-] | | You could probably start a company with that as your mission statement and get a couple million in VC | | |
| |
| ▲ | pthreads an hour ago | parent | prev | next [-] | | Nice! This comment has the potential to become the next "If my grandmother had wheels, she would have been a bike." | |
| ▲ | OldfieldFund 4 hours ago | parent | prev | next [-] | | It might've been written by an LLM. I'm not joking. | |
| ▲ | antisthenes 7 hours ago | parent | prev [-] | | It's already about 80% whitespace. I think it doesn't even pass the bar of an undergraduate research paper. | | |
| ▲ | antonvs 6 hours ago | parent | next [-] | | It's from a "totally independent, not-for-profit" research/consultancy/think tank which is "funded predominantly on a project to project basis". So this is a work product, implying all the constraints that go with that. | |
| ▲ | isoprophlex 7 hours ago | parent | prev [-] | | [flagged] | | |
|
|
|
| ▲ | yellow_lead 7 hours ago | parent | prev | next [-] |
| Adding something to the "Risks" section of your company's financial report reads more like CYA ("Cover Your Ass") than "We're afraid of AI!" behavior. |
| |
| ▲ | steveklabnik 7 hours ago | parent | next [-] | | Yes, the risks part of a 10-K is usually pretty comprehensive, and includes all kinds of things that may or may not be an issue: that's why they're risks, and not problems or showstoppers or something. Many of the 10-Ks I've read enumerate tons of things that have a very low chance of happening. I'd be curious about the position of these segments in the 10-Ks, like, if they're suddenly all at the top, that's much more interesting than being tacked on to the end. | | |
| ▲ | dmurray 6 hours ago | parent [-] | | I'm not sure the position is significant. It's free to put any bad thing in the risks section of your 10-K; investors aren't going to shun your company over it. If you fail to put the risk in, the bad thing happens, and your company loses value, on the other hand, you may get sued for securities fraud - and courts have been oddly receptive to these suits. It's like any other clause that gets added to any other mostly-boilerplate legal document over time: one firm adds it, pretty soon everyone copies their work and it's a standard term. It's viral. How fast this spreads among company filings is a matter of epidemiology, not something that actually tells you the companies' outlook. | | |
| ▲ | steveklabnik 6 hours ago | parent [-] | | I'm not sure it is either, I don't read enough 10-Ks to make a strong claim here, it's just always felt like the ones I've read have been vaguely ordered by importance, and I wonder if that's actually true or not. |
|
| |
| ▲ | pyuser583 7 hours ago | parent | prev | next [-] | | Facebook once listed "adoption of mobile devices" in the "Risks" section of their reports. | | |
| ▲ | arcticfox 7 hours ago | parent [-] | | I mean, this was absolutely a risk to them. They could have easily lost during the transition. | | |
| ▲ | omneity 6 hours ago | parent [-] | | They did and then spent $5B purchasing Instagram and Whatsapp to recover. | | |
| ▲ | HWR_14 6 hours ago | parent | next [-] | | I thought they spent $17B on Whatsapp. | |
| ▲ | tsunamifury 6 hours ago | parent | prev [-] | | $19 billion buying WhatsApp; Instagram was moments before its IPO. Also, the FB HTML5 play at the time was a decent bridge to the app play, which did of course explode growth. So let’s be fair: they didn’t fail, they acquired to grow even more.
|
|
| |
| ▲ | echelon 7 hours ago | parent | prev [-] | | Here are some S&P 500 companies that are doomed: - https://en.wikipedia.org/wiki/The_Interpublic_Group_of_Compa... - https://en.wikipedia.org/wiki/Omnicom_Group - https://en.wikipedia.org/wiki/Warner_Bros._Discovery - https://en.wikipedia.org/wiki/Fox_Corporation - https://en.wikipedia.org/wiki/Paramount_Global I'm sure there are a lot more. | | |
| ▲ | andsoitis 6 hours ago | parent | next [-] | | Why do you think Warner Bros, Fox, and Paramount are doomed? | | |
| ▲ | bitmasher9 5 hours ago | parent [-] | | The moat around content creation is eroding. We’re seeing many small AI videos go viral, and some of them are reaching 5-10 minutes in length. What happens when anyone can write a script and have a feature-length movie or a 12-episode season? | | |
| ▲ | manmal 5 hours ago | parent [-] | | Vibing a movie will be possible, but people also won’t pay money for watching that. It will be great for script writers to be able to make an MVP of their movie first, and someone will still need to produce this into an actual movie. | | |
| ▲ | SoftTalker 3 hours ago | parent | next [-] | | Why would I pay money for a movie when I can just create my own from a few prompts, perhaps at the cost of watching a few ads? | | |
| ▲ | SketchySeaBeast 3 hours ago | parent | next [-] | | You know, I can't see myself doing that with a movie. That would be the ultimate expression of the mindless action flick, and those have stopped appealing to me. I think about all the movies I've really enjoyed that had some friction in viewing - either because it was a genre I didn't normally like or because I just didn't have any interest in it. All that would be gone. But ooooh boy, would porn change. Maybe that would be the end of pornography, the perfect wish fulfillment making it no longer enjoyable. | |
| ▲ | 2 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | FirmwareBurner 5 hours ago | parent | prev [-] | | >Vibing a movie will be possible, people also won’t pay money for watching that. Youtube, Facebook, Instagram, TikTok all beg to differ. AI slop farms are making bank over there, especially on short form content aimed at little kids. They get hundreds of millions of views, earning more than what Disney/WB makes on some of their high budget garbage movies in cinema. | |
| ▲ | andsoitis 5 hours ago | parent | next [-] | | > especially on short form content aimed at little kids. They get hundreds of millions of views, earning more than what Disney/WB makes on some high budget garbage movies in cinema. What is one specific example? | | | |
| ▲ | manmal 4 hours ago | parent | prev | next [-] | | Oh I meant pay money as in, pay per view or cinema. | | |
| ▲ | FirmwareBurner 4 hours ago | parent [-] | | If you're a business trying to make money at all costs, and the money comes from advertisers instead of directly from viewers, will your wallet complain? | | |
| ▲ | SketchySeaBeast 3 hours ago | parent [-] | | But it doesn't change the consumer's spending power. I haven't changed my movie consumption habit just because I watch youtube videos. It's free content. | | |
| ▲ | j-bos 2 hours ago | parent | next [-] | | Datapoint of one more: my fiction consumption has been trending down ever since I got YT premium. | | |
| ▲ | FirmwareBurner 2 hours ago | parent | prev [-] | | >But it doesn't change the consumer's spending power. I haven't changed my movie consumption habit just because I watch youtube videos. I think you're misunderstanding. Consumers don't need to spend money directly for AI slop farms on YouTube or TikTok to make money, since advertisers pay for the views their videos get, not you. It's not all about YOU, YOU are probably not the target audience, but they do make money even if YOU don't watch that stuff. | |
| ▲ | SketchySeaBeast 2 hours ago | parent [-] | | No, I understand that. So how do AI slop farms filling Youtube lead to Warner Bros, Fox, and Paramount being doomed? The argument seems to be that they are doomed because the AI farms are making money, but, as you're saying here, consumers don't pay that. So I don't understand the argument. | | |
| ▲ | FirmwareBurner 2 hours ago | parent [-] | | >So how do AI slop farms filling Youtube lead to Warner Bros, Fox, and Paramount being doomed? Where did I say they'll be doomed? I said they're monetizing short form content on YouTube/Tiktok using AI slop, and that can make more money than some crap cinema movies at the box office. | | |
| ▲ | SketchySeaBeast 2 hours ago | parent [-] | | Ah, I assumed it was a follow-up to the argument around Warner Bros, Fox, and Paramount being doomed, which is what kicked that discussion off. |
|
|
|
|
|
| |
| ▲ | HWR_14 2 hours ago | parent | prev [-] | | Disney/WB regularly make hundreds of millions of USD on their high budget movies. A view is worth less than $1. How are the AI slop farms making all that money? |
|
|
|
| |
| ▲ | 6 hours ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | fsckboy 16 minutes ago | parent | prev | next [-] |
| it would not be very long before any successful threat to the S&P 500 becomes part of the S&P 500. The king is dead, long live the king. |
|
| ▲ | seydor 8 hours ago | parent | prev | next [-] |
| I would like to make a correction. "1 in X companies" should be replaced with "1 in X marketing departments". The fact that the companies talk about AI does not mean they will do anything about it. It's trendy |
| |
| ▲ | Permit 8 hours ago | parent | next [-] | | From the linked PDF: > This report uses a range of cutting-edge LLM-assisted data techniques to extract key risk information from S&P 500 company filings. Following the recent boom in generative AI, we examine reported risks from these leading firms related to artificial intelligence. We clarify the extent to which firms are reporting new AI related risks, what kind of risks are being reported and what these indicate about the broader dynamics of AI in big business. This is unrelated to marketing departments. | |
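For readers curious what "LLM-assisted data techniques" might look like in practice, here is a minimal, hypothetical sketch: it slices the Item 1A "Risk Factors" text out of an already-downloaded 10-K and asks an LLM to list any AI-related risks. The file name, model choice, and prompt wording are illustrative assumptions, not the report's actual pipeline.

```python
# Minimal, hypothetical sketch of LLM-assisted risk extraction from a 10-K.
# Assumes the `openai` package, an API key in the environment, and a locally
# saved plain-text filing; this is not the report's actual pipeline.
import re

from openai import OpenAI

client = OpenAI()


def risk_factors_section(filing_text: str) -> str:
    """Crudely slice out Item 1A ("Risk Factors") from a plain-text 10-K."""
    match = re.search(
        r"item\s+1a\.?\s*risk\s+factors(.*?)item\s+1b\.?",
        filing_text,
        flags=re.IGNORECASE | re.DOTALL,
    )
    return match.group(1) if match else filing_text


def extract_ai_risks(filing_text: str) -> str:
    """Ask an LLM to list any AI-related risk disclosures in the filing."""
    section = risk_factors_section(filing_text)[:50_000]  # keep the prompt small
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You extract risk disclosures related to artificial intelligence "
                    "from 10-K risk-factor sections. Reply with a bullet list, "
                    "or 'NONE' if there are no AI-related risks."
                ),
            },
            {"role": "user", "content": section},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # "acme_10k_2024.txt" is a placeholder for a filing you have already downloaded.
    with open("acme_10k_2024.txt", encoding="utf-8") as f:
        print(extract_ai_risks(f.read()))
```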
| ▲ | blharr 6 hours ago | parent [-] | | Company filings are basically marketing for the shareholders. They want to mention AI to boost interest and such, and they end up mentioning AI risks to hedge/cover their backs |
| |
| ▲ | Zenst 8 hours ago | parent | prev | next [-] | | Marketing and hype have, as they would say, `synergised` in recent decades under new media. That sadly distracts from the analysis of companies, which is why valuations are based more on PR than on assets. I would go as far as calling it hype-inflation, as all the money that ends up in the market comes from consumers in the end. So saying AI is the biggest threat to the S&P glosses over the root causes. Analysts who get sucked into the marketing hype are self-fulfilling, in that some people act on their recommendations. After all, in the past, a bank that was merely questioned about its stability could easily see a domino of withdrawals snowball into it actually becoming unstable, even if it wasn't before. | |
| ▲ | epolanski 8 hours ago | parent | prev [-] | | I don't think those words come from ads; rather, from investor reports and calls. |
|
|
| ▲ | netrap 8 hours ago | parent | prev | next [-] |
| >> Risks to jobs rarely feature among reported risks, despite being a prominent public concern. If there is a risk to jobs, it wouldn't show up here since actually less jobs is "good" for business... |
| |
| ▲ | SoftTalker 8 hours ago | parent | next [-] | | > less jobs is "good" for business Up to a point. Then you no longer have customers. | | |
| ▲ | ben_w 7 hours ago | parent | next [-] | | Everyone is incentivised to do it, even when none of them want all of them to do it. Prisoner's dilemma, with the businesses as the 'prisoners'. One of the ways to change the Nash equilibrium for that game is for enough people to empower some outside agent that punishes defectors. (Metaphorically, for the original prisoners in the thought experiment, a gangland boss). | | |
| ▲ | benreesman 7 hours ago | parent [-] | | Equilibria for iterated vs. non-iterated play in the prisoner's dilemma are generally very different. To the extent that the current leadership of government and business are facing a collective action problem it is because different actors have a different number of iterations they are optimizing for. Put differently, when it's #CrimeSeason, you gotta get yours before the bill is due, and different crooks on different schedules. |
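A toy simulation of the one-shot vs. iterated distinction raised in these two comments, using the textbook prisoner's-dilemma payoffs: defecting against a cooperator wins a single round, while reciprocal cooperation outscores mutual defection over repeated play. The strategies and payoff numbers are the standard illustrative ones, not anything from the thread.

```python
# Toy prisoner's dilemma: one-shot vs. iterated play with textbook payoffs.
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}


def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]


def always_defect(history):
    """Defect every round, the dominant strategy in a single game."""
    return "D"


def play(strategy_a, strategy_b, rounds):
    """Play `rounds` rounds and return (score_a, score_b)."""
    history_a, history_b = [], []  # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b


if __name__ == "__main__":
    print("one-shot, defector vs cooperator:", play(always_defect, tit_for_tat, 1))        # (5, 0)
    print("100 rounds, tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat, 100))  # (300, 300)
    print("100 rounds, defect vs defect:", play(always_defect, always_defect, 100))        # (100, 100)
```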
| |
| ▲ | nikolayasdf123 8 hours ago | parent | prev | next [-] | | I think about this all the time | |
| ▲ | Nasrudith 7 hours ago | parent | prev [-] | | Pretty much everything good is good only 'up until a point'. The dose makes the poison after all. |
| |
| ▲ | nine_k 8 hours ago | parent | prev | next [-] | | Lower expenses is good for business. Not having to pay employees lowers expenses. But a business needs paying customers, preferably employed by someone else. Other businesses having to pay their employees is good for business in this regard. | | | |
| ▲ | barbazoo 8 hours ago | parent | prev | next [-] | | Yeah, public concerns are often diametrically opposed to corporate concerns. | |
| ▲ | psunavy03 7 hours ago | parent | prev [-] | | There's a reason Henry Ford paid his employees enough to buy one of his cars. | | |
| ▲ | tonyedgecombe 7 hours ago | parent [-] | | It is a myth that it was so they could afford one of his cars. The reality is that he couldn’t attract the right workers because factory work is sole destroying. | | |
| ▲ | scsh 7 hours ago | parent | next [-] | | And that's why he paid them in shoes. | |
| ▲ | kgwgk 7 hours ago | parent | prev [-] | | Actually sitting - or even standing - at an assembly line station is sole conserving. |
|
|
|
|
| ▲ | simonw 8 hours ago | parent | prev | next [-] |
| They used LLMs to look at the risk disclosures in recent 10-K filings, and found that 3/4 of S&P 500 companies mentioned additional AI risks compared to their own previous 10-K - AI-driven cyber attacks, deepfakes, energy demands, regulation (the EU AI Act), etc. |
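A rough illustration of the year-over-year comparison described above, using simple keyword matching rather than an LLM; the tickers and filing snippets are made-up placeholders. It flags companies whose latest risk section mentions AI-related terms their previous 10-K did not.

```python
# Hypothetical keyword-based stand-in for the report's year-over-year comparison:
# flag companies whose latest risk-factors text mentions AI terms that their
# previous 10-K did not. Tickers and snippets below are invented examples.
AI_TERMS = [
    "artificial intelligence",
    "machine learning",
    "generative ai",
    "large language model",
    "deepfake",
]


def ai_terms_in(text: str) -> set[str]:
    """Return which AI-related terms appear in a risk-factors section."""
    lowered = text.lower()
    return {term for term in AI_TERMS if term in lowered}


def mentions_new_ai_risk(previous_10k: str, latest_10k: str) -> bool:
    """True if the latest filing uses AI terms the previous one did not."""
    return bool(ai_terms_in(latest_10k) - ai_terms_in(previous_10k))


if __name__ == "__main__":
    filings = {  # ticker -> (previous risk section, latest risk section)
        "ACME": (
            "Competition may reduce our margins.",
            "Generative AI may enable deepfake fraud targeting our customers.",
        ),
        "EXCO": ("Energy prices are volatile.", "Energy prices remain volatile."),
    }
    flagged = [t for t, (prev, new) in filings.items() if mentions_new_ai_risk(prev, new)]
    print(f"{len(flagged)}/{len(filings)} companies added AI risk language: {flagged}")
```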
| |
| ▲ | quickthrowman 6 hours ago | parent | next [-] | | It’s free to list AI as a risk in a 10K filing to the SEC. It’s a lot more expensive to pay out a securities fraud settlement if you don’t list it and then suffer a loss that can be pinned on ‘AI’. | | |
| ▲ | elictronic 5 hours ago | parent [-] | | It’s like all the user license agreement stupidity. I look forward to the eventual lawsuit against companies hiding their actual risks in the chaff. Do not use your groin to stop the chainsaw. (To any lawyers, this is the chaff.) |
| |
| ▲ | curious_cat_163 6 hours ago | parent | prev [-] | | And, so what? | | |
| ▲ | simonw 2 hours ago | parent [-] | | I was helping out people who need to know whether the PDF is worth their time or not. |
|
|
|
| ▲ | amelius 7 hours ago | parent | prev | next [-] |
| If AI can trade stocks and derivatives better than any human, maybe we'll see wealth accumulation in a few large AI firms. And the stock market becomes effectively useless to the rest of us. |
| |
| ▲ | mu53 7 hours ago | parent | next [-] | | What AI? ML with gradient descent? Neural nets with deep learning? LLMs? Or the abstract concept of AGI that may or may not be possible, but definitely isn't here yet? Hedge funds and investment banks are already using these tools to the max, and the markets are plenty profitable for everyone. | |
| ▲ | simantel 5 hours ago | parent | prev | next [-] | | RenTec did this successfully, but their strategies didn't scale up beyond ~$30B AUM before they started moving their markets too much (with the Medallion Fund). | |
| ▲ | __MatrixMan__ 7 hours ago | parent | prev | next [-] | | It would be a reductio ad absurdum for the stock market which is long overdue. | |
| ▲ | im3w1l 5 hours ago | parent | prev | next [-] | | I think what will happen is that ordinary people will invest in a mutual fund managed by AI and/or the funds people already invest in will start adopting AI tooling. I think this scenario is plausible because the path to it is so smooth that it will be the default outcome unless something strange happens to prevent it. | |
| ▲ | SketchySeaBeast 3 hours ago | parent [-] | | I'm wondering what a market looks like where everyone is running an AI that makes the optimal purchases. The market needs bag holders. I also don't know that it would change my behaviour. If my goal is long term investment success what's the downside of my continuing to invest in broad market funds? I don't need an AI making split second decisions if my investment horizon is still 30 years. | | |
| ▲ | im3w1l 15 minutes ago | parent [-] | | Wild guess, but I think trading activity goes down. AI focuses instead on which equity issuances to participate in. |
|
| |
| ▲ | kjkjadksj 6 hours ago | parent | prev | next [-] | | They already use models for trading better than any human and have probably done so since the 1950s. And what do you know, wealth accumulation has been significant over the last 70 years. | | |
| ▲ | amelius 6 hours ago | parent [-] | | Yeah but wealth accumulation can only go so far, because at some point people will start to question the validity of the entire market. | | |
| ▲ | snoman 3 hours ago | parent [-] | | With index funds effectively propping up the S&P etc. people are already questioning it. Consider that millions of people are parking billions of dollars in index funds that just track the ~500 biggest companies with the expectation they’ll all just get bigger. |
|
| |
| ▲ | k-i-r-t-h-i 6 hours ago | parent | prev | next [-] | | trading =/= investing | | | |
| ▲ | alephnerd 6 hours ago | parent | prev | next [-] | | You do realize that all the math used in AI/ML has been heavily used in Finance for decades right? Everything is Applied Math if you squint hard enough. All this AI/ML doomerism and boosterism is ridiculous. If you do not understand how gradient descent works or why as of today GPUs are better suited for model training compared to CPUs you should not have a say in this discussion. Most conversations around AI/ML appear to basically be pop-philosophy discussions that aren't even that grounded in philosophy fundamentals. If you have trash fundamentals, you will have a trash understanding of the world. | | |
| ▲ | mindwok 5 hours ago | parent [-] | | You can make this point without needing to insult everyone’s intelligence and gate-keep discussions about AI. |
| |
| ▲ | lofaszvanitt 7 hours ago | parent | prev [-] | | Yeah, but AI will be banned before that happens. | | |
| ▲ | dzink 7 hours ago | parent | next [-] | | Not if the lobbying budget of those making the AI money is larger than those that lose money to AI in the market. | |
| ▲ | razemio 7 hours ago | parent | prev [-] | | How would you ban AI? It is unstoppable now. | | |
|
|
|
| ▲ | MinimalAction 5 hours ago | parent | prev | next [-] |
| I don't understand. If AI replaces jobs by being a "cheaper" alternative, wouldn't it deliver the same productivity metrics while saving costs? This is insanely simplified, I know, but the premise holds, I feel. |
| |
| ▲ | _diyar 4 hours ago | parent [-] | | The economy grows from increased labor and capital investment. If AI is a “cheaper” alternative, i.e. more efficient with respect to capital, it needs less labor for the same output. But now think about how most of the population can afford their lifestyle, i.e. buying stuff from S&P 500 companies: by selling their time as labor. | |
|
|
| ▲ | MarkusQ 7 hours ago | parent | prev | next [-] |
| The only risk this report clearly shows is the existential risk AI poses to consulting firms that specialize in writing vacuous reports. Anyone who makes their living writing pompous puffballs of poop like this should be really, really worried. |
|
| ▲ | smoothbenny 2 hours ago | parent | prev [-] |
| is ai anything more than a shakedown at this point? aside from a few very specific corporate backroom use cases, the pitch for the product currently sold as “ai” is loaded with vague, unclear benefits, marginal utility, and limited real world adoption. then there’s another product, “ai risks”, many of which are easily identifiable to an average person (layoffs, deepfakes, calculated and miscalculated state violence without accountability), but which cause fear and panic in the c-suite, resulting in massive overspending to mitigate these risks, most likely by investing in a “good” ai tool to counter the “bad” ai tool. how long can the grift continue without any actual positive product to purchase? |