n8cpdx 8 hours ago

> "If you're big enough that we'd be screwed without you then take whatever risks you like with impunity."

I know financially it will be bad because number not go up and number need go up.

But do we actually depend on generative/agentic AI at all in meaningful ways? I’m pretty sure all LLMs could be Thanos-snapped away and there would be near zero material impact. If the studies are at all reliable, all the programmers would be more efficient without them. Maybe we’d be better off because there wouldn’t be so much AI slop.

It is very far from clear that there is any real value being extracted from this technology.

The government should let it burn.

Edit: I forgot about “country girls make do”. Maybe gen AI is a critical pillar of the economy after all.

thewebguyd 6 hours ago | parent | next [-]

> I’m pretty sure all LLMs could be Thanos snapped away and there would be near zero material impact.

I mostly agree, but I don't think it's the model developers that would get bailed out. OpenAI & Anthropic can fail, and should be allowed to fail if it comes to that.

Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.

I also think they should be allowed to fail, but there's no way the US government ever allows them to.

palmotea 4 hours ago | parent [-]

> Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.

> I also think they should be let to fail, but there's no way the US GOV ever allows them to.

There are different ways to fail, though: liquidation, and a reorganization that wipes out the shareholders.

OpenAI could be liquidated and all its technology thrown into the trash, and I wouldn't shed a tear, but Microsoft makes (some) stuff (cough, Windows) that too many things depend on for it to go away. The shareholders can eat it (though I think broad-based index funds should get priority over all other shareholders in a bankruptcy).

pksebben 7 hours ago | parent | prev [-]

I expect the downvotes to come for this, as they always seem to these days, but I know from personal experience that there is value in these agents.

Not so much for the work I do for my company, but having these agents has been a fairly huge boon in some specific ways personally:

- search replacement (beats Google almost all of the time)

- having code-capable agents means my pet projects are getting a lot further along than they used to. I check in on them in spare moments and hand the agents large tasks that will take a while (I've found that having them work in Rust goes best, because it has the most guardrails; see the sketch after this list)

- it's been infinitely useful to be able to ask questions when I don't know enough to know what terms to search for. I have a number of meatspace projects that I didn't know enough about to ask the right questions, and having LLMs has unblocked those 100% of the time.
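To make the Rust point concrete, here's a minimal sketch of the kind of guardrail I mean. It's my own toy example, not code from any of those projects: the compiler forces every error path and every match arm to be handled, so a sloppy agent patch tends to fail at compile time instead of at runtime.

    use std::fs;
    use std::num::ParseIntError;

    #[derive(Debug)]
    enum ConfigError {
        Io(std::io::Error),
        Parse(ParseIntError),
    }

    fn read_port(path: &str) -> Result<u16, ConfigError> {
        // Errors have to be converted and propagated explicitly;
        // silently swallowing them isn't something the compiler lets through.
        let text = fs::read_to_string(path).map_err(ConfigError::Io)?;
        let port = text.trim().parse::<u16>().map_err(ConfigError::Parse)?;
        Ok(port)
    }

    fn main() {
        // Both arms have to be handled; deleting either one is a compile error.
        match read_port("port.txt") {
            Ok(port) => println!("listening on {port}"),
            Err(err) => eprintln!("bad config: {err:?}"),
        }
    }

An agent that forgets a case or mishandles an error gets stopped by rustc before I ever run the thing, which is exactly the kind of babysitting I don't have time to do myself.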

Economic value? I won't make an assessment. Value to me (and I'm sure to others)? I'd definitely miss them if they disappeared tomorrow. I should note that, given the state of things (large AI companies with the same shareholder problems as MAANG), I do worry that those use cases will disappear as advertising and other monetizing influences make their way in.

Slop is indeed a huge problem. Perhaps you're right that it's a net negative overall, but I don't think it's accurate to say there's no value to be had.

s1mplicissimus 5 hours ago | parent [-]

I'm glad you had positive experiences using this specific technology.

Personally, I had the exact opposite experience: wrong, deceitful responses, hallucinations, arbitrary pointless changes to code... It's like that one junior I asked to have removed from the team after they peed in the codebase one too many times.

On the slop I have two sentiments: lots of slop = higher demand for my skills to clean it up. But also, lots of slop = worse software across probably most things, impacting not just me but also friends, family, and the rest of humanity. At least it's not only a downside :/