idopmstuff 2 days ago

> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?

The implied answer to this question really just misunderstands the tradeoffs of the world. We had plenty of money and effort going into our technology before AI, and we got... B2B SaaS, mostly.

I don't disagree that the world would be better off if all of the money going into so many things (SaaS, crypto, social media, AI, etc.) were better allocated to things that made the world better, but for that to happen, we would need a very different system of resource allocation than capitalism. The issue is that capitalism has been absolutely core to the many, many advances in technology that have been hugely beneficial to society, and if you want to allocate resources differently than capitalism does, you lose all of those benefits and probably end up worse off as a result (see the many failures of communism).

> So I ask: Why is adding AI the priority here? What could have been if the investment went into making these apps better?

> I’m not naive. What motivates people to include AI everywhere is the promise of profit. What motivates most AI startups or initiatives is just that. A promise.

I would honestly call this more arrogant than naive. Doesn't sound like OP has worked at any of the companies that make these apps, but he feels comfortable coming in here and presuming to know why they haven't spent their resources working on the things he thinks are most important.

He's saying that they're not fixing issues with core functionality but are instead implementing AI because they want to make profit, but generally the sorts of very severe issues with core functionality that he's describing are pretty damaging to a company's revenue prospects. Maybe those issues are much less severe than he's describing, or maybe there's something else going on with prioritization. Maybe the AI implementation wasn't even competing with fixing those issues for resources - it could have just been an intern's project, and that's why it sucks.

I have no idea why they've prioritized the things they have, and neither does the author. But deciding that they must not be fixing the right things just because they shipped an AI feature he doesn't like is not a particularly valid leap of logic.

> Tech executives are robbing every investor blind.

They are not. Again, guy with a blog here is deciding that he knows more than the investors about the things they're investing in. Come on. The investors want AI! Whether that's right or wrong, it's ridiculous to suggest they're being robbed blind.

> Unfortunately, people making decisions (if there are any) only chase ghosts and short term profits. They don’t think that they are crippling their companies and dooming their long term profitability.

If there are any? Again, come on. And chasing short-term profits? That is obviously and demonstrably incorrect - in the short term, Meta, Anthropic, OpenAI, and everybody else are losing money on AI. In the long term, I'm going to trust that Mark Zuckerberg and Sam Altman, whether you like them or hate them, have a whole lot better idea of whether or not they're going to be profitable than the author does.

This reads like somebody who's mad that the things he wants funded aren't being funded, is blaming it on the big technology of the day, and is then trying to back into a justification for that blame.