manwe150 2 hours ago

But think about it this way: something simple like Slack charges $9/month/person, and companies already pay that on many employees' behalf. How hard would it be to imagine all those same companies (and lots more) paying $30/month/employee for something something AI? Generating an extra $400 per year in value, per employee, isn't that much extra.
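
A minimal back-of-envelope sketch of that math (hypothetical seat prices, taken straight from the numbers above):

    # Hypothetical per-seat math (Python), using the prices quoted above
    slack_per_year = 9 * 12   # $108/year/employee for Slack
    ai_per_year = 30 * 12     # $360/year/employee for an AI tool
    print(ai_per_year)        # 360 -- so ~$400/year of value per employee clears the bar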

johnvanommen an hour ago | parent | next [-]

> Generating an extra $400 per year in value, per employee, isn't that much extra.

I agree, and would add that it’s contributing to inflation in hard assets.

Basically:

* it’s a safe bet that labor will have lower value in 2031 than it has today

* if you have a billion dollars to spend and you agree with that bet, you'll be inclined to put your wealth into hard assets, because AI depends on them

In a really abstract way, the world is now responsible for feeding a new class of workers: robots.

And robots consume electricity, water, and space, and generate heat.

Which is why those sectors are feeling the effects of supply and demand.

whattheheckheck 25 minutes ago | parent [-]

The world IS responsible for handling the people. That's the whole fucking reason we made society: to take care of children. Nothing is inevitable. It serves the interests of the few.

doodlebugging 2 hours ago | parent | prev | next [-]

Most people in the economy do not use Slack. That tool may be most beneficial to exactly the people who stand to lose their jobs to AI displacement. Maybe after everyone is pink-slipped in favor of an LLM or AI chatbot, the total cost to the employer drops enough that they're willing to spend part of the money they saved by eliminating warm bodies on AI tools, and to pay a higher per-employee price.

With a smaller employee pool, though, I think it's unlikely that it all evens out without the AI providers holding users hostage for the sake of quarterly profits.

zozbot234 2 hours ago | parent | prev [-]

That AI will have to be significantly preferable to the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.

johnvanommen an hour ago | parent [-]

> the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.

It’s not a challenge at all.

To win, all you have to do is starve your competitors of RAM.

RAM is the lifeblood of AI; without it, AI doesn't work.

ndriscoll 24 minutes ago | parent [-]

Assuming high-bandwidth flash works out, RAM requirements should be drastically reduced, since you'd keep the weights in much-higher-capacity flash.

> Sample HBF modules are expected in the second half of 2026, with the first AI inference hardware integrating the tech anticipated in early 2027.

https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...
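
A rough sketch of the sizes involved (hypothetical model size and precision; the numbers are illustrative, not from the article):

    # Back-of-envelope inference memory footprint (Python, assumed fp16 weights)
    def weights_gb(params_billions, bytes_per_param=2):
        # 1B params at 2 bytes each is roughly 2 GB
        return params_billions * bytes_per_param

    print(weights_gb(70))  # ~140 GB of weights for a 70B fp16 model
    # Today those weights sit in HBM/DRAM. If HBF pans out, the read-mostly
    # weights could live in flash, leaving RAM for the KV cache and
    # activations, which are far smaller per request.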