Yizahi 2 days ago

I think we should split the definition somehow: on one hand, what LLMs can do today (or in the next few years) and how big a thing that particular capability can be (a derivative of the capability); on the other, what some future AI could do and how big a thing that future capability could be.

I regularly see people who distinguish between current and future capabilities, but then still lump the societal impact (how big a thing it could be) into a single projection.

The key bubble question is: if that future AI is sufficiently far away (for example, if there is a gap, a new "AI winter", lasting a few decades), does the current capability justify the capital expenditures, and if not, by how much does it fall short?

tim333 2 days ago | parent

Yeah, and how long can OpenAI etc. hang on without making profits?