bee_rider 4 hours ago

Why can’t LLMs understand the big picture? I mean, a lot of companies have most of their information available in a digital form at this point, so it could be consumed by the LLM.

I think if anything, we have a better chance in the little picture: you can go to lunch with your engineering coworkers or talk to somebody on the factory floor and get insights that will never touch the computers.

Giant systems of constraints, optimizing many-dimensional user metrics: eventually we will hit the wall where it is easier to add RAM to machines than humans.

troupo 3 hours ago | parent | next [-]

> Why can’t LLMs understand the big picture?

Because LLMs don't understand things to begin with.

Because LLMs only have access to source code and whatever .md files you've given them.

Because they have biases in their training data that overfit them on certain solutions.

Because LLMs have a tiny context window.

Because LLMs largely suck at UI/UX/design, especially when they don't have reference images.

Because...

bee_rider 3 hours ago | parent | next [-]

> Because LLMs don't understand things to begin with.

Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLMs.

The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.

gtowey 3 hours ago | parent | next [-]

I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy, and it is pretty much why I can say LLMs can't be strategic.

To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical and which aren't. And it includes absolutely everything -- human psychology to understand how people might feel about certain features or usage models, the future outlook for which popular framework to choose and whether it will be as viable next year as it is today. The geography and geopolitics of which cloud provider to use. The knowledge of human sentiment around ethical or moral concerns. The financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.

LLMs are fundamentally incapable of this.

troupo 2 hours ago | parent | prev [-]

> I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.

LLMs can do neither reliably because to do that you need understanding which LLMs don't have. You need to learn from the codebase and the project, which LLMs can't do.

On top of that, to have the big picture LLMs have to be inside your mind. To know and correlate the various Google Docs and Figma files, the Slack discussions, the various notes scattered on your system etc.

They can't do that either because, well, they don't understand or learn (and no, clawdbot will not help you with that).

> The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.

These are not limitations of tooling, and no, LLM developers are not even close to overcoming them, especially not "constantly". The only "overcoming" has been the gimmicky "1 million token context", which doesn't really work.

gtowey 3 hours ago | parent | prev [-]

Yeah, it's strange to me that the default assumption is that current LLMs are already human-level AGI.

butILoveLife 3 hours ago | parent | prev [-]

I basically just posted the same response. I generally agree with everything you said.

Only thing to add: maybe we have the most senior of seniors verifying the decisions of AI.

bee_rider 3 hours ago | parent [-]

Most senior could make sense (although I’d like to see a collection of independent guilds coordinated by an LLM “CEO” just to see how it could work—might not be good enough yet, but it’d be an interesting experiment).

Ultimately, I suspect “AI” (although, maybe much more advanced than current LLMs) will be able to do just about any information based task. But in the end only humans can actually be responsible/accountable.