troupo 3 hours ago
> Why can’t LLMs understand the big picture?

Because LLMs don't understand things to begin with. Because LLMs only have access to source code and whatever .md files you've given them. Because they have biases in their training data that overfit them on certain solutions. Because LLMs have a tiny context window. Because LLMs largely suck at UI/UX/design, especially when they don't have reference images. Because...
bee_rider 3 hours ago | parent
> Because LLMs don't understand things to begin with.

Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLMs. The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that LLM developers are constantly overcoming.
gtowey 3 hours ago | parent
Yeah, it's strange to me that the default assumption is that current LLMs are already human-level AGI.