frizlab 5 days ago
> If the agent has a clean, relevant context explaining what global functions are available it tends to use them properly.

STOP! The agent does not exist. There are no agents, only mathematical functions that take an input and produce an output. Stop anthropomorphizing LLMs; they are not human and they don't do anything. It might seem like this doesn't matter; my take is that it's essential. Humans are not machines, and vice versa.
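(To make the distinction being argued here concrete: what gets called an "agent" with "global functions available" is usually just a loop wrapped around a stateless text-to-text call. A minimal sketch, where `call_llm` and the tool-call convention are hypothetical placeholders, not any particular vendor's API:)

    # Sketch of the "agent = loop around a pure function" view.
    # call_llm is a hypothetical placeholder for a completion API;
    # the model itself is stateless: transcript in, text out.

    def call_llm(prompt: str) -> str:
        """Placeholder: return the model's reply for a given prompt."""
        raise NotImplementedError  # stands in for a real completion call

    def run_agent(task: str, tools: dict, max_steps: int = 10) -> str:
        transcript = f"Task: {task}\n"
        for _ in range(max_steps):
            reply = call_llm(transcript)        # pure function of the transcript
            if reply.startswith("CALL "):       # illustrative convention, e.g. "CALL search: foo"
                name, _, arg = reply[5:].partition(": ")
                result = tools[name](arg)       # the surrounding loop acts, not the model
                transcript += f"{reply}\nResult: {result}\n"
            else:
                return reply                    # model produced a final answer
        return "step limit reached"

Whether that loop deserves the word "agent" is exactly what the rest of this thread disputes.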
johnisgood 5 days ago
We have used the term "agent" in AI for some time.

> The main unifying theme is the idea of an intelligent agent. We define AI as the study of agents that receive percepts from the environment and perform actions.

This is from Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig.
Sebalf 5 days ago
Frankly, this take is so reductionist that it's useless. You can substitute "biochemistry" for "mathematical functions" and apply the exact same argument to human beings.

What I'd like is for people to stop pretending we have any idea what the hidden layers of an LLM are actually doing. We do not. Yes, words like "statistics" and "mathematical functions" accurately describe the underlying architecture of LLMs, but the actual mechanism of knowledge processing is not understood at all. It is exactly analogous to neurons: we understand quite a lot about how they function at the cellular level (though far from everything, given how complicated and opaque nature tends to be), yet we have no idea what exactly is happening when a human being performs a cognitive task.

It is a fallacy to conflate our surface-level understanding of how a transformer functions with the unknown mechanisms that LLMs actually employ.