deelayman 6 hours ago
I wonder if that quote is still applicable to systems that are hardwired to learn from decision outcomes and new information.
advisedwang 4 hours ago | parent
LLMs do not learn as they go in the way people do. Human brains are plastic and adapt to new information immediately, but for LLMs:

1. Past decisions and outcomes go into the context window, but that doesn't actually update any model weights.

2. Your interaction possibly, eventually, ends up in the training data for a future LLM. But that is an incredibly diluted form of learning.
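The distinction the comment draws can be made concrete with a toy sketch (not a real LLM, and not any particular library's API): a "model" whose output depends on both its weights and whatever is in its context window. Conditioning on more context changes the output, but only an explicit training step mutates the weights. All names here (`generate`, `train_step`, `bias`) are invented for illustration.

```python
# Toy sketch contrasting in-context adaptation with actual weight updates.
# The "model" is just a dict of weights; generate() reads the context window
# without touching the weights, while train_step() actually mutates them.

def generate(weights, context):
    # Output depends on the weights AND on whatever is in the context window.
    return weights["bias"] + len(context)

def train_step(weights, example, lr=0.5):
    # Only this path changes the model itself.
    weights["bias"] += lr * example

weights = {"bias": 1.0}

out_short = generate(weights, context=["prompt"])
out_long = generate(weights, context=["prompt", "past decision", "its outcome"])

assert out_short != out_long          # behavior shifts with added context...
assert weights["bias"] == 1.0         # ...but the weights are untouched

train_step(weights, example=2.0)      # a training step is what updates weights
assert weights["bias"] == 2.0
```

The point of the sketch is that "remembering" via the context window is stateless with respect to the model: clear the context and the adaptation is gone, whereas a weight update persists.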
svieira 6 hours ago | parent
What (or who) would have been responsible for the Holodomor if it had been caused by an automated system instead of deliberate human action?