glenstein 8 days ago
> History is therefore a constant act of reinterpretation and triangulation, which is something that LLMs, as linguistic averaging machines, simply cannot do.

I know you weren't necessarily endorsing the passage you quoted, but I want to jump off and react to just this part for a moment. I find it completely baffling that people say things of the form "computers can do [simple operation], but [adjusting for contextual variance] is something they simply cannot do."

There was a version of this in the debate over "robot umps" in baseball that exposed the limitation of this argument in an obvious way. People would insist that automated calling of balls and strikes loses the human element, because human umpires could situationally squeeze or expand the strike zone in big moments. E.g. if it's the World Series, the bases are loaded, the count is 0-2, and the next pitch is close, call it a ball, because it extends the game and you linger in the drama a bit more. This was supposedly an example of something a computer could not do, and frequently when this point was made it induced lots of solemn head nodding in affirmation of this deep and cherished baseball wisdom.

But... why TF not? You actually could define high-leverage and close-game situations, define exactly how to adjust the zone, and machines could call those too, and do so more accurately than humans. So they could better respect the contextual sensitivity that critics insist is so important (a rough sketch of what such a rule might look like is at the end of this comment).

Even now, in fits and starts, LLMs are engaging in a kind of multi-layered triangulating just to understand language at all. They can pick up on multilayered things like subtext, balance of emphasis, unstated implications, or connotations, all filtered through rules of grammar. That doesn't mean they are perfect, but calibrating for the context or emphasis that is most important for historical understanding seems absolutely within machine capabilities, and I don't know what other than punch-drunk romanticism for "the human element" moves people to think that's an enlightened intellectual position.
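To make the point concrete, here's roughly what such a rule could look like, sketched in Python. Every name and threshold below is invented for illustration; this isn't how any real automated ball/strike system is specified, it's just to show that "situational" zone calls can be written down as explicit rules a machine applies consistently.

    from dataclasses import dataclass

    @dataclass
    class GameState:
        inning: int
        balls: int
        strikes: int
        runners_on: int        # occupied bases, 0-3
        run_differential: int  # absolute score gap

    def leverage(state: GameState) -> float:
        """Crude 0-to-1 score for how 'big' the moment is. Purely illustrative."""
        late = min(state.inning, 9) / 9.0
        close = 1.0 if state.run_differential <= 1 else 0.3
        traffic = state.runners_on / 3.0
        return late * close * (0.5 + 0.5 * traffic)

    def zone_half_width(state: GameState, base: float = 0.83) -> float:
        """Half-width of the called strike zone in feet (roughly plate half-width
        plus a ball radius). Squeeze it slightly in high-leverage two-strike
        counts, so a borderline pitch is called a ball and the at-bat continues."""
        if state.strikes == 2 and leverage(state) > 0.7:
            return base * 0.95
        return base

    # Bottom of the 9th, tie game, bases loaded, 0-2 count: the zone tightens a bit.
    print(zone_half_width(GameState(inning=9, balls=0, strikes=2,
                                    runners_on=3, run_differential=0)))

Whether you'd ever want such a rule is a separate question; the point is only that "reads the moment" is not beyond specification.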
delusional 8 days ago
> "[...] dynamically changing the zone is something they simply cannot do." But... why TF not? Because the computer is fundamentally knowable. Somebody defined what a "close game" ahead of time. Somebody defined what a "reasonable stretch" is ahead of time. The minute it's solidified in an algorithm, the second there's an objective rule for it, it's no longer dynamic. The beauty of the "human element" is that the person has to make that decision in a stressful situation. They will not have to contextualize it within all of their other decisions, they don't have to formulate an algebra. They just have to make a decision they believe people can live with. And then they will have to live with the consequences. It creates conflict. You can't have a conflict with the machine. It's just there, following rules. It would be like having a conflict with the beurocrats at the DMV, there's no point. They didn't make a decision, they just execute on the rules as written. | |||||||||||||||||||||||||||||