dghlsakjg 3 hours ago
> much more impervious to groupthink

Citation very much needed. LLMs are arguably concentrated groupthink (albeit of a different type than wiki editors', though I'm sure they are trained on that too), and they are incredibly prone to sycophancy. Establishing fact is hard enough with humans in the loop.

Frankly, my counterargument is that we should be incredibly careful about how we use AI in sources of truth. We don't want articles written faster; we want them written better. I'm not sure AI is up to that task.
ajross 2 hours ago
"Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect. "Groupthink", as the term is used by epistemologically isolated in-groups, actually means the opposite. The problem with the idea is that it looks symmetric, so if you yourself are stuck in groupthink, you fool yourself into think it's everyone else doing it instead. And, again, the solution for that is reasonable references grounded in informed consensus. (Whether that should be a curated encyclopedia or a LLM is a different argument.) | ||||||||||||||||||||||||||||||||
| ||||||||||||||||||||||||||||||||