somenameforme 5 days ago:
In general I agree with you regarding the weaknesses of the paper, but not the skepticism toward its outcome. Our bodies naturally adjust to what we do. Do things, and your body reinforces them, enabling you to do even more advanced versions of those things. Don't do things, and your skill or muscle tends to atrophy over time. Asking an LLM to write an essay (as in this case) is always going to be orders of magnitude easier than actually writing one. So it seems fairly self-evident that using LLMs to write essays would gradually degrade your own ability to do so. It's possible that this, for some reason, turns out not to be true, but that would be quite surprising.
tomrod 5 days ago (parent):
Ever read the books in the Bobiverse? Even though it's fiction, they offer a pretty functional cognitive model for how humans will probably interface with tooling like AI: lower-level actions are pushed into autonomous regions until a certain deviancy threshold is crossed. Much like breathing: you don't typically think about it until it becomes a problem (choking, being underwater, etc.), at which point it very much occupies the high-level parts of the brain.

What the paper reports as cognitive decline might very well be cognitive decline. But it could also be a rerouting toward higher abstractions, which we interpret as decline simply because the effect is new.

I share your concern, for the record, that people become too attached to LLMs for generating creative work. That said, they can absolutely be used to unblock yourself and push more work through. The quality-versus-quantity balance definitely needs consideration (and I suspect that balance, rather than cognitive decline, is what the paper is actually capturing). The real question to me is whether an individual's production possibility frontier is increased (more value per person: a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Genuine cognitive decline would point to the last of these.