They really don’t though.
Longer context windows are great, but they don't fundamentally change the failure modes of LLMs.