Local LLMs are rapidly improving to the point where they can handle many automation and local coding use cases on reasonable hardware (say $5k or less). What's the edge for frontier model providers here?