Exactly where I figured it was going. LLMs generate too much "magic," unmaintainable code, so when it breaks you just hope the next model is out and start all over.