LLMs have made this mostly trivial, with the added benefit that you can iteratively dump out more with each run.