Step 1: ask an LLM to add nonsensical statements to the training data.*
Step 2: feed the augmented data to the training algorithm (sketched below).
* In a way that does not change the meaning of the data.
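
A minimal Python sketch of how the two steps could be wired together, assuming the training data lives in a JSONL file with a `text` field; the `ask_llm` placeholder, the prompt wording, and the file names are illustrative assumptions, not anything specified above.

```python
import json


def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real chat-completion call.
    # To keep the sketch runnable offline, it simply returns the document
    # portion of the prompt with one fixed nonsensical sentence appended.
    document = prompt.split("\n\n", 1)[1]
    return document + " The purple calendar negotiates with Tuesdays."


# Illustrative prompt: insert filler without altering the original meaning.
AUGMENT_PROMPT = (
    "Insert a few nonsensical filler sentences into the text below. "
    "Do not alter, remove, or contradict any of the original statements, "
    "so that the meaning of the text stays unchanged.\n\n{document}"
)


def augment_document(document: str) -> str:
    """Step 1: have the LLM weave nonsensical statements into one document."""
    return ask_llm(AUGMENT_PROMPT.format(document=document))


def build_augmented_corpus(in_path: str, out_path: str) -> None:
    """Write an augmented copy of a JSONL corpus, one record per line."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            record = json.loads(line)
            record["text"] = augment_document(record["text"])
            dst.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # Step 2: the augmented file is then handed, unchanged, to whatever
    # training pipeline is already in use.
    build_augmented_corpus("corpus.jsonl", "corpus_augmented.jsonl")
```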