It is irrelevant to the point being made: an LLM does exactly the same thing in both cases, generating statistically plausible text based on the examples it was exposed to during training.
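To make the point concrete, here is a toy sketch (not a real LLM, just an illustrative bigram model): whatever you "ask" it, it can only ever sample a continuation that is statistically plausible given its training examples. The corpus, function names, and sampling scheme here are all made up for illustration.

```python
import random
from collections import defaultdict

# Tiny "training" corpus; a real LLM sees billions of tokens instead.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": record which token follows which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Sample a continuation token by token, like next-token prediction."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = following.get(out[-1])
        if not choices:  # no continuation ever observed for this token
            break
        # Sampling uniformly from observed successors approximates
        # "pick a statistically plausible next token".
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Every word it emits comes from the training distribution; the model has no notion of whether the prompt was a question, a command, or nonsense, which is the point being made above.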