Not sure if it's a joke, but I don't think an LLM is a bijective function.
If you had all the token probabilities, it would be bijective. There was a post about this here a while back.
Kind of, but LLMs still use randomness when sampling tokens, so the same input can lead to different outputs.
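To illustrate the point: a minimal sketch with a made-up next-token distribution (toy vocabulary and probabilities, not from any real model). Weighted sampling from the same distribution can return different tokens across draws, while greedy decoding (argmax) is deterministic:

```python
import random

# Hypothetical next-token distribution over a tiny vocabulary
# (illustrative numbers only, not from any real model).
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def sample_token(probs, rng):
    # Sampling: draw a token at random, weighted by probability.
    # Repeated calls with the same distribution can differ.
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

def greedy_token(probs):
    # Greedy decoding: always pick the highest-probability token,
    # so the same distribution always yields the same output.
    return max(probs, key=probs.get)

rng = random.Random()  # unseeded, so sampled outputs vary run to run
samples = {sample_token(probs, rng) for _ in range(100)}
print(samples)              # usually more than one distinct token
print(greedy_token(probs))  # always "cat"
```

With temperature 0 (greedy) the mapping from input to output token is fixed; any nonzero temperature reintroduces the randomness described above.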