Remix.run Logo
amelius 3 hours ago

> <|turn>user Hi there!<turn|><|turn>model Hi there, how can I help you today <turn|>

Good lord, they managed to invent a format that is even less readable than XML.

aktuel 3 hours ago | parent [-]

It is not supposed to be readable by humans. You rarely have to look at it. It is designed to not get confused with the actual content, where the content can be any random text from the internet. For that, you have to use a format that is not used anywhere else.

stavros 2 hours ago | parent [-]

Are these markers actual text? Or does the model "see" one token per marker?

badsectoracula 2 hours ago | parent | next [-]

AFAIK[0] they are (usually) so-called "special" tokens - e.g <|turn> is token id 105 for the vocabulary Gemma4 uses. When you are tokenizing text you can either tokenize the "<|turn>" as a single token (105) or as a series of other tokens (236820, 236909, 887 and 236813 for the "<", "|", "turn" and ">" tokens) with the idea being that the model will treat "105" as the actual separator but can also use "<|turn>" as part of the content.

Though using text-based templates make this a bit tricky regardless. AFAIK llama.cpp tries to avoid this confusion by having their Jinja2 implementation use a custom string type that contains metadata about where characters "come from" so that it can distinguish between special tokens (which would be part of the Jinja2 template) and content (which would be either generated text or text given in by the user) - i.e. even if a string is "<|turn>" the metadata would be used to tell if it is meant to be tokenized as a special token or as a series of non-special tokens.

[0] i might be wrong, this is based on my understanding by messing around with the llama.cpp code, but i never implemented an LLM inference or training engine

bashbjorn 2 hours ago | parent | prev [-]

The model sees one token per marker - but the overlap with ingested actual text is still relevant, because the tokenizer will ingest regular text, where it will turn "<|turn>" into the same token.

For this reason, it can be tricky to work on the runtime for a model with the same model. This really feels like an accidental problem, but I'm not sure if it's really solvable without abandoning the text representations altogether (and the jinja abstraction along with it).

lifis 2 hours ago | parent [-]

Surely one can just escape the input, no? Seems astonishing if someone isn't doing that

bashbjorn 6 minutes ago | parent | next [-]

You're right, there must be a good and simple way to do it.

Obviously the prefix-with-backslash convention won't do it. The escaping system could be something like inserting a character on the second position in the text repr, and reversing that on output too if it matches an escaped known special token.

Changing the vocab on the fly requires tokenizing things separately, breaking the chat template.

Anecdotally, even claude code has an anneurism sometimes when listing special tokens. Idk exactly what claude's <eos> token is, but I'm fairly sure I've seen it stop generation when it tried to generate it before.

I should also say that I've (clearly) not thought about this deeply. There should be a simpler way to do it.

maxbond an hour ago | parent | prev [-]

The escape algorithm here is very simple, you remove special tokens from the runtime tokenizer's vocabulary so that it's forced to encode them as multiple non-special tokens. (That doesn't actually mean the LLM won't treat them as special tokens though, so this isn't sufficient on it's own.)

bashbjorn 4 minutes ago | parent [-]

Cool technique, but I'm not sure I'd call it simple.

Doing this means that you can't just tokenize the string output of the chat template as one big string. You might need to tokenize things separately, and combine them after.