Wowfunhappy 4 days ago
> On August 25, we deployed a misconfiguration to the Claude API TPU servers that caused an error during token generation. An issue caused by a runtime performance optimization occasionally assigned a high probability to tokens that should rarely be produced given the context, for example producing Thai or Chinese characters in response to English prompts, or producing obvious syntax errors in code. A small subset of users that asked a question in English might have seen "สวัสดี" in the middle of the response, for example.

Can anyone explain to a layperson how this sort of thing is even possible for an LLM?

For normal code, of course stupid bugs happen all the time. You accidentally introduce an off-by-one error in a conditional, for example, or add an extra `goto fail`. But LLMs aren't written by humans! Models are trained by automated programs over a period of many months across unfathomably massive data centers. How would a human introduce a bug like the one described in TFA?
Voloskaya 4 days ago
LLMs are still executed by code written by humans. In this case, the model ultimately gives you a probability distribution over the ~200k tokens in the vocabulary. It's then up to you to decide how to sample the next token: you could always pick the most likely one, or, to make the output more creative, sample randomly from the top-k most likely tokens. To make it efficient, this top-k sampling is written in XLA and compiled to run directly as a kernel. There was a bug in that kernel, which presumably led to tokens outside the top-k window being selected from time to time.
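For the curious, here's a minimal sketch of what such a top-k sampler looks like in JAX (which JIT-compiles to XLA). The function name, k value, and vocabulary size are illustrative stand-ins, not Anthropic's actual kernel:

    from functools import partial
    import jax

    @partial(jax.jit, static_argnames="k")
    def sample_top_k(logits, key, k=40):
        # Keep the k highest-scoring tokens and sample among them only.
        top_logits, top_indices = jax.lax.top_k(logits, k)
        choice = jax.random.categorical(key, top_logits)  # draw within the top-k set
        return top_indices[choice]

    key = jax.random.PRNGKey(0)
    logits_key, sample_key = jax.random.split(key)
    logits = jax.random.normal(logits_key, (200_000,))  # stand-in for model output over a ~200k vocab
    token_id = sample_top_k(logits, sample_key, k=40)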
Centigonal 4 days ago
LLMs produce a probability distribution over what the next token might be. The actual word that gets printed next is picked from that distribution using a sampling approach[1]. If your sampling approach is "select the next word randomly from among the top 4 possibilities" and you flip a > sign, you could end up with the behavior described in the OP.

[1] Here is an example of two common approaches: https://www.reddit.com/r/AIDungeon/comments/1eppgyq/can_some...
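To make the "flip a > sign" point concrete, here's a toy JAX sketch (not the actual bug or Anthropic's code): the intended mask keeps only the top-k tokens, while the flipped comparison keeps everything except them, so sampling then draws only from tokens that should be rare.

    import jax
    import jax.numpy as jnp

    def top_k_mask(logits, k=4):
        # Intended: keep the k most likely tokens, push everything else to -inf.
        kth_best = jax.lax.top_k(logits, k)[0][-1]
        return jnp.where(logits >= kth_best, logits, -jnp.inf)

    def top_k_mask_flipped(logits, k=4):
        # One flipped comparison: now only tokens *outside* the top k survive.
        kth_best = jax.lax.top_k(logits, k)[0][-1]
        return jnp.where(logits < kth_best, logits, -jnp.inf)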
ashdksnndck 4 days ago
There are many layers of human-written code in between you and the weights. | ||||||||
blackqueeriroh 4 days ago
Simple answer: there are two separate processes here, training and inference. As you discuss, training happens over a long period of time in a (mostly) hands-off fashion once it starts. But inference? That’s a separate process which uses the trained model to generate responses, and it’s a runtime process - send a prompt, inference runs, response comes back. That’s a whole separate software stack, and one that is constantly being updated to improve performance. It’s in the inference process where these issues were produced. | ||||||||
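For a rough picture of that separation (model_fn and params here are placeholders, not any real serving stack's API): inference is just a loop around the frozen weights. The forward pass produces logits, and separate serving-side code decides how to turn them into the next token; that serving code is the part that keeps being updated.

    import jax

    def generate(model_fn, params, prompt_tokens, key, max_new_tokens=32):
        # Toy autoregressive inference loop; real stacks add batching, KV caches,
        # compiled kernels, and so on.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = model_fn(params, tokens)                     # trained weights: fixed
            key, subkey = jax.random.split(key)
            next_token = jax.random.categorical(subkey, logits)   # serving-side sampling code
            tokens.append(int(next_token))
        return tokens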
jldugger 4 days ago
The AI kernels use floating point, so it's possible for some unintuitive math to end up negative even though it wouldn't be negative over the reals. I wouldn't be surprised if overflow checking is disabled for perf reasons, so the negative value simply becomes really big, like asking for the -1st item in an array and getting the last.
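As a toy illustration of that last point (Python/NumPy here, not the actual TPU kernel): an index that goes negative can silently read from the wrong end of an array instead of raising an error.

    import numpy as np

    probs = np.array([0.70, 0.20, 0.05, 0.05])  # toy next-token probabilities

    idx = -1  # pretend a miscalculation pushed a computed index below zero
    print(probs[idx])  # prints 0.05: silently reads the *last* element, no error raised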