There is a surprising amount of code needed in each inference framework (LM Studio, llama.cpp, etc.) to support each new model release: formatting the input correctly with a chat template, parsing the output using the model-specific tokens the model provider chose to standardize on for their model, and more.
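To illustrate why chat templating is model-specific, here is a minimal sketch (not any framework's actual code; `apply_chatml` is a hypothetical helper) of rendering the same conversation with ChatML-style special tokens. A model trained on different tokens would need a different template:

```python
# Rough illustration of model-specific chat templating. ChatML-style
# tokens (<|im_start|>, <|im_end|>) are shown here; other models were
# trained on different markers and need different templates.

def apply_chatml(messages: list[dict]) -> str:
    """Render a message list using ChatML-style special tokens."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave an open assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = apply_chatml([{"role": "user", "content": "Hi"}])
```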
This particular instance was a fix to the output parsing [1] in LM Studio, described like this:
"Adds value type parsers that use <|\"|> as string delimiters instead of JSON's double quotes, and disables json-to-schema conversion for these types."
[1]: https://github.com/ggml-org/llama.cpp/pull/21326/commits/a50...
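The quoted change can be sketched roughly like this, assuming nothing about the actual llama.cpp implementation (`parse_delimited_string` is a made-up name): a string parser that treats the literal token <|"|> as the delimiter instead of JSON's double quote:

```python
# Minimal sketch, not the real patch: parse a string value that uses a
# custom delimiter token like <|"|> in place of JSON's double quotes.

def parse_delimited_string(text: str, pos: int, delim: str = '<|"|>') -> tuple[str, int]:
    """Parse a string at `pos` wrapped in `delim` tokens.

    Returns the string value and the position just past the closing delimiter.
    """
    if not text.startswith(delim, pos):
        raise ValueError(f"expected opening delimiter at position {pos}")
    start = pos + len(delim)
    end = text.find(delim, start)
    if end == -1:
        raise ValueError("unterminated string: missing closing delimiter")
    return text[start:end], end + len(delim)

# A model emitting <|"|>-delimited strings instead of quoted ones:
value, next_pos = parse_delimited_string('<|"|>hello world<|"|>', 0)
```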