razodactyl | 6 hours ago
If anyone's had 4.7 update any documents so far: notice how concisely it gets straight to the point. It rewrote some of my existing documentation (using Windsurf as the harness). I'm not sure I liked the decrease in verbosity (it removed columns and combined/compressed concepts), but it makes sense if the model is outputting less to save cost. To me this seems more like it's trained to be concise by default, which I guess can be countered with preference instructions if required.

What's interesting to me is that they're using a new tokeniser. Does that mean they trained a new model from scratch, or took an existing model and further trained it with a swapped-out tokeniser? The looped-model research and speculation is also quite interesting: if done right, there are significant speedups and resource savings.
andai | 5 hours ago | parent
Interesting. In conversational use, it's noticeably more verbose.