XJ6w9dTdM 4 hours ago

Exactly, yes, that's what I was going to comment. You sometimes need to debug at every layer: all abstractions end up leaking in some way. They're often worth it, but they don't spare us the extra cognitive load or the need to learn the layers underneath.

I'm not necessarily against the approach shown here (reducing token counts for more efficient LLM generation), but if it catches on, humans will read and write it, write debuggers and tooling for it, and so on. It will certainly not remain a perfectly hidden layer underneath.

But why not, for programming models, just select tokens that map concisely onto existing programming languages? Wouldn't that be just as effective?
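
To make the question concrete: you can already measure how an existing BPE vocabulary compresses ordinary source code. A rough sketch in Python, assuming the tiktoken package is installed (the snippet and its output are illustrative, not from the article):

    import tiktoken

    # cl100k_base is the BPE vocabulary used by several OpenAI models.
    enc = tiktoken.get_encoding("cl100k_base")

    snippet = "for i in range(10):\n    print(i)\n"
    tokens = enc.encode(snippet)

    print(len(snippet), "chars ->", len(tokens), "tokens")
    # Common idioms such as keywords or a 4-space indent often
    # compress to a single token, so a vocabulary tuned to an
    # existing language already captures much of the savings.

If most of the density gain comes from merging frequent constructs of an existing language into single tokens, you keep human readability and all the existing tooling for free.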