rao-v 4 hours ago

I’d really like to see this optimized for the 50–120B-parameter open source models that are locally viable (gpt-oss-120b, qwen3-80b-3a, etc.).

For them I think it would be better to provide a tag per function and trust the LLM to rewrite the whole function. As the article notes, full reproduction is generally more reliable than line edits for short code.
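
Roughly what I mean, as a Python sketch (the tag here is just the function name, and both helpers are hypothetical, not anything from the article):

    # Hypothetical per-function edit format: every top-level function gets a
    # stable tag, and the model replies with full replacement bodies keyed by
    # tag instead of line-level diffs.
    import ast

    def tag_functions(source: str) -> dict[str, str]:
        """Map each top-level function name (the tag) to its full source text."""
        tree = ast.parse(source)
        return {
            node.name: ast.get_source_segment(source, node)
            for node in tree.body
            if isinstance(node, ast.FunctionDef)
        }

    def apply_rewrites(source: str, rewrites: dict[str, str]) -> str:
        """Swap in the model's rewritten functions wholesale, matched by tag."""
        for name, old_text in tag_functions(source).items():
            if name in rewrites:
                source = source.replace(old_text, rewrites[name].rstrip("\n"))
        return source

The model only has to emit a tag plus the new function body, which keeps the edit format short and leans on exactly the "full reproduction beats edits for short code" observation.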

I suspect the token and attention overhead of a per-line hash limits that approach for smaller models.