jbellis 7 hours ago
Yes, very similar results here (http://brokk.ai). We got lines-with-anchors working fine as a replacement strategy; the problem was that when you don't make the model echo what it's replacing, it's literally dumber at writing the replacement. We lost more to test failures and retries than we gained in faster output. It makes sense when you think about how powerful the "think before answering" principle is for LLMs, but it's still frustrating.
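To make the contrast concrete, here's a minimal sketch of the two edit formats (the function names are illustrative, not our actual API):

    # Hypothetical sketch of the two edit formats discussed above.
    # Names are illustrative only.

    def apply_anchored_edit(lines, start, end, replacement):
        # Lines-with-anchors: the model only names a line range and the new
        # text; it never has to echo the code it is replacing.
        return lines[:start] + replacement + lines[end + 1:]

    def apply_search_replace(text, search, replacement):
        # Search/replace: the model must reproduce the old block verbatim,
        # which forces it to re-read what it edits before writing the new code.
        assert search in text, "model's echo of the old code must match the file"
        return text.replace(search, replacement, 1)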