bugglebeetle 3 days ago |
Yeah, so it’d be interesting to see if, provided the correct context/your understanding of its error pattern, it can accomplish this. One thing you learn quickly about working with LLMs is that they have these kinds of baked-in biases, some of which are very fixed and tied to their very limited ability to engage in novel reasoning (cc François Chollet), while others are far more loosely held/correctable. If it sticks with the errant pattern even when provided the proper context, it probably isn’t something an off-the-shelf model can handle.