layer8 (6 hours ago):
Breaking things down and describing them in sufficient detail is one way to ensure that the LLM can match the task to its implicit knowledge. How much detail you have to spell out still depends on what you're trying to do. It's almost a tautology that there's always some level of description the LLM will be able to take up.
embedding-shape (4 hours ago), in reply:
Well, it's not just about breaking down the task at hand, but also about how you instruct it to do the work. Just saying "Do X" will give you very different results from "Do X, ensure Y, then verify with Z", regardless of the task you're asking for. That's also how you can get the LLM to do things outside its training data reasonably well: include not just the _what_ in the prompt, but also the _how_.
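The "what plus how" pattern above can be sketched in a few lines. This is a minimal illustration of the idea, not any particular LLM API; the function names and prompt wording are made up for the example.

```python
def bare_prompt(task: str) -> str:
    """Just the 'what': e.g. 'Do X'."""
    return task


def structured_prompt(task: str, constraints: list[str], verify: str) -> str:
    """The 'what' plus the 'how': constraints to ensure, then a verification step."""
    lines = [task]
    lines += [f"Ensure: {c}" for c in constraints]
    lines.append(f"Then verify by: {verify}")
    return "\n".join(lines)


# The same task phrased both ways:
print(bare_prompt("Summarize the log file."))
print(structured_prompt(
    "Summarize the log file.",
    ["every error is listed with its timestamp"],
    "re-reading the summary against the raw log",
))
```

The second prompt spells out the constraints and the verification step explicitly, which is the difference between "Do X" and "Do X, ensure Y, then verify with Z" described above.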