TomaszZielinski, a day ago:
I've found that LLMs help me find that boundary. First I ask a bunch of questions in a chaotic manner: I explore the topic, check references, etc. At some point the dots start to connect naturally. Then I start paraphrasing what I've learned, and the LLM either confirms it or clarifies where my understanding falls short. At some point I feel naturally satisfied with the level of understanding I have, likely because there's no "one more page of Google search results" trap.

One thing to watch out for is the "GOAT trap". For instance, the default ChatGPT tends to reply with something like "You are the GOAT and your understanding and insight are unmatched. Let's just clarify a few minor points", followed by a demolition of my line of thinking, but worded in such a way that you're happy about the upcoming trip :). So you need a system prompt like "be very blunt".
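For what it's worth, here's a minimal sketch of the "be very blunt" idea wired up through the OpenAI Python SDK rather than the ChatGPT UI (where the equivalent knob is the custom instructions field). The model name, prompt wording, and helper function are illustrative placeholders, not the commenter's actual setup:

```python
# Minimal sketch: a blunt-reviewer system prompt via the OpenAI Python SDK.
# The model name and prompt wording below are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Be very blunt. Do not flatter me or praise my understanding. "
    "If my summary is wrong or incomplete, say so directly and explain why."
)

def check_my_understanding(summary: str) -> str:
    """Ask the model to confirm or pick apart a paraphrased summary."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Here is my understanding:\n{summary}\nWhere am I wrong?"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(check_my_understanding("Paraphrase of what I think I learned goes here."))
```

The point of the system message is just to suppress the default flattery so the "where my understanding falls short" step comes back undiluted.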