deadbabe 3 days ago

This is a little anthropomorphic. The faster option is to tell it to give you the full content of an ideal context for what you’re doing, then adjust or expand as necessary. Less back and forth.
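One possible phrasing of that prompt (my wording, purely illustrative):

    Before doing anything: write out, in full, the ideal context document
    you would want for this task - goals, constraints, interfaces, edge
    cases. I will correct or expand it, and then you can start.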

7thpower 2 days ago | parent | next

It’s not, though: one of the key gaps right now is that people do not provide enough direction on the tradeoffs they want to make. Generally, LLMs will not ask about them; they will just go off and build. But if you have them ask, they will often come back with important questions about things you did not specify.

MrDunham 2 days ago | parent | next

This is the correct answer. I like to go one step further than the root comment:

Nearly all of my "agents" are required to ask at least three clarifying questions before they're allowed to do anything (code, write a PRD, write an email newsletter, etc.).

Force it to ask one at a time and it's even better, though not as step-function an improvement as asking at all is vs. letting it go off your initial ask.

I think the reason is exactly what you state, @7thpower: it takes a lot of thinking to really provide enough context and direction to an LLM, especially (in my opinion) because they're so cheap and carry no social-capital cost, unlike asking a colleague or employee, where having them work for a week just to throw away all their work is a very non-zero cost.
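As a concrete illustration (not my actual setup), here's a minimal sketch of that rule as a standing system prompt, assuming the OpenAI Python client; the model name, wording, and task are placeholders:

    from openai import OpenAI

    client = OpenAI()

    # Standing instruction: no deliverables until at least three
    # clarifying questions have been asked, one at a time.
    SYSTEM = (
        "Before producing anything (code, a PRD, a newsletter, etc.), "
        "ask at least three clarifying questions, one at a time. Wait "
        "for my answer before asking the next one or starting work."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Build a CLI that syncs my notes."},
        ],
    )
    print(resp.choices[0].message.content)  # expect question 1, not code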

iaw 2 days ago | parent

My routine is:

Prompt 1: <define task> Do not write any code yet. Ask any questions you need for clarification now.

Prompt 2: <answer questions> Do not write any code yet. What additional questions do you have?

Iterate until the questions become unimportant.
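A minimal sketch of that loop in Python, assuming the OpenAI chat completions API; the model name and the "go" sentinel are my own placeholders:

    from openai import OpenAI

    client = OpenAI()

    # Prompt 1: define the task, ask for clarifying questions.
    messages = [{
        "role": "user",
        "content": "<define task> Do not write any code yet. "
                   "Ask any questions you need for clarification now.",
    }]

    while True:
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        print(reply)
        messages.append({"role": "assistant", "content": reply})
        answers = input("Answers ('go' when questions become unimportant): ")
        if answers.strip().lower() == "go":
            break
        # Prompt 2..n: answer, then ask for remaining questions.
        messages.append({
            "role": "user",
            "content": answers + " Do not write any code yet. "
                       "What additional questions do you have?",
        })

    messages.append({
        "role": "user",
        "content": "No further questions. Write the code now.",
    })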

deadbabe 2 days ago | parent | prev

They don’t know what to ask. They only assemble questions according to their training data.

7thpower 2 days ago | parent | next

It seems like you are trying to steer toward a different point or topic.

In the course of my work, I have found they ask valuable clarifying questions. I don’t care how they do it.

fuzzzerd 2 days ago | parent | prev

While true, the questions are all points where the LLM would otherwise have "assumed" an answer; by asking, you get to point it in the right direction instead.

manmal 3 days ago | parent | prev

Can you give me the full content of the ideal context of what you mean here?

rzzzt 3 days ago | parent

Certainly!