ljm 8 hours ago

I don't see how "create an abstraction before attempting to solve the problem" will ever work as a decent prompt when you are not even steering it towards specifics.

If you gave this exact prompt to a senior engineer, I would expect them to throw it back and ask wtf you actually want.

LLMs are not mind readers.

balls187 3 hours ago | parent | next [-]

Interesting.

I think it's because AI models have learned that we prefer confident-sounding answers, and that we don't want to be pestered with questions before getting one.

That is, follow my prompt, and don't bother me about it.

Because if I am coming to an AI Agent to do something, it's because I'd rather be doing something else.

pitched 6 hours ago | parent | prev [-]

If I already know the problem space very well, we can tailor a skill that solves the problem exactly the way I want it solved.