nicbou 2 hours ago

I get that issue constantly. I somehow can't get any LLM to ask me clarifying questions before spitting out a wall of text with incorrect assumptions. I find it particularly frustrating.

ash_091 14 minutes ago | parent | next [-]

"If you're unsure, ask. Don't guess." in prompts makes a huge difference, imo.

Pxtl 2 hours ago | parent | prev | next [-]

In general, spitting out a scrollbar of text in response to a simple question you've misunderstood is not, in any real sense, a "chat".

mk89 an hour ago | parent | prev [-]

The way I see it, the long game is to have agents in your life that memorize and understand more and more of your routine and your facts. Imagine having an agent that knows about cars, and more specifically your car: when the checkups are due, when you washed it last, etc. Another one knows more about your hobbies, another knows more about your XYZ, and so on.

The more specific they are, the more accurate they typically are.

viking123 34 minutes ago | parent [-]

To really understand a user deeply, I feel we would need models with changing weights, and everyone would have their own copy so it could truly adjust to them. Right now we just have a chunk of context that the model may or may not use properly if it gets too long. But then again, how do we prevent it from learning the wrong things if the weights keep adjusting?