amelius 4 days ago

Step 1: ask the LLM to strip the nonsensical parts from the problem statement.

Step 2: feed that to the LLM.
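
A minimal sketch of that two-pass idea, assuming the OpenAI Python client; the model name and prompt wording are placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    MODEL = "gpt-4o"   # placeholder model name

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def solve(problem: str) -> str:
        # Step 1: ask the model to strip irrelevant statements.
        cleaned = ask(
            "Remove every sentence that is irrelevant to answering the "
            "final question. Return only the cleaned problem:\n\n" + problem
        )
        # Step 2: feed the cleaned problem back to the model.
        return ask(cleaned)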

lenerdenator 4 days ago | parent | next [-]

Difficulty: on the internet, cats are always relevant.

mcswell 4 days ago | parent | prev | next [-]

How does the LLM know what the "nonsensical" (I think you meant irrelevant) parts are? Knowing that requires world knowledge. And in any case, I'm pretty sure the AI is built to treat every part of a query as relevant.

im3w1l 4 days ago | parent [-]

Well, how it does it is a tricky question. But if you try it, you will see that it can indeed do it.

aflag 4 days ago | parent | prev | next [-]

You may end up feeding "Cats sleep for most of their lives." to the LLM in step 2.

nitwit005 4 days ago | parent | prev | next [-]

Step 3: Become suspicious that if step 1 were a good idea, OpenAI would have implemented it on their own.

im3w1l 4 days ago | parent [-]

Well, ChatGPT doesn't know whether there will be a follow-up question relying on the "irrelevant" information, so in general it can't remove it. Or at least it would take more machinery to dynamically decide what is and isn't relevant over the lifetime of the conversation.
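
One hypothetical way to handle that: re-run the relevance filter over the full history on every turn, so what counts as irrelevant is re-decided per question instead of being discarded up front. A sketch, again assuming the OpenAI Python client:

    from openai import OpenAI

    client = OpenAI()
    history: list[str] = []

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def answer(question: str) -> str:
        history.append(question)
        # Re-decide relevance on every turn against the whole history,
        # instead of stripping "irrelevant" details once at the start.
        context = ask(
            "Keep only the sentences needed to answer the final "
            "question; return them verbatim:\n\n" + "\n".join(history)
        )
        reply = ask(context)
        history.append(reply)
        return reply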

amelius 4 days ago | parent | prev [-]

Step 1: ask an LLM to add nonsensical statements to the training data. *

Step 2: feed that to the training algorithm.

* in a way that does not change the meaning of the data
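
A rough sketch of that augmentation, with a fixed distractor pool standing in for the LLM (the distractors are assumed not to change the answers):

    import random

    # Stand-in for the LLM: a fixed pool of irrelevant statements.
    DISTRACTORS = [
        "Cats sleep for most of their lives.",
        "Interesting fact: cats spend most of their lives asleep.",
    ]

    def augment(example: str, rng: random.Random) -> str:
        # Splice one distractor sentence into the example at a random
        # position; by construction it leaves the meaning unchanged.
        sentences = example.split(". ")
        i = rng.randrange(len(sentences) + 1)
        sentences.insert(i, rng.choice(DISTRACTORS).rstrip("."))
        return ". ".join(sentences)

    rng = random.Random(0)
    print(augment("Alice has 3 apples. Bob gives her 2 more. "
                  "How many apples does Alice have?", rng))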