amelius 4 days ago:
Step 1: ask the LLM to strip the nonsensical parts from the problem statement. Step 2: feed that to the LLM. | ||||||||
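Something like this, as a minimal sketch assuming the OpenAI Python client (the model name and prompt wording are illustrative):

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Single-turn call; returns the model's reply text.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    problem = (
        "Alice picks 5 apples and gives 2 to Bob. "
        "Cats sleep for most of their lives. "
        "How many apples does Alice have left?"
    )

    # Step 1: ask the LLM to strip the irrelevant parts.
    cleaned = ask(
        "Remove every sentence that is irrelevant to solving this problem "
        "and return only the cleaned statement:\n" + problem
    )

    # Step 2: feed the cleaned statement back to the LLM.
    print(ask(cleaned))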
lenerdenator 4 days ago:
Difficulty: on the internet, cats are always relevant. | ||||||||
mcswell 4 days ago:
How does the LLM know which parts are "nonsensical" (I think you meant irrelevant)? Knowing that requires world knowledge. And in any case, I'm pretty sure the AI is built to treat every part of a query as relevant.
| ||||||||
aflag 4 days ago:
You may end up feeding "Cats sleep for most of their lives." to it in step 2.
nitwit005 4 days ago:
Step 3: become suspicious that if step 1 were a good idea, OpenAI would have implemented it themselves.
| ||||||||
amelius 4 days ago:
Step 1: ask an LLM to add nonsensical statements to the training data.* Step 2: feed that to the training algorithm.

* in a way that the meaning of the data is not changed
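A rough sketch of step 1, under the same assumptions as above (the prompt and function name are hypothetical):

    from openai import OpenAI

    client = OpenAI()

    def add_distractors(example: str) -> str:
        # Ask an LLM to inject true-but-irrelevant sentences while
        # preserving the example's meaning and answer.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[{
                "role": "user",
                "content": (
                    "Insert one or two factually true but irrelevant "
                    "sentences into this problem without changing its "
                    "answer:\n" + example
                ),
            }],
        )
        return resp.choices[0].message.content

    data = ["Alice picks 5 apples and gives 2 to Bob. How many are left?"]

    # Step 2: train on the originals plus the augmented copies, so the
    # model learns to ignore distractors.
    augmented = data + [add_distractors(x) for x in data]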