bob1029 | 7 hours ago
Title for the back of the class: "Prompts sometimes return null." I would be very cautious about attributing any of this to black-box LLM weight matrices. Products like GPT and Opus are more than just a single model — these products rake your prompt over the coals a few times before responding now. Telling the model to return "nothing" is very likely to perform to expectation because of these extra layers.
frde_me | 33 minutes ago | parent | next
Out of curiosity, are there any sources for there being a significant number of other steps before the prompt is fed into the weights? Security guards / ... are the obvious ones, but do you mean they have branching early on that shortcuts certain prompts?
tiku | 6 hours ago | parent | prev
Thanks — I was already distracted after the first sentence, hoping there would be a good explanation.