nprateem 6 hours ago
Lol, and that was enough for you? You really think they can test every single prompt before release to see if it regurgitates stuff? Did this exec work in sales too :-D
TeMPOraL 3 hours ago
They have a clear incentive to do exactly that: regurgitation is a problem because it indicates the model failed to learn from the data and merely memorized it.
simonw 4 hours ago
I think they can run benchmarks to see how often prompts elicit exact copies of their training data, and use those benchmarks to help tune their training procedures.
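A rough sketch of what one of those benchmarks might look like: prompt the model with prefixes taken from training documents and measure how often the continuation reproduces the original text verbatim. This is only an assumption about the approach, with a hypothetical generate() standing in for whatever inference API they actually use, and a simple word-level exact-match check; real evaluations would surely normalize text and score overlap more carefully.

```python
def generate(prompt: str) -> str:
    """Hypothetical model call; swap in a real inference API."""
    raise NotImplementedError


def longest_verbatim_run(completion: str, source: str) -> int:
    """Longest run of consecutive words from `completion` that appears verbatim in `source`."""
    source_norm = " ".join(source.split())  # normalize whitespace for substring matching
    words = completion.split()
    best = 0
    for i in range(len(words)):
        # Only try to beat the current best; once an extension from position i
        # is no longer a substring of source, longer extensions can't be either.
        for j in range(i + best + 1, len(words) + 1):
            if " ".join(words[i:j]) in source_norm:
                best = j - i
            else:
                break
    return best


def regurgitation_rate(documents: list[str], prefix_words: int = 50,
                       threshold: int = 25) -> float:
    """Fraction of sampled training documents whose continuation
    reproduces at least `threshold` consecutive words verbatim."""
    hits = 0
    for doc in documents:
        words = doc.split()
        prompt = " ".join(words[:prefix_words])
        completion = generate(prompt)
        if longest_verbatim_run(completion, doc) >= threshold:
            hits += 1
    return hits / len(documents) if documents else 0.0
```

The interesting knob is the overlap threshold: short verbatim runs are just common phrases, while long runs reproduced exactly are a much stronger memorization signal, and a rate like this is something you can track across training runs to tune procedures against.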