throw310822 7 hours ago

> The training data

If the prompt is unique, it is not in the training data. True for basically every prompt. So how is this probability calculated?

cbovis 7 hours ago | parent | next [-]

The prompt is unique but the tokens aren't.

Type "owejdpowejdojweodmwepiodnoiwendoinw welidn owindoiwendo nwoeidnweoind oiwnedoin" into ChatGPT and the response is "The text you sent appears to be random or corrupted and doesn’t form a clear question," because the prompt doesn't correlate to anything in the training data.
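To illustrate the "tokens aren't unique" point: even a gibberish string decomposes into known subword tokens. The toy vocabulary and greedy longest-match rule below are purely illustrative (not any real tokenizer such as GPT's BPE), but they show how every input, however novel, maps onto pieces the model has seen before.

```python
# Hypothetical toy vocabulary; real tokenizers learn ~100k subwords via BPE.
VOCAB = {"owe", "jd", "po", "we", "do", "nw", "oi", "in", "nd"}

def tokenize(text: str) -> list[str]:
    """Greedy longest-match split into vocabulary tokens."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(4, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize("owejdpowejd"))  # → ['owe', 'jd', 'po', 'we', 'jd']
```

The gibberish prompt is an unseen *sequence*, but each token in it is familiar; the model's response is conditioned on those familiar pieces.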

hmmmmmmmmmmmmmm 6 hours ago | parent [-]

...? what is the response supposed to be here?

qsera 7 hours ago | parent | prev | next [-]

Just a scaled-up and cleverly tweaked version of linear regression analysis...

red75prime 2 hours ago | parent [-]

That is, the probability distribution that the network should learn is defined by which probability distribution the network has learned. Brilliant!

hmmmmmmmmmmmmmm 6 hours ago | parent | prev [-]

Hamiltonian paths and previous work by Donald Knuth are more than likely in the training data.

red75prime 2 hours ago | parent [-]

The specific sequence of tokens that comprises Knuth's problem, together with an answer to it, is not in the training data. A naive probability distribution based on counting the token sequences present in the training data would assign it probability 0. The trained network represents an extremely non-naive approach to estimating the ground-truth distribution (the distribution corresponding to what a human brain might have produced).
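The naive counting estimator described above can be sketched concretely. The bigram model and tiny corpus below are illustrative assumptions, not how an LLM works: raw counts assign probability 0 to any sequence containing an unseen token pair, while even the crudest generalization (add-one smoothing here; a trained network is a vastly more sophisticated one) assigns it nonzero probability.

```python
from collections import Counter

# Toy corpus standing in for "the training data".
corpus = "the cat sat on the mat".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus)

def naive_prob(seq):
    """Pure count-based estimate: unseen bigrams yield probability 0."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0
    return p

def smoothed_prob(seq):
    """Add-one (Laplace) smoothing: one simple way to go beyond raw counts."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= (bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab))
    return p

print(naive_prob(["the", "cat", "sat"]))     # → 0.5 (all bigrams seen)
print(naive_prob(["the", "mat", "sat"]))     # → 0.0 ("mat sat" never seen)
print(smoothed_prob(["the", "mat", "sat"]))  # nonzero despite unseen bigram
```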