fallinditch 10 hours ago

Yes it sounds like a bold statement. I called Gemini out on that and it admitted that it over-egged its confidence on that assertion.

But presumably the LLMs do have some knowledge about how they are used?

On further probing Gemini did give a plausible justification - in summary:

"Creation is easy. Selection is hard. In an era of infinite content, the 'most successful' writer isn't the one who can produce the most; it's the one with the best taste. Using an LLM as a distillation machine allows a writer to iterate through their own ideas at 10x speed, discarding the 'average' and keeping only the 'potent.'"

didgeoridoo 10 hours ago | parent | next [-]

LLMs have no knowledge (really “knowledge-like weights and biases”) outside their training set and system prompt. That plausible justification is just that — a bunch of words that make sense when strung together. Whether you’d like to give that any epistemic weight is up to you.

nemomarx 10 hours ago | parent | prev [-]

Why would Gemini (the text model part) have that info? I'm sure Google has some kind of analytics and so on, but that wouldn't necessarily feed into the training data, the system prompt, or distillation directly.

shigawire 6 hours ago | parent [-]

The only plausible avenue I see is Gemini ingesting Google press releases about how cool their AI is.

Leave it to the reader to decide how informative that would be.