ckrapu 2 days ago

From an AI risk perspective, one of the most wonderful things about LLMs is that their chain of thought can be read entirely from their outputs by humans with no specific training.

This is a risky step backwards, and for apparently little gain.

rotexo 7 hours ago

Can the E-cache and KV-cache be supplied to the model to produce the natural-language output that would have been fed into the next model in the chain, were it not for DroidSpeak? If so, it doesn't seem to materially change the explainability of the system.
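
To make the question concrete, here is a minimal sketch (using Hugging Face transformers, greedy decoding, and a placeholder model/prompt that are illustrative, not from the article) of decoding text back out of a KV-cache: the same model that built the cache can simply keep generating from it, recovering the natural-language continuation that would otherwise have been exchanged. Whether a *different* downstream model can do this faithfully with a cache it received via DroidSpeak is exactly the open question in the comment above.

```python
# Sketch: build a KV-cache for a prefix, then decode text from the cache alone.
# Assumptions: "gpt2" and the prompt are placeholders; any causal LM would do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Step-by-step reasoning:"  # hypothetical prefix
ids = tok(prompt, return_tensors="pt").input_ids

# First pass: run the prefix once and keep the KV-cache
# (this is the kind of state DroidSpeak would hand to the next model).
with torch.no_grad():
    out = model(ids, use_cache=True)
cache = out.past_key_values

# Second pass: decode greedily from the cache, one token at a time,
# reading out the text that would otherwise have been sent as language.
next_id = out.logits[:, -1:].argmax(-1)
decoded = [next_id]
with torch.no_grad():
    for _ in range(20):
        out = model(next_id, past_key_values=cache, use_cache=True)
        cache = out.past_key_values
        next_id = out.logits[:, -1:].argmax(-1)
        decoded.append(next_id)

print(tok.decode(torch.cat(decoded, dim=1)[0]))
```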