easymuffin 2 hours ago
Providers could sign each message of a session from start to end, making the full session auditable so all inputs and outputs can be verified. Any prompts injected by humans would be visible. I'm not even sure why this isn't a thing yet (maybe it is; I never looked it up). Especially when LLMs are used for scientific work, I'd expect this to be used to make LLM chats at least replicable.
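A minimal sketch of that idea, assuming the provider holds an Ed25519 signing key: each message is hashed together with the digest of the previous message and signed, so the whole transcript is tamper-evident. The function and field names here are hypothetical, not an existing provider API.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

provider_key = Ed25519PrivateKey.generate()  # in reality this stays on the provider's side

def sign_message(prev_digest: bytes, role: str, content: str) -> dict:
    """Hash the message together with the previous digest, then sign the hash."""
    payload = json.dumps(
        {"prev": prev_digest.hex(), "role": role, "content": content},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(payload).digest()
    return {
        "role": role,
        "content": content,
        "digest": digest.hex(),
        "signature": provider_key.sign(digest).hex(),
    }

# Build an auditable transcript: each entry commits to everything before it,
# so reordering, deleting, or injecting a message breaks the chain.
transcript = []
prev = b""
for role, content in [("user", "Summarise the paper."), ("assistant", "The paper argues ...")]:
    entry = sign_message(prev, role, content)
    transcript.append(entry)
    prev = bytes.fromhex(entry["digest"])
```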
simonw 2 hours ago
Which providers do you mean, OpenAI and Anthropic? There's a little hint of this right now: the "reasoning" traces that come back in the API JSON are signed and sometimes obfuscated, with only an encrypted chunk visible to the end user. It would be pretty neat if you could request signed LLM outputs and they had a tool for confirming those signatures against the original prompts. I don't know that there's a pressing commercial argument for them to do this, though.
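That confirmation tool could be as simple as the sketch below, which verifies the hypothetical chained transcript from the comment above against the provider's published public key. The transcript format is assumed, not anything OpenAI or Anthropic ship today.

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_transcript(transcript: list[dict], public_key: Ed25519PublicKey) -> bool:
    """Recompute the hash chain and check every signature against the provider's key."""
    prev = b""
    for entry in transcript:
        payload = json.dumps(
            {"prev": prev.hex(), "role": entry["role"], "content": entry["content"]},
            sort_keys=True,
        ).encode()
        digest = hashlib.sha256(payload).digest()
        if digest.hex() != entry["digest"]:
            return False  # message content or ordering was altered
        try:
            public_key.verify(bytes.fromhex(entry["signature"]), digest)
        except InvalidSignature:
            return False  # signature does not match the provider's key
        prev = digest
    return True
```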