easymuffin 2 hours ago

Providers could sign each message of a session from start to end, making the full session auditable so that all inputs and outputs can be verified. Any prompts injected by humans would be visible. I'm not even sure why this isn't a thing yet (maybe it is, I never looked it up). Especially when LLMs are used for scientific work, I'd expect something like this to make at least the LLM chats replicable.
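Roughly what I have in mind, as a made-up sketch: hash-chain the messages and sign each one with a provider-held Ed25519 key. None of the names or formats here are a real provider API, it's just the shape of the idea:

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    provider_key = Ed25519PrivateKey.generate()  # stand-in for a provider-held key

    def sign_message(role, content, prev_hash):
        # Chain each message to the previous one so nothing can be
        # inserted, dropped, or reordered without breaking the chain.
        record = {"role": role, "content": content, "prev": prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        return {**record,
                "hash": hashlib.sha256(payload).hexdigest(),
                "sig": provider_key.sign(payload).hex()}

    transcript, prev = [], "genesis"
    for role, content in [("user", "Summarize this paper."),
                          ("assistant", "The paper argues ...")]:
        entry = sign_message(role, content, prev)
        transcript.append(entry)
        prev = entry["hash"]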

simonw 2 hours ago | parent [-]

Which providers do you mean, OpenAI and Anthropic?

There's a little hint of this right now in that the "reasoning" traces that come back in the JSON responses are signed and sometimes obfuscated, with only an encrypted chunk visible to the end user.

It would actually be pretty neat if you could request signed LLM outputs and they had a tool for confirming those signatures against the original prompts. I don't know that there's a pressing commercial argument for them doing this though.
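Assuming a format like the hash-chained sketch upthread and a published provider public key, the confirmation tool wouldn't need to do much more than replay the chain. Again, purely hypothetical:

    import hashlib, json
    from cryptography.exceptions import InvalidSignature

    def verify_transcript(transcript, provider_public_key):
        prev = "genesis"
        for entry in transcript:
            record = {"role": entry["role"], "content": entry["content"],
                      "prev": entry["prev"]}
            payload = json.dumps(record, sort_keys=True).encode()
            if entry["prev"] != prev or \
               hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False  # chain broken: a message was altered or removed
            try:
                provider_public_key.verify(bytes.fromhex(entry["sig"]), payload)
            except InvalidSignature:
                return False  # payload was not signed by the provider's key
            prev = entry["hash"]
        return True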

easymuffin 26 minutes ago | parent | next [-]

Yeah, I was thinking about those major providers, or really any LLM API provider. I've heard about the reasoning traces, and I can guess why parts are obfuscated, but I think they could still offer an option to verify the integrity of a chat from start to end. That way, claims like "the AI came up with this", made so often in the context of moltbook, could easily be verified or dismissed.

The commercial argument would be exactly that ability to verify a full chat. It would have prevented the whole moltbook fiasco IMO (the claims at least, not the security issues lol). I really like the session export feature from Pi; something like that, signed by the provider, would let you fully verify the chat session, all human messages and all LLM messages.
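The signed export itself could be as simple as a detached signature over the whole session file. Continuing the made-up example from upthread (reusing its transcript and provider_key, neither of which is a real API):

    import json

    # Hypothetical export: one detached signature over the whole session
    # file, so anyone holding the provider's public key can check it.
    export = json.dumps({"session_id": "abc123", "messages": transcript},
                        sort_keys=True).encode()
    detached_sig = provider_key.sign(export)

    # A third party re-runs the check; verify() raises InvalidSignature
    # if anything in the export was tampered with.
    provider_key.public_key().verify(detached_sig, export)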
