easymuffin 2 hours ago
Yeah, I was thinking about those major providers, or basically any LLM API provider. I've heard about the reasoning traces, and I can see why parts are obfuscated, but they could still offer an option to verify the integrity of a chat from start to end, so claims like "AI came up with this", made so often in the context of moltbook, could easily be verified or dismissed. The commercial argument would be exactly that: the ability to verify a full chat. This would have prevented the whole moltbook fiasco IMO (the claims at least, not the security issues lol). I really like the session export feature from Pi; something like that, signed by the provider, would let you fully verify the chat session, all human messages and all LLM messages.
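To make the idea concrete, here's a minimal sketch of how verifying such a signed export could work, assuming the provider signed a canonical serialization of the full ordered message list and published a verification key. The export format, field names, and key distribution are all assumptions for illustration, not any provider's actual API.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    def verify_chat_export(export: dict, provider_pubkey_bytes: bytes) -> bool:
        """Return True if the provider's signature covers the full ordered chat."""
        # Canonicalize exactly the fields that matter: role, content, and order.
        canonical = json.dumps(
            [{"role": m["role"], "content": m["content"]} for m in export["messages"]],
            separators=(",", ":"),
            sort_keys=True,
        ).encode("utf-8")
        pubkey = Ed25519PublicKey.from_public_bytes(provider_pubkey_bytes)
        try:
            # Hypothetical format: signature stored as hex alongside the messages.
            pubkey.verify(bytes.fromhex(export["signature"]), canonical)
            return True
        except InvalidSignature:
            return False

Anyone could then re-run verification on a published transcript: if a single human or LLM message was edited, reordered, or dropped, the signature check fails, which is exactly what would settle the "the AI said this" claims either way.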