mentalgear 15 hours ago
This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed. Without this, I don't see how we (the general population) can maintain any control over, or even understanding of, these ever larger and more opaque LLM-based long-inference "AI" systems. Without transparency, Big Tech, autocrats, and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.
moffkalast 10 hours ago
You've answered your own question as to why many people will want this approach gone entirely.
turnsout 11 hours ago
I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people take a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.
SilverElfin 7 hours ago
At the very least, we need to know what training data goes into each AI model. Maybe there needs to be a third-party company that audits models and publishes transparency reports, so that even with proprietary models there are some checks and balances.