mentalgear 15 hours ago

This is what the future of "AI" has to look like: fully traceable inference steps that can be inspected and adjusted if needed.

Without this, I don't see how we (the general population) can maintain any control over - or even understanding of - these increasingly large and opaque LLM-based long-inference "AI" systems.

Without transparency, Big Tech, autocrats and eventually the "AI" itself (whether "self-aware" or not) will do whatever they like with us.

moffkalast 10 hours ago | parent | next [-]

You've answered your own question as to why many people will want this approach gone entirely.

Imustaskforhelp 2 hours ago | parent [-]

I really like answers like yours; they're clever and, in my opinion, maybe a bit true as well.

That said, I think there is still a lot the public can do, and raising awareness about these issues could help too.

turnsout 11 hours ago | parent | prev | next [-]

I agree transparency is great. But making the response inspectable and adjustable is a huge UI/UX challenge. It's good to see people take a stab at it. I hope there's a lot more iteration in this area, because there's still a long way to go.

lionkor 10 hours ago | parent [-]

If I give you tens of billions of dollars, like, wired to your personal bank account, do you think you could figure it out given a decade or two?

turnsout 8 hours ago | parent [-]

Yes! I think that would do it. But is anyone out there committing tens of billions of dollars to traceable AI?

SilverElfin 7 hours ago | parent | prev | next [-]

At the very least, we need to know what training data goes into each AI model. Maybe there needs to be a third-party company that performs audits and provides transparency reports, so that even with proprietary models there are some checks and balances.
