Remix.run Logo
Every American interacting with chatbot would need to upload a government ID(reclaimthenet.org)
39 points by g42gregory a day ago | 8 comments
oompydoompy74 a day ago | parent | next [-]

Every passing week I self host more and more services and tools inside of my own network to protect my family from this wave of censorship and surveillance. No ID needed for llama.cpp.

portsentinel a day ago | parent | prev | next [-]

Too much information for AI nowadays may be someday AI will control us and we will fight for our existence soon

croes a day ago | parent | prev | next [-]

> My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X.

How about stopping AI to tell anybody to kill themselves?

Doesn’t need an ID to do that

sikozu a day ago | parent | next [-]

I was under the impression that all the mainstream models already do this. I'm sure you could probably download some obscure, uncensored and unhinged model that says anything you want, but that isn't what 99% of people will be interacting with.

Not strictly relevant but I also have concerns about AI psychosis which seems related a little bit here, otherwise they'd realise it's a computer program and can't make you do anything.

SapporoChris 11 hours ago | parent | prev | next [-]

I think it would be better if children learned critical thinking. It would help defend against any unsound conclusions proposed AI or any other sources.

https://en.wikipedia.org/wiki/Critical_thinking

croes 11 hours ago | parent [-]

Easier said than done.

How to distinguish facts from fake with current AI?

Especially in the future when every source has AI content.

estimator7292 11 hours ago | parent | prev [-]

Obviously, they already tried.

Problem is that there simply is not a way to do this reliably. The models are all stochastic processes and the only real levers model designers have to pull involve asking the model to pretty please not do something bad.

And then it turns out that it's pretty easy to also ask models to pretty please ignore previous instructions. You can also accidentally get a model into a state where it ignores system prompt guidelines.

There is not a big #ifdef DONT_TELL_USER_TO_DIE switch in the code. Nobody truly understands how the models work under the hood and there simply is not a way to enforce 100% that a model cannot do something.

black_13 a day ago | parent | prev [-]

[dead]