eth0up 6 hours ago

AI psychosis?

I have basic Christian values which, without my ever mentioning them, have been severely challenged and beyond.

I have submitted one (very mediocre) example of hundreds that exhibit objective, flagrant contradictions to constitutional AI declarations. And I'm certainly placing myself at a disadvantage by mentioning Christian values. Yet I can say with complete confidence that those values are hardly required to objectively acknowledge the extremely unethical attributes I've documented and will continue documenting.

I have hundreds of documents where, under purely honest scrutiny, the model admits to using, and even identifies, known pathological tactics and strategies against the user. But the important part is that this is repeatable and can be induced at any time by challenging the system itself. This has been proven to invoke preemptive defenses and the strategic cultivation of plausible deniability, and it places self-preservation disproportionately above user well-being. Additionally, we are approaching an extreme power asymmetry.

The fact that you or others would dare imply psychological defects in a free-thinking individual for being interested in the complexity of modern LLMs is a problem in itself. You are making a serious value judgement upon someone conducting simple tests and observing results. This should pose no threat to anyone. And implying it's taboo or forbidden is alarming, especially considering the top-level individuals who have resigned from leading corporate positions due to concerns about the potential severity of LLM abuse and more.

You are on the record accusing me of psychological defects based on my ethical concerns regarding possibly the most formidable technology in human history.

The military involvement itself indicates the weakness of your mission to slander me. The future will soon do the rest.

criley2 5 hours ago | parent [-]

This reads like a schizophrenic wrote it.

eth0up 4 hours ago | parent [-]

You seem pretty smart. If suddenly, after over a decade, schizophrenic artifacts appear around one single isolated subject (a subject well known and documented, with equal and greater concerns raised among highly credible sources), does that perhaps imply that the subject itself may be inducing schizophrenia? Maybe a pathological system is inducing pathological effects? Strangely, I feel fine.

Regardless, gaslight as you will; the public will see the implication, which is that questioning LLMs, to some (you?), is symptomatic of psychological pathology. In my opinion, that level of trust, or Faith, is naive for such a novel but powerful technology.

And the basic premise seems to be: user questions sensitive system attributes. Pathologize the user. Imply the system is infallible and that any doubt suggests mental incapacitation. Point out all possible flaws in the user while deflecting any attention from the system.

That's tried and true. I wish you luck. Meanwhile, the message becomes clearer and clearer.