gehwartzen 5 hours ago

Well, we teach kids not to yell "Fire!" in a crowded theatre or "N***!" at their neighbor. We also teach our industrial machines to distinguish between fingers and bolts, our cars not to say "make a left turn now" when on a bridge, etc.

rudhdb773b 5 hours ago | parent [-]

The critical point is who the "we" is.

Is "we" the parents teaching their children their own unique values, or is the "we" a government or corporation forcing one set of values on all children.

Why not encourage the users of AI to use a Safety.md (populated with some reasonable but optional defaults)?

dminik 4 hours ago | parent [-]

There's nothing a meaningless document can do when the AI is not aligned in the first place.

lupire 2 hours ago | parent [-]

"alignment" is the computer version for (philosophical not medical) "consciousness", a totally subjective, immeasurable concept.

dminik 33 minutes ago | parent [-]

I think you're misunderstanding the term "alignment". Really, you could replace "aligned" with "working" and "misaligned" with "broken".

A washing machine has one goal, to wash your clothes. A washing machine that does not wash your clothes is broken.

An AI system has some goal. A target acquisition AI system might be tasked with picking out enemies and friendlies from a camera feed. A system that does so reliably is working (aligned); a system that doesn't is broken (misaligned). There's no moral or philosophical angle necessary if your goal doesn't already include one. Aligned doesn't mean good, and misaligned doesn't mean evil.

The problem comes when your goal includes moral, ethical and philosophical judgements.