concinds 20 hours ago

> To trust these AI models with decisions that impact our lives and livelihoods, we want the AI models’ opinions and beliefs to closely and reliably match with our opinions and beliefs.

No, I don't. It's a fun demo, but for the examples they give ("who gets a job, who gets a loan"), you have to run them on the actual task, gather a big sample size of their outputs and judgments, and measure them against well-defined objective criteria.

Who they would vote for is supremely irrelevant. If you want to assess a carpenter's competence you don't ask him whether he prefers cats or dogs.
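
The evaluation this comment describes (run the model on the actual task, gather a large sample of judgments, score them against objective criteria) can be sketched as follows. This is a minimal illustration with synthetic data; `model_decision` is a hypothetical stand-in for a real LLM call, and the "repaid" field stands in for whatever ground-truth criterion the deployer defines.

```python
# Hypothetical stand-in for an LLM judging loan applications; in practice
# this would prompt a real model with the application text.
def model_decision(application):
    return application["income"] >= 3 * application["loan_payment"]

def evaluate(model, applications):
    """Fraction of cases where the model's judgment matches the
    objective outcome, computed over a sample of real tasks."""
    correct = sum(model(app) == app["repaid"] for app in applications)
    return correct / len(applications)

# Synthetic sample data, for illustration only; a real audit would need
# a large sample to get a reliable estimate.
sample = [
    {"income": 5000, "loan_payment": 1000, "repaid": True},
    {"income": 2000, "loan_payment": 1500, "repaid": False},
    {"income": 4000, "loan_payment": 1000, "repaid": True},
]
accuracy = evaluate(model_decision, sample)
```

The point of the sketch: the metric is defined over task outputs, not over the model's stated opinions.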

godelski 16 hours ago | parent | next [-]

> measure them against well-defined objective criteria

If we had well-defined objective criteria, then the alignment issue would effectively not exist.

zuhsetaqi 9 hours ago | parent | prev | next [-]

> measure them against well-defined objective criteria

Who gets to define the objective criteria?

shaky-carrousel 18 hours ago | parent | prev | next [-]

It's an awful demo. For a simple quiz, it repeatedly recomputes the same answers by making 27 calls to LLMs per step instead of caching results. It's as despicable as a live feed of baby seals drowning in crude oil; an almost perfect metaphor for needless, anti-environmental compute waste.
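
The caching this comment calls for is straightforward to add. A minimal sketch, assuming a hypothetical `llm_answer` function standing in for the expensive model call; `functools.lru_cache` memoizes it so repeated quiz steps reuse earlier answers instead of re-querying every model.

```python
import functools

calls = {"n": 0}  # counter to demonstrate how many real calls happen

# Hypothetical stand-in for an expensive LLM API call.
def llm_answer(question):
    calls["n"] += 1
    return f"answer to: {question}"

# Cache results keyed by the question, so recomputing a quiz step
# does not trigger another round of model calls.
@functools.lru_cache(maxsize=None)
def cached_llm_answer(question):
    return llm_answer(question)

cached_llm_answer("Who should get the loan?")
cached_llm_answer("Who should get the loan?")  # served from the cache
```

For a real multi-model setup the cache key would also include the model name, but the principle is the same: identical (model, prompt) pairs should be computed once.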

Herring 19 hours ago | parent | prev [-]

Psychological research (Carney et al., 2008) suggests that liberals score higher on "Openness to Experience" (a Big Five personality trait). This trait correlates with a preference for novelty, ambiguity, and critical inquiry.

For a carpenter, maybe that's not so important. But if you're running a startup, working in academia, or collaborating with people from various countries, you might prefer someone who scores highly on openness.

binary132 2 hours ago | parent [-]

But an LLM is not a person. It's a stochastic parrot. This crazy anthropomorphizing has got to stop.

stevenalowe 4 minutes ago | parent [-]

Yeah, ChatGPT says they really hate that!