bunderbunder 12 hours ago

Humans flip-flop all the time. This is a major reason why the Myers-Briggs Type Indicator does such a poor job of assigning the same person the same Myers-Briggs type on successive tests.

It can be difficult to observe this fact in practice because, unlike with an LLM, you can't just ask a human the exact same question three times in five seconds and get three different answers; we have memory. But, as someone who works with human-labeled data, it's something I have to contend with on a daily basis. For the things I'm working on, if you give the same annotator the same thing to label two different times, spaced far enough apart for them to forget that they have seen it before, the chance of them making the same call both times is only about 75%. If I do that with a prompted LLM annotator, I'm used to seeing more like 85%, and for some models you can get even better consistency than that with the right conditions and enough time spent fussing with the prompt.
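
(For concreteness: the consistency figures I'm quoting are plain test-retest agreement, i.e. the fraction of repeated items that get the same label both times. A minimal sketch in Python, with made-up item IDs and labels, would look something like this:)

    # Test-retest agreement: fraction of repeated items labeled the same
    # way on both passes. Item IDs and labels are invented for the example.
    def repeat_agreement(first_pass, second_pass):
        shared = set(first_pass) & set(second_pass)
        if not shared:
            return float("nan")
        same = sum(1 for item in shared if first_pass[item] == second_pass[item])
        return same / len(shared)

    labels_t1 = {"doc1": "spam", "doc2": "ham",  "doc3": "spam", "doc4": "ham"}
    labels_t2 = {"doc1": "spam", "doc2": "spam", "doc3": "spam", "doc4": "ham"}
    print(repeat_agreement(labels_t1, labels_t2))  # 0.75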

I still prefer the human labels when I can afford them because LLM labeling has plenty of other problems. But being more flip-floppy than humans is not one that I have been able to empirically observe.

Alupis 11 hours ago

We're not talking about labeling data, though - we're talking about understanding case law, statutory law, facts, balancing conflicting opinions, arguments, a judge's preconceived notions, experiences, beliefs, etc. - many of which are assembled over an entire career.

Those things, I'd argue, are far less likely to change if you ask the same judge over and over. I think you can observe this in reality by considering people's political opinions - which can drift over time but typically remain similar for long durations (or a lifetime).

In real life, we usually don't ask the same judge to remake a ruling over and over - our closest analog is probably a judge's ruling/opinion history, which doesn't change nearly as much as an LLM's "opinion" on something. This is how we label SCOTUS Justices, for example, as "Originalist", etc.

Also, unlike with a human, you can radically change an LLM's output by just ever-so-slightly altering the input. While humans aren't above changing their minds based on new facts, they are unlikely to take the opposite position just because you reworded the same argument.

bunderbunder 9 hours ago

I think that gets back to the whole memory thing. A person is unlikely to forget those kinds of decisions.

But there has been research indicating that judges' rulings vary with the time of day, in a way that suggests that, if you could construct such an experiment, the same judge given the same case might rule very differently depending on whether you present it in the morning or the afternoon. Judges tend, for example, to hand out significantly harsher penalties toward the end of the work day.