AJ007 10 hours ago

This whole thing is silly, LLMs can automate reference validation.
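Even without an LLM, a mechanical first pass catches the obvious fabrications: extract citation strings and check each against a trusted index. A minimal sketch in Python, where `KNOWN_CASES` is a hypothetical stand-in for a real citator database:

```python
import re

# Hypothetical stand-in for a real citator database / court records index.
KNOWN_CASES = {
    "410 U.S. 113",   # Roe v. Wade
    "347 U.S. 483",   # Brown v. Board of Education
}

# Matches U.S. Reports citations of the form "410 U.S. 113".
CITATION_RE = re.compile(r"\b\d+\s+U\.S\.\s+\d+\b")

def find_suspect_citations(text):
    """Return citations found in `text` that are absent from the trusted index."""
    return [c for c in CITATION_RE.findall(text) if c not in KNOWN_CASES]
```

So a filing citing "999 U.S. 999" gets flagged immediately; the harder problem (a real case cited for a proposition it doesn't support) still needs a human or a verified-retrieval step.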

If someone is a lawyer, accountant, doctor, teacher, surgeon, engineer, etc., and is regurgitating answers that were pumped out with GPT-5-extra-low or whatever mediocre throttled model they are using, they should just be fired and de-credentialed. Right now this is easy.

The real problem is ahead: 99.999% of future content will be made using generative AI. For many people using Facebook, Instagram, TikTok, or some other non-sequential, engagement-weighted feed, 50%+ of the content they consume today is fake. As that stuff spreads into modern culture it's going to be an endless battle to keep it out of outlets that should not be publishing fake content (e.g. the New York Times or Wall Street Journal; excluding scientific journals, which seem to have abandoned validation and basic statistics a long time ago).

Much of the future value and profit margins might just be in valid data?

raincole 9 hours ago | parent | next [-]

> Right now this is easy.

Easy? In the US you need House impeachment to fire a judge. In some countries judges are completely immune unless they are convicted of crimes.

mminer237 6 hours ago | parent | next [-]

To fire a federal judge. Local judges, which are the vast majority, can be fired by their colleagues or replaced in elections.

voidUpdate 9 hours ago | parent | prev [-]

Do you need impeachment to fire a lawyer, accountant, doctor, teacher, surgeon or engineer?

raincole 9 hours ago | parent [-]

Nope, and the article is about a judge. What's the point of incentivizing lawyers to carefully verify their references when they know the judge has no incentive to read them and can just make shit up anyway?

miltonlost 9 hours ago | parent | prev [-]

> This whole thing is silly, LLMs can automate reference validation.

Can they though with 100% accuracy and no hallucinations? Wouldn't you still need to validate that they validated correctly?