tarr11 3 hours ago

The cool thing about the current generation of AI tools is how easy it is to uncover bias or an agenda in an article like this.

Paste The Verge article text into your favorite AI tool and ask for an analysis.

Make sure to ask it to read the source Gallup data that this article leans on and compare the conclusions drawn.
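That workflow can be sketched as a small script. This is a minimal, hypothetical sketch (the function name and placeholder strings are my own, not from any comment); it just assembles the prompt you would paste into your chat tool of choice:

```python
# Hypothetical sketch: combine an article and the source data it cites
# into one prompt asking an LLM to compare the two. Paste the result
# into whichever AI tool you use.

def build_bias_check_prompt(article_text: str, source_data: str) -> str:
    """Return a prompt asking for a bias analysis of article_text,
    checked against the cited source_data."""
    return (
        "Analyze the following article for bias or an agenda. "
        "Then read the source poll data it relies on and note any "
        "conclusions the article draws that the data does not support.\n\n"
        "=== ARTICLE ===\n"
        f"{article_text}\n\n"
        "=== SOURCE DATA ===\n"
        f"{source_data}\n"
    )

# Placeholder inputs; substitute the real article and poll summary.
prompt = build_bias_check_prompt(
    article_text="Gen Z is souring on AI...",
    source_data="Gallup: 79% expressed concern that AI makes people lazier...",
)
print(prompt[:40])
```

The point of including the source data verbatim is that the model then compares the article against the poll itself, rather than against its own (possibly stale or biased) recollection of it.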

sonofhans 2 hours ago | parent | next [-]

The cool thing about critical reasoning is how easy it is to uncover bias or agenda in an article like this.

I suspect that as you rely more on a robot for this, your own skills will atrophy.

tarr11 an hour ago | parent [-]

This article is filled with emotional triggers designed to drive engagement. Even the title. It can be hard to separate those things from objective facts.

Putting an LLM in front of it helps me focus on the facts.

There are also too many things to read. My default before LLMs would have been to ignore this article.

At least now I learned some things (mostly about the Gallup poll, which included the source data).

I do think some people will outsource critical thinking to LLMs, but they can also amplify critical thinking by doing a lot of the filtering and organizing, letting me focus on the things I think are important.

nozzlegear 22 minutes ago | parent [-]

> This article is filled with emotional triggers designed to drive engagement. Even the title. It can be hard to separate those things from objective facts.

> Putting an llm in front of it helps me focus on the facts.

This argument reminds me of one of Ted Chiang's short stories about "lookism," which (iirc) was the natural tendency to prefer people who are attractive. In the story, a new technology is developed that can interact with a person's brain to "turn off" their lookism, so they just consider what a person brings to the table without their own attraction factoring in.

I won't spoil the story, but a little arms race develops in technology that "turns off" natural human reactions to things like attraction and emotion in speech, so that users won't be swayed by them in advertising, political campaigns, or anything else that could possibly have an agenda. People using the technology are described as seeming highly autistic, unable to perceive any human emotion, so that they can interpret just a person's intent without being manipulated by the speaker's underlying motivations.

It's an interesting story, and your use of LLMs to cut the "emotional triggers" out of an article and get just the "objective facts" reminds me of it.

https://www.goodreads.com/book/show/58050245

bccdee 2 hours ago | parent | prev | next [-]

And then how do you uncover bias in your chatbot? Do you ask it to analyze its own analysis? For that matter, what about the bias in your prompt, which LLMs tend to accept uncritically? Do your own preconceived opinions bias you against the argument made in the article? Are you using a chatbot to think critically about the article, or to avoid thinking critically about your own beliefs?

> At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.

Perhaps you should take a cue from these surveyees and do your own thinking.

not_wyoming 2 hours ago | parent [-]

I actually did this: I plugged The Verge article into Claude and got the following critique of its biases:

> The article accurately cites real Gallup data but selectively omits findings that complicate its "backlash" narrative — most notably that curiosity is Gen Z's single most common emotion toward AI, and that daily users remain substantially more hopeful and excited than the aggregate figures suggest. The 79% "laziness" concern and declining hope figures are presented as evidence of generational rejection, when the researchers themselves describe what they found as "deep ambivalence." *In short, the article uses real numbers to tell a cleaner, more oppositional story than the underlying polling actually supports.*

I then put that Claude critique back into Claude, asked it to analyze the critique for bias and agendas, and got this:

> The critique accurately catches real flaws in The Verge article — particularly the omission of "curiosity" as Gen Z's top emotion and the failure to distinguish between heavy users (who are more positive) and non-users (who drive most of the negativity). However, *the critique has its own directional bias, consistently framing every correction in ways that soften the negative trend, while ignoring data that cuts the other way — like the sharp positivity decline even among daily users, and the near-majority of Gen Z workers who see AI as a net negative in the workplace.* Both pieces are selectively using the same real data to tell opposite stories; the Gallup findings themselves are more nuanced and more negative than the critique allows.

So according to Claude, Claude is biased in how it describes The Verge as biased.

LLMs are breakthrough technologies. The AI products we have today are SaaS products built by companies doing everything they can to find people who will pay for them. Those are very, very different things.

sailfast an hour ago | parent | next [-]

> LLMs are breakthrough technologies. The AI products we have today are SaaS products built by companies doing everything they can to find people who will pay for them. Very, very different things.

THIS. ALL. DAY.

catcowcostume an hour ago | parent | prev [-]

So basically sycophantic LLM behavior. Nothing new, then.

ericjmorey 2 hours ago | parent | prev | next [-]

Would this show the bias of Gallup, of The Verge, or of the AI's training data? How would you determine which?

not_wyoming 2 hours ago | parent | prev | next [-]

> The cool thing about the current generation of AI tools is how easy it is to uncover bias or an agenda in an article like this.

This is only true if you assume that an AI tool is itself unbiased. I'm not sure how anyone can earnestly believe AI tools are unbiased after Grok's MechaHitler episode [0], unless they just aren't giving it much critical thought.

0 - https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...

catcowcostume an hour ago | parent | prev [-]

So are you outsourcing your thinking? You've just proved the article's point.