haswell a day ago

> It may destroy the research ability of the average person

In this “post-truth” era, I think this is deeply concerning and has the potential to far outweigh the benefits in the long run. People are already poor at critically evaluating information, and that is already having major real-world impact.

I say this as someone who has personally found LLMs to be a learning multiplier. But then I see how many people treat these tools like some kind of oracle and start to worry.

OtherShrezzing a day ago

I have some incomplete thoughts that the rise in LLMs is in part driven by society's willingness to accept half-accuracies in a post-truth world.

If the society of 2005 had the technology of 2025, I expect OpenAI, Anthropic, etc. would have a much harder time convincing people that “convincingly incorrect” systems justify sending Nvidia to a $1tn+ valuation.

keiferski a day ago

In my experience, the people who are that influenced by AI answers weren’t exactly doing deep research into topics beforehand. At the very least, an AI tool allows for some questioning and pushback, which is a step up from the historically one-directional flow of information.
