keiferski a day ago

Seems like a real lack of nuance in these types of conversations. Personally I feel like AI has both directly and indirectly helped me improve my intelligence. Directly by serving as an instantaneous resource for asking questions (something Google doesn’t do well anymore), making it easier to find learning materials, and easily reorganizing information into formats more amenable to learning. Indirectly by making it easier to create learning assets like images, which are useful for applying the picture superiority effect, visualizing information, etc.

At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.

haswell a day ago | parent | next [-]

> It may destroy the research ability of the average person

In this “post-truth” era, I think this is deeply concerning and has the potential to far outweigh the benefits in the long run. People are already not good at critically evaluating information, and this is already leading to major real-world impact.

I say this as someone who has personally found LLMs to be a learning multiplier. But then I see how many people treat these tools like some kind of oracle and start to worry.

OtherShrezzing a day ago | parent | next [-]

I have some incomplete thoughts that the rise in LLMs is in part driven by society's willingness to accept half-accuracies in a post-truth world.

If the societies of 2005 had the technologies of 2025, I expect OpenAI/Anthropic etc would have a much more challenging time convincing people that "convincingly incorrect" systems should send Nvidia to a $1tn+ valuation.

keiferski a day ago | parent | prev [-]

I guess in my experience the people who are that influenced by AI answers…weren’t exactly doing deep research into topics beforehand. At the very least an AI tool allows for some questioning and pushback, which is a step up from the historically one-directional flow of information.

latexr a day ago | parent | prev | next [-]

> At the end of the day, it is a tool and it depends on how you use it. It may destroy the research ability of the average person, but for the power user it is an intelligence accelerator IMO.

You live on a planet with billions of other humans. Maybe you are using LLMs carefully and always verifying outputs, but most people definitely are not, and it is naive to believe that is only their problem. It will soon be your problem too, because what those people do will eventually come back to bite you.

An unrelated quote from John Green feels appropriate:

> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.

One day you’ll be deeply affected by a code bug or clerical decision caused by someone who blindly accepted the words of whatever LLM they were using. An LLM which can itself be created with specific bias, like denying the existence of a country, rejecting scientific consensus, or simply trying to sell you a product.

CompoundEyes a day ago | parent | prev | next [-]

I agree with the power user view too. AI wouldn’t exist if it weren’t for the heightened tendency of some people to ask why, how, and what if, and to reinterpret, pushing arts, science, and technology forward. We don’t need everyone to do that. I also think it can help us solve problems that are on the edge of being “unstuck,” from which new ones requiring human ingenuity will emerge. Let’s spend our time solving those novel problems for which AI has no pattern to apply.
